nlm: Non-Linear Minimization

Description

  This function carries out a minimization of the function f using a
  Newton-type algorithm.  See the references for details.

Usage

  nlm(f, p, ..., hessian = FALSE, typsize = rep(1, length(p)),
      fscale = 1, print.level = 0, ndigit = 12, gradtol = 1e-6,
      stepmax = max(1000 * sqrt(sum((p/typsize)^2)), 1000),
      steptol = 1e-6, iterlim = 100, check.analyticals = TRUE)

Arguments

  f: the function to be minimized, returning a single numeric value.
    This should be a function whose first argument is a vector of the
    length of p, followed by any other arguments specified by
    the ... argument.    If the function value has an attribute called gradient or
    both gradient and hessian attributes, these will be
    used in the calculation of updated parameter values.  Otherwise,
    numerical derivatives are used.  deriv returns a function with a
    suitable gradient attribute and optionally a hessian attribute
    (see the sketch near the end of the Examples below).
  p: starting parameter values for the minimization.

  ...: additional arguments to be passed to f.

  hessian: if TRUE, the hessian of f at the minimum is returned.

  typsize: an estimate of the size of each parameter at the minimum
    (see the scaling sketch in the Examples below).

  fscale: an estimate of the size of f at the minimum.

  print.level: the level of printing done during the minimization
    process.  The default value of 0 means that no printing occurs, a
    value of 1 means that initial and final details are printed and a
    value of 2 means that full tracing information is printed.

  ndigit: the number of significant digits in the function f.

  gradtol: a positive scalar giving the tolerance at which the scaled
    gradient is considered close enough to zero to terminate the
    algorithm.  The scaled gradient is a measure of the relative change
    in f in each direction p[i] divided by the relative change in p[i].

  stepmax: a positive scalar giving the maximum allowable scaled step
    length.  stepmax is used to prevent steps which
    would cause the optimization function to overflow, to prevent the
    algorithm from leaving the area of interest in parameter space, or to
    detect divergence in the algorithm. stepmax would be chosen
    small enough to prevent the first two of these occurrences, but should
    be larger than any anticipated reasonable step.

  steptol: a positive scalar providing the minimum allowable relative
    step length.

  iterlim: a positive integer specifying the maximum number of
    iterations to be performed before the program is terminated.

  check.analyticals: a logical scalar specifying whether the analytic
    gradients and Hessians, if supplied, should be checked against
    numerical derivatives at the initial parameter values.  This can
    help detect incorrectly formulated gradients or Hessians.

Value

  A list containing the following components (see the sketch at the
  end of the Examples below):

  minimum: the value of the estimated minimum of f.

  estimate: the point at which the minimum value of f is obtained.

  gradient: the gradient at the estimated minimum of f.

  hessian: the hessian at the estimated minimum of f (if requested).

  code: an integer indicating why the optimization process terminated.

    1: relative gradient is close to zero, current iterate is probably
       solution.

    2: successive iterates within tolerance, current iterate is
       probably solution.

    3: last global step failed to locate a point lower than estimate.
       Either estimate is an approximate local minimum of the function
       or steptol is too small.

    4: iteration limit exceeded.

    5: maximum step size stepmax exceeded five consecutive times.
       Either the function is unbounded below, becomes asymptotic to a
       finite value from above in some direction or stepmax is too
       small.

  iterations: the number of iterations performed.

Details

  Note that arguments after ... must be matched exactly.

  If a gradient or hessian is supplied but evaluates to the wrong mode
  or length, it will be ignored if check.analyticals = TRUE (the
  default) with a warning.  The hessian is not even checked unless the
  gradient is present and passes the sanity checks.
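
  As a minimal sketch of this behaviour (the objective below is made
  up for illustration), a gradient of the wrong length is ignored with
  a warning, and numerical derivatives are used instead:

  fbad <- function(x) {
      res <- sum((x - 1)^2)
      attr(res, "gradient") <- 0   # wrong length: should have length(x)
      res
  }
  nlm(fbad, c(10, 10))             # warns and ignores the bad gradient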
  From the three methods available in the original source, we always
  use method "1", which is line search.

  The functions supplied should always return finite (including not NA
  and not NaN) values: for the function value itself, non-finite values
  are replaced by the maximum positive value with a warning.
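
  As a sketch (not from the original page), a common pattern is to
  guard the objective so it always returns a finite value; the data
  and the penalty constant below are made up:

  fsafe <- function(p) {
      ## negative log-likelihood; NaN whenever sd = p[2] goes negative
      val <- -sum(dnorm(c(1.2, 0.8, 1.1), mean = p[1], sd = p[2], log = TRUE))
      if (!is.finite(val)) val <- 1e10   # large finite penalty instead
      val
  }
  nlm(fsafe, c(0, 1))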
References

  Schnabel, R. B., Koontz, J. E. and Weiss, B. E. (1985). A modular
  system of algorithms for unconstrained minimization. ACM Transactions
  on Mathematical Software, 11, 419-440.
See Also

  optim and nlminb.

  constrOptim for constrained optimization, optimize for
  one-dimensional minimization and uniroot for root finding.
  deriv to calculate analytical derivatives.

  For nonlinear regression, nls may be better.
Examples

## minimize a quadratic: the minimum is at x = (1, 2, ..., n)
f <- function(x) sum((x-1:length(x))^2)
nlm(f, c(10,10))
nlm(f, c(10,10), print.level = 2)
utils::str(nlm(f, c(5), hessian = TRUE))
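
## A sketch, not from the original examples: typsize hints at the
## expected magnitude of each parameter in a badly scaled problem.
fb <- function(x) (x[1] - 1)^2 + (x[2]/1000 - 2)^2
nlm(fb, c(5, 5000), typsize = c(1, 1000))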
## additional arguments are passed on to f via ...
f <- function(x, a) sum((x-a)^2)
nlm(f, c(10,10), a = c(3,5))
## supply an analytic gradient via a "gradient" attribute
f <- function(x, a)
{
    res <- sum((x-a)^2)
    attr(res, "gradient") <- 2*(x-a)
    res
}
nlm(f, c(10,10), a = c(3,5))
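
## A sketch, not from the original examples: deriv() can construct the
## "gradient" attribute automatically (the objective is made up here).
fd <- deriv(~ (x1 - 3)^2 + (x2 - 5)^2, c("x1", "x2"), function.arg = TRUE)
g <- function(p) {
    res <- fd(p[1], p[2])
    ## flatten deriv()'s 1 x 2 gradient matrix to a vector of length(p)
    attr(res, "gradient") <- as.vector(attr(res, "gradient"))
    res
}
nlm(g, c(10, 10))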
## more examples, including the use of derivatives.
## Not run: demo(nlm)
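
## A sketch, not from the original examples: inspecting the components
## described under Value.
res <- nlm(function(x) sum((x - c(1, 2))^2), c(10, 10), hessian = TRUE)
res$minimum     # value of f at the estimated minimum
res$estimate    # parameter values at the minimum
res$gradient    # gradient at the estimate (near zero at convergence)
res$hessian     # hessian at the estimate (returned since hessian = TRUE)
res$code        # 1 or 2 indicates probable convergence
res$iterations  # number of iterations performed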