The Newton-Raphson method uses the gradient and the Hessian of a function. For well-behaved functions, it is extremely accurate.
newton(
fun,
coefs,
trace = 0,
direction = c("min", "max"),
tol = sqrt(.Machine$double.eps),
maxit = 500,
...
)
Value: a numeric vector containing the parameters at the optimum of the function.
fun: the function to optimize
coefs: a vector of starting values
trace: if positive or TRUE, some information about the computation is printed
direction: either "min" or "max"
tol: the tolerance used in the stopping rule
maxit: the maximum number of iterations
...: further arguments, passed to fun
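To illustrate the method itself, here is a minimal standalone sketch of a Newton-Raphson update loop. This is not the package's implementation: `newton_sketch`, its convergence test, and the separate `grad`/`hess` arguments are assumptions made for the example, whereas the documented `newton()` takes a single `fun` argument.

```r
# Minimal sketch of Newton-Raphson minimization (illustration only).
# At each iteration the coefficients move by the Newton step H^{-1} g,
# where g is the gradient and H the Hessian at the current point.
newton_sketch <- function(grad, hess, coefs,
                          tol = sqrt(.Machine$double.eps), maxit = 500) {
  for (i in seq_len(maxit)) {
    g <- grad(coefs)
    if (sqrt(sum(g^2)) < tol) break        # stop when the gradient vanishes
    coefs <- coefs - solve(hess(coefs), g) # Newton step
  }
  coefs
}

# Example: minimize f(x, y) = (x - 1)^2 + (y - 2)^2, starting from (0, 0).
grad <- function(b) c(2 * (b[1] - 1), 2 * (b[2] - 2))
hess <- function(b) diag(c(2, 2))
newton_sketch(grad, hess, c(0, 0))  # optimum at c(1, 2)
```

Because the example objective is quadratic, a single Newton step lands exactly on the optimum; this is the "extremely accurate for well-behaved functions" behaviour described above.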