grad(func, x, method="Richardson", method.args=list(), ...) ## S3 method for class 'default':
grad(func, x, method="Richardson",
method.args=list(eps=1e-4, d=0.0001,
zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE), ...)
"Richardson"
or "simple"
indicating
the method to use for the approximation.func
.
WARNING: None of these should have names matching other arguments of this function.grad
calculates a numerical approximation of the first derivative of
func at the point x. Any additional arguments in ... are also
passed to func, but the gradient is not calculated with respect
to these additional arguments.
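For example, a parameter of func can be held fixed by passing it
through ... (the function f and argument a here are illustrative,
not from the original examples):

f <- function(x, a) a * x^2     # d f / d x = 2 * a * x
grad(f, 3, a = 2)               # approximately 12; no derivative is taken w.r.t. a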
It is assumed func is a scalar valued function. If a vector x
produces a scalar result then grad returns the numerical
approximation of the gradient at the point x (which has the same
length as x).
If a vector x produces a vector result then the result must have
the same length as x, and it is assumed that this corresponds to
applying the function to each of its arguments (for example,
sin(x)). In this case grad returns the gradient at each of the
points in x (which also has the same length as x, so be careful).
An alternative for vector valued functions is provided by
jacobian, as in the sketch below.
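For example (a small sketch; the anonymous two-component function
below is illustrative, not from the original examples):

grad(sin, c(0, pi/2, pi))     # elementwise: approximately cos at each point
jacobian(function(x) c(sin(x[1]), x[1]*x[2]), c(1, 2))
## 2 x 2 matrix of partial derivatives: rows index the components of
## the function value, columns index the elements of x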
If method is "simple", the calculation is done using a simple epsilon
difference. For this case, only the element eps
of methods.args
is used. If method is "Richardson", the calculation
is done by Richardson's extrapolation (see e.g. Linfield and Penny, 1989,
or Fornberg and Sloan, 1994.)
This method should be used if accuracy, as opposed to speed, is important.
For this case,
methods.args=list(eps=1e-4, d=0.01, zero.tol=100*.Machine$double.eps,
r=4, show.details=FALSE)
are used.
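For example, the two methods can be compared against the exact
derivative of exp (a small sketch, not part of the original
examples):

grad(exp, 1, method="simple")       # simple epsilon difference
grad(exp, 1, method="Richardson")   # Richardson extrapolation, more accurate
exp(1)                              # exact value of d/dx exp(x) at x = 1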
d: gives the fraction of x to use for the initial numerical
approximation. The default means the initial approximation uses
0.0001 * x.

eps: used instead of d for elements of x which are zero (absolute
value less than zero.tol).

zero.tol: tolerance used for deciding which elements of x are
zero.

r: gives the number of Richardson improvement iterations
(repetitions with successively smaller d). The default 4 generally
provides good results, but this can be increased to 6 for improved
accuracy at the cost of more evaluations; see the example after
this list.

v: gives the reduction factor.

show.details: a logical indicating if detailed calculations should
be shown.
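Elements of method.args can be overridden individually (a minimal
sketch, assuming unspecified elements keep their default values):

grad(sin, pi/4)                            # default r = 4
grad(sin, pi/4, method.args=list(r=6))     # more Richardson iterations
cos(pi/4)                                  # exact derivative for comparison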
The general approach in the Richardson method is to iterate for r
iterations from an initial value of the interval d, reducing it by
the factor v at each step. The first order approximation to the
derivative with respect to $x_{i}$ is

$$f'_{i}(x) = \frac{f(x_{1},\ldots,x_{i}+d,\ldots,x_{n}) - f(x_{1},\ldots,x_{i}-d,\ldots,x_{n})}{2d}$$

This is repeated r times with successively smaller d, and then
Richardson extrapolation is applied.
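To make the scheme concrete, here is a minimal sketch for a scalar
function at a single point away from zero; richardson_grad is a
hypothetical helper, and numDeriv's actual grad additionally
handles vector x and the eps/zero.tol treatment of near-zero
elements:

richardson_grad <- function(f, x, d = 0.0001, r = 4, v = 2) {
  h <- d * abs(x)                  # initial interval: fraction d of x
  a <- numeric(r)
  for (i in 1:r) {                 # r central differences with shrinking h
    a[i] <- (f(x + h) - f(x - h)) / (2 * h)
    h <- h / v
  }
  for (m in 1:(r - 1)) {           # extrapolate away the O(h^2), O(h^4), ... errors
    a <- (a[-1] * v^(2 * m) - a[-length(a)]) / (v^(2 * m) - 1)
  }
  a
}
richardson_grad(sin, pi/4)         # compare with cos(pi/4) = 0.7071068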
If elements of x are near zero the multiplicative interval
calculation using d does not work, and for these elements an
additive calculation using eps is done instead. The argument
zero.tol is used to determine if an element should be considered
too close to zero. In the iterations, the interval is successively
reduced, to eventually be d/v^r, and the square of this value is
used in second derivative calculations (see genD), so the default
zero.tol=sqrt(.Machine$double.eps/7e-7) is set to ensure the
interval is bigger than .Machine$double.eps with the default d, r,
and v.
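A quick sanity check of this claim with the default values (an
illustration, not from the original examples):

d <- 0.0001; v <- 2; r <- 4
(d / v^r)^2            # 3.90625e-11, the squared final interval
.Machine$double.eps    # about 2.220446e-16, safely smaller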
References

Fornberg, B. and Sloan, D. M. (1994) "A review of pseudospectral
methods for solving partial differential equations." Acta
Numerica, 3, 203-267.
See Also

jacobian, hessian, genD, numericDeriv
Examples

grad(sin, pi)                           # derivative of sin at a scalar point
grad(sin, (0:10)*2*pi/10)               # elementwise gradient at a vector of points

func0 <- function(x){ sum(sin(x)) }     # scalar valued function of a vector
grad(func0, (0:10)*2*pi/10)

func1 <- function(x){ sin(10*x) - exp(-x) }
curve(func1, from=0, to=5)

x <- 2.04
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
c(numd1, exact, (numd1 - exact)/exact)      # numeric, exact, relative error

x <- c(1:10)
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
cbind(numd1, exact, (numd1 - exact)/exact)  # one row per element of x