# genD

##### Generate Bates and Watts D Matrix

Generate a matrix of function derivative information.

##### Keywords

- multivariate

##### Usage

```
genD(func, x, method="Richardson",
     method.args=list(), ...)

# S3 method for class 'default'
genD(func, x, method="Richardson",
     method.args=list(), ...)
```

##### Arguments

- `func`: a function whose first (vector) argument is used as the parameter vector.

- `x`: the parameter vector, i.e. the first argument to `func`.

- `method`: one of `"Richardson"` or `"simple"`, indicating the method to use for the approximation.

- `method.args`: arguments passed to the method. See `grad`. (Arguments not specified remain at their default values.)

- `...`: any additional arguments passed to `func`. WARNING: none of these should have names matching other arguments of this function.

##### Details

The derivatives are calculated numerically using Richardson improvement.
Methods "simple" and "complex" are not supported in this function.
The "Richardson" method calculates a numerical approximation of the first
and second derivatives of `func` at the point `x`.
For a scalar-valued function these are the gradient vector and
Hessian matrix. (See `grad` and `hessian`.)
For a vector-valued function the first derivative is the Jacobian matrix
(see `jacobian`).
For the Richardson method

```
method.args=list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7),
                 r=4, v=2)
```

is set as the default. See `grad` for more details on the Richardson
extrapolation parameters.
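Any subset of these defaults can be overridden through `method.args`; the unspecified entries keep their default values. As a sketch (assuming the numDeriv package is installed), increasing `r` runs more Richardson iterations, trading extra function evaluations for accuracy:

```r
library(numDeriv)

f <- function(x) sum(sin(x))   # scalar-valued example function
x0 <- c(0.5, 1.0)

# Override only r; eps, d, zero.tol, and v keep their defaults.
z <- genD(f, x0, method.args = list(r = 6))

# For a scalar-valued f, the first p columns of z$D are the gradient,
# which here should be close to cos(x0).
z$D[, 1:2]
```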

A simple approximation to the first order derivative with respect to \(x_i\) is

$$f'_{i}(x) = \frac{f(x_{1},\dots,x_{i}+d,\dots,x_{n}) - f(x_{1},\dots,x_{i}-d,\dots,x_{n})}{2d}$$

A simple approximation to the second order derivative with respect to \(x_i\) is

$$f''_{i}(x) = \frac{f(x_{1},\dots,x_{i}+d,\dots,x_{n}) - 2f(x_{1},\dots,x_{n}) + f(x_{1},\dots,x_{i}-d,\dots,x_{n})}{d^2}$$

The second order derivative with respect to \(x_i, x_j\) is

$$f''_{i,j}(x) = \frac{f(x_{1},\dots,x_{i}+d,\dots,x_{j}+d,\dots,x_{n}) - 2f(x_{1},\dots,x_{n}) + f(x_{1},\dots,x_{i}-d,\dots,x_{j}-d,\dots,x_{n})}{2d^2} - \frac{f''_{i}(x) + f''_{j}(x)}{2}$$
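These three formulas can be sketched directly in base R with a fixed step `d` (no Richardson extrapolation and no step scaling, so this is only the building block, not what `genD` actually computes). The helper names are illustrative, not part of numDeriv's API:

```r
# Central difference for the first derivative with respect to x_i.
simple_grad_i <- function(f, x, i, d = 1e-4) {
  xp <- x; xp[i] <- xp[i] + d
  xm <- x; xm[i] <- xm[i] - d
  (f(xp) - f(xm)) / (2 * d)
}

# Second derivative with respect to x_i (diagonal Hessian entry).
simple_hess_ii <- function(f, x, i, d = 1e-4) {
  xp <- x; xp[i] <- xp[i] + d
  xm <- x; xm[i] <- xm[i] - d
  (f(xp) - 2 * f(x) + f(xm)) / d^2
}

# Mixed second derivative with respect to x_i and x_j: step both
# coordinates together, then subtract the two diagonal terms.
simple_hess_ij <- function(f, x, i, j, d = 1e-4) {
  xpp <- x; xpp[c(i, j)] <- xpp[c(i, j)] + d
  xmm <- x; xmm[c(i, j)] <- xmm[c(i, j)] - d
  (f(xpp) - 2 * f(x) + f(xmm)) / (2 * d^2) -
    (simple_hess_ii(f, x, i, d) + simple_hess_ii(f, x, j, d)) / 2
}

# Example with known analytic derivatives: f = x1^2 * x2 at (2, 3),
# so df/dx1 = 2*x1*x2 = 12 and d2f/dx1dx2 = 2*x1 = 4.
f <- function(x) x[1]^2 * x[2]
x0 <- c(2, 3)
simple_grad_i(f, x0, 1)
simple_hess_ij(f, x0, 1, 2)
```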

Richardson's extrapolation is based on these formulas, with `d` being reduced
in the extrapolation iterations. In the code, `d` is scaled to accommodate
parameters of different magnitudes.

`genD` does `1 + r (N^2 + N)` evaluations of the function `f`, where `N` is
the length of `x`.
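This evaluation count can be checked empirically by wrapping the function with a counter (assuming the numDeriv package is installed). With the default `r = 4` and `N = 3`, the formula above gives `1 + 4*(9 + 3) = 49` evaluations:

```r
library(numDeriv)

calls <- 0
counted <- function(x) {
  calls <<- calls + 1          # count each evaluation
  c(x[1], x[1], x[2]^2)
}

z <- genD(counted, c(2, 2, 5))
calls                           # per the formula: 1 + r (N^2 + N) = 49
```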

##### Value

A list with elements as follows:

- `D` is a matrix of first and second order partial derivatives organized in
  the same manner as Bates and Watts: the number of rows is equal to the length
  of the result of `func`, the first p columns are the Jacobian, and the next
  p(p+1)/2 columns are the lower triangle of the second derivative (which is
  the Hessian for a scalar-valued `func`).
- `p` is the length of `x` (dimension of the parameter space).
- `f0` is the function value at the point where the matrix `D` was calculated.

The `genD` arguments `func`, `x`, `d`, `method`, and `method.args` are also
returned in the list.
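For the vector-valued function used in the Examples section (p = 3 parameters, result of length 3), the returned list can be inspected like this (assuming the numDeriv package is installed):

```r
library(numDeriv)

func <- function(x) c(x[1], x[1], x[2]^2)
z <- genD(func, c(2, 2, 5))

dim(z$D)   # 3 rows, p + p*(p+1)/2 = 3 + 6 = 9 columns
z$p        # 3
z$f0       # function value at x: c(2, 2, 4), since x[2] = 2

# The first p columns of D form the Jacobian of func at x.
jac <- z$D[, 1:z$p]
</imports>
```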

##### References

Linfield, G.R. and Penny, J.E.T. (1989) "Microcomputers in Numerical Analysis." Halsted Press.

Bates, D.M. and Watts, D. (1980) "Relative Curvature Measures of Nonlinearity." J. Royal Statistical Soc., Series B, 42:1-25.

Bates, D.M. and Watts, D. (1988) "Non-linear Regression Analysis and Its Applications." Wiley.

##### See Also

`grad`, `hessian`, `jacobian`

##### Examples

```
library(numDeriv)

func <- function(x) c(x[1], x[1], x[2]^2)
z <- genD(func, c(2, 2, 5))
```

*Documentation reproduced from package numDeriv, version 2016.8-1, License: GPL-2*