maxNR: unconstrained maximisation of a function via the Newton-Raphson method.

Usage

maxNR(fn, grad = NULL, hess = NULL, theta, print.level = 0,
      tol = 1e-06, gradtol = 1e-06, steptol = 1e-06, lambdatol = 1e-06,
      qrtol = 1e-10, iterlim = 15,
      constPar = NULL, activePar = rep(TRUE, NParam), ...)
Arguments

fn: the function to be maximised. It must return either a single number or a vector of observation-specific likelihood values; those are summed by maxNR if necessary (see the sketch after this argument list). If the parameters are out of range, fn should return NA.

grad: gradient of fn. If NULL, a numeric gradient is used. It must return either a gradient vector, or a matrix where columns correspond to individual parameters. Note that this corresponds to t(numericGradient(fn)).
tol: stopping condition. Stop if the absolute difference between successive function values is less than tol; return code=2.

gradtol: stopping condition. Stop if the norm of the gradient is less than gradtol; return code=1.

steptol: stopping/error condition. If a full Newton step (step=1) does not lead to a higher function value, the step is divided by 2 and tried again. This is repeated until step < steptol; then code=3 is returned.

lambdatol: controls whether the Hessian is treated as negative definite. If the largest eigenvalue of the Hessian is larger than -lambdatol, a suitable diagonal matrix is subtracted from the Hessian (quadratic hill-climbing).

iterlim: stopping condition. Stop after iterlim iterations; return code=4.
...: further arguments to fn, grad and hess.
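A minimal sketch of the observation-wise interface described above, reusing the exponential model from the Examples section (the names loglikObs and gradlikObs are illustrative, not part of the package):

## Sketch: fn returns one log-likelihood value per observation,
## grad returns a matrix with one column per parameter.
t <- rexp(100, 2)
loglikObs <- function(theta) log(theta) - theta*t  ## vector of length 100
gradlikObs <- function(theta) cbind(1/theta - t)   ## 100 x 1 matrix
a <- maxNR(loglikObs, grad=gradlikObs, theta=1)
summary(a)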
Value

A list with following components:

maximum: fn value at maximum (the last calculated value if not converged).

code: return code indicating why the iterations stopped (see tol, gradtol, steptol and iterlim above).

last.step: only present if code=3. A list with following components: theta0 (the last parameter value) and f0 (fn value at theta0).
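For illustration, assuming the component names above, the result can be inspected like this (loglik as defined in the Examples section):

## Sketch: inspecting the returned list
res <- maxNR(loglik, theta=1)
res$maximum               ## fn value at the maximum
res$code                  ## reason the iterations stopped
if (res$code == 3)
  res$last.step$theta0    ## parameter value before the failed step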
Details

There are two ways to keep some of the parameters constant during the optimisation. One way is to set constPar to non-NULL. The second possibility is to signal by fn which parameters are constant and to change the values of the parameter vector. For that, the value of fn may have following attributes: constPar (index vector: parameters to be treated as constants), constVal (numeric vector: the values for those constants) and newVal (new values for some of the parameters). The difference between constVal and newVal is that the latter parameters are not set to constants. If the attribute newVal is present, the new function value is allowed to be below the previous one. A sketch of this mechanism follows below.
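A hedged sketch of the second mechanism: here fn itself fixes an (artificial) second parameter at 0 through the constPar and constVal attributes; the model and values are purely illustrative:

## Sketch: signalling a constant parameter via attributes of fn
loglik2 <- function(param) {
  val <- sum(log(param[1]) - param[1]*t)  ## second parameter unused by design
  if (param[2] != 0) {
    attr(val, "constPar") <- 2   ## treat parameter 2 as a constant ...
    attr(val, "constVal") <- 0   ## ... fixed at the value 0
  }
  val
}
a <- maxNR(loglik2, theta=c(1, 1))
summary(a)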
See Also

nlm for Newton-Raphson optimisation, optim for different gradient-based optimisation methods.

Examples

## ML estimation of exponential duration model:
t <- rexp(100, 2)
loglik <- function(theta) sum(log(theta) - theta*t)
## Note the log-likelihood and gradient are summed over observations
gradlik <- function(theta) sum(1/theta - t)
hesslik <- function(theta) -100/theta^2
## Estimate with numeric gradient and hessian
a <- maxNR(loglik, theta=1, print.level=2)
summary(a)
## You would probably prefer 1/mean(t) instead ;-)
## Estimate with analytic gradient and hessian
a <- maxNR(loglik, gradlik, hesslik, theta=1)
summary(a)
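As a quick check of the comment above, the closed-form estimate of the exponential rate:

1/mean(t)  ## closed-form MLE, should be close to the maxNR estimate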