maxLik (version 1.3-2)

maxBFGS: BFGS, conjugate gradient, SANN and Nelder-Mead Maximization

Description

These functions are wrappers for optim, adding constrained optimization and fixed parameters.

Usage

maxBFGS(fn, grad=NULL, hess=NULL, start, fixed=NULL,
   control=NULL,
   constraints=NULL,
   finalHessian=TRUE,
   parscale=rep(1, length=length(start)),
   ... )

maxCG(fn, grad=NULL, hess=NULL, start, fixed=NULL, control=NULL, constraints=NULL, finalHessian=TRUE, parscale=rep(1, length=length(start)), ...)

maxSANN(fn, grad=NULL, hess=NULL, start, fixed=NULL, control=NULL, constraints=NULL, finalHessian=TRUE, parscale=rep(1, length=length(start)), ... )

maxNM(fn, grad=NULL, hess=NULL, start, fixed=NULL, control=NULL, constraints=NULL, finalHessian=TRUE, parscale=rep(1, length=length(start)), ...)

Arguments

fn
function to be maximised. Must have the parameter vector as the first argument. In order to use the numeric gradient and the BHHH method, fn must return a vector of observation-specific likelihood values; those are summed internally where necessary.
grad
gradient of fn. Must have the parameter vector as the first argument. If NULL, a numeric gradient is used (maxNM and maxSANN do not use the gradient). The gradient may return a matrix, where each row corresponds to the gradient at an individual observation.
hess
Hessian of fn. Not used by any of these methods, included for compatibility with maxNR.
start
initial values for the parameters. If start values are named, those names are also carried over to the results.
fixed
parameters to be treated as constants at their start values. If present, it is treated as an index vector of start parameters.
control
list of control parameters or a MaxControl object. If it is a list, the default values are used for any parameters left unspecified by the user; see MaxControl for the available options.
constraints
either NULL for unconstrained optimization or a list with two components. The components may be either eqA and eqB for equality-constrained optimization $A \theta + B = 0$, or ineqA and ineqB for inequality constraints $A \theta + B > 0$.
finalHessian
how (and if) to calculate the final Hessian. Either FALSE (do not calculate), TRUE (use analytic/numeric Hessian), or "bhhh"/"BHHH" for the information-equality approach. The latter is only suitable when fn returns a vector of observation-specific likelihood values.
parscale
A vector of scaling values for the parameters. Optimization is performed on par/parscale, and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value (see optim).
...
further arguments for fn and grad.
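Since these functions are thin wrappers around optim, the core maximization step can be sketched in base R alone. The fnscale trick below illustrates the mechanism (optim minimizes by default, so a negative fnscale turns it into a maximizer); it is not the package's actual source.

```r
## Base-R sketch of the wrapped call: maximize a concave function with BFGS.
f <- function(theta) -sum((theta - c(1, 2))^2)   # maximum 0 at theta = (1, 2)
res <- optim(par = c(0, 0), fn = f, method = "BFGS",
             control = list(fnscale = -1))       # fnscale < 0 => maximize
res$par     # close to c(1, 2)
res$value   # close to 0
```

The real wrappers add, on top of this, the fixed-parameter handling and the sumt-based constrained optimization.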

Value

Object of class "maxim" with the following components:
  • maximum: value of fn at the maximum.
  • estimate: best parameter vector found.
  • gradient: vector, gradient at the parameter value estimate.
  • gradientObs: matrix of gradients at the parameter value estimate, evaluated at each observation (only if grad returns a matrix, or grad is not specified and fn returns a vector).
  • hessian: value of the Hessian at the optimum.
  • code: integer success code; 0 is success (see optim).
  • message: character string giving any additional information returned by the optimizer, or NULL.
  • fixed: logical vector indicating which parameters are treated as constants.
  • iterations: two-element integer vector giving the number of calls to fn and gr, respectively. This excludes calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.
  • type: character string, e.g. "BFGS maximisation".
  • constraints: a list describing the constrained optimization (NULL if unconstrained); its components include the outcome of the outer (sumt) iterations, see sumt for details.
  • control: the optimization control parameters in the form of a MaxControl object.
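Two of these components map directly onto the underlying optim() result; the base-R illustration below shows the correspondence (maxBFGS itself returns a "maxim" object, not this raw list).

```r
## 'code' corresponds to optim's convergence flag, 'iterations' to its counts.
f <- function(theta) -sum(theta^2)               # concave, maximum at 0
res <- optim(c(1, 1), f, method = "BFGS",
             control = list(fnscale = -1))
res$convergence   # 0 means success
res$counts        # two-element vector: calls to fn and gr
```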

Details

In order to provide a consistent interface, all these functions also accept arguments that other optimizers use. For instance, maxNM accepts the grad argument despite being a gradient-less method. Although the SANN algorithm uses random numbers (with the seed set via the random.seed argument), maxSANN saves the state of R's random number generator on entry and restores it on exit, so calling it does not affect the caller's random number stream.
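The seed bookkeeping described above can be sketched in a few lines of base R (illustrative only, not the package source): save the generator state, let the stochastic optimizer consume random numbers, then restore the state so the caller's stream is undisturbed.

```r
set.seed(123)
saved <- .Random.seed            # state before the stochastic optimizer runs
ignored <- runif(5)              # stands in for SANN's internal draws
.Random.seed <- saved            # restore the state afterwards
after <- runif(2)                # the caller's stream continues undisturbed
set.seed(123)
same.stream <- identical(after, runif(2))  # TRUE: as if no draws had happened
```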

References

Nelder, J. A. & Mead, R. (1965) A Simplex Method for Function Minimization. The Computer Journal 7, 308-313.

See Also

optim, nlm, maxNR, maxBHHH, maxBFGSR for a maxNR-based BFGS implementation.

Examples

# Maximum Likelihood estimation of Poissonian distribution
n <- rpois(100, 3)
loglik <- function(l) n*log(l) - l - lfactorial(n)
# we use numeric gradient
summary(maxBFGS(loglik, start=1))
# you would probably prefer mean(n) instead of that ;-)
# Note also that maxLik is better suited for Maximum Likelihood
###
### Now an example of constrained optimization
###
f <- function(theta) {
  x <- theta[1]
  y <- theta[2]
  exp(-(x^2 + y^2))
  ## you may want to use exp(- theta %*% theta) instead
}
## use constraints: x + y >= 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- maxNM(f, start=c(1,1), constraints=list(ineqA=A, ineqB=B),
control=list(printLevel=1))
print(summary(res))
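The constraint list passed to maxNM above follows the convention ineqA %*% theta + ineqB >= 0, so with A = (1, 1) and B = -1 it encodes x + y >= 1. A quick base-R feasibility check:

```r
A <- matrix(c(1, 1), 1, 2)
B <- -1
feasible <- function(theta) all(A %*% theta + B >= 0)
feasible(c(1, 1))      # TRUE:  1 + 1 - 1 >= 0
feasible(c(0.2, 0.3))  # FALSE: 0.5 - 1 <  0
```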
