Description

These functions are wrappers for optim, where the arguments are
compatible with maxNR. Note that there is a maxNR-based BFGS
implementation, maxBFGSR.
Usage

maxBFGS(fn, grad = NULL, hess = NULL, start, fixed = NULL,
        print.level = 0, iterlim = 200, constraints = NULL,
        tol = 1e-08, reltol = tol,
        finalHessian = TRUE,
        parscale = rep(1, length = length(start)), ...)

maxSANN(fn, grad = NULL, hess = NULL, start, fixed = NULL,
        print.level = 0, iterlim = 10000, constraints = NULL,
        tol = 1e-08, reltol = tol,
        finalHessian = TRUE,
        cand = NULL, temp = 10, tmax = 10,
        parscale = rep(1, length = length(start)),
        random.seed = 123, ...)

maxNM(fn, grad = NULL, hess = NULL, start, fixed = NULL,
      print.level = 0, iterlim = 500, constraints = NULL,
      tol = 1e-08, reltol = tol,
      finalHessian = TRUE,
      parscale = rep(1, length = length(start)),
      alpha = 1, beta = 0.5, gamma = 2, ...)
Arguments

fn: function to be maximised. Must have the parameter vector as the
  first argument. In order to use a numeric gradient and the BHHH
  method, fn must return a vector of observation-specific likelihood
  values; those are summed internally if necessary. (A vector-valued
  fn with an observation-wise gradient is sketched after this list.)

grad: gradient of the function. Must have the parameter vector as the
  first argument. If NULL, a numeric gradient is used (only maxBFGS
  uses the gradient). The gradient may return a matrix, where columns
  correspond to the parameters and rows to the observations; the
  columns are summed internally.

hess: Hessian of the function. Not used by these methods; included
  for compatibility with maxNR.

start: initial values for the parameters.

fixed: parameters to be treated as constants at their start values.
  May be a logical vector of the same length as start, a numeric
  (index) vector indicating the positions of the fixed parameters, or
  a vector of character strings indicating the names of the fixed
  parameters (names are taken from the start vector).

print.level: a larger number prints more working information.

iterlim: maximum number of iterations.

constraints: either NULL for unconstrained optimization, or a list
  with two components. The components may be either eqA and eqB for
  equality-constrained optimization $A \theta + B = 0$, or ineqA and
  ineqB for inequality constraints $A \theta + B > 0$.

tol, reltol: absolute and relative convergence tolerances (see optim).

finalHessian: how (and whether) to calculate the final Hessian.
  Either FALSE (do not calculate), TRUE (use analytic/numeric
  Hessian), or "bhhh"/"BHHH" for the information-equality approach.
  The latter approach is only suitable for maximizing a
  log-likelihood, and requires observation-specific gradient or
  likelihood values.

cand: a function used in the "SANN" algorithm to generate a new
  candidate point; if it is NULL, a default Gaussian Markov kernel is
  used (see argument gr of optim).

temp, tmax: starting temperature and number of function evaluations
  at each temperature for the "SANN" method (see optim).

parscale: a vector of scaling values for the parameters; optimization
  is performed on par/parscale (see optim).

alpha, beta, gamma: reflection, contraction, and expansion
  coefficients of the Nelder-Mead simplex method (see optim).

random.seed: an integer used to seed R's random number generator
  inside maxSANN (see Details).

...: further arguments for fn and grad.
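As a short illustration of the fn and grad conventions above (a
minimal sketch, not part of the original example set; it assumes the
maxLik package is attached), fn returns one log-likelihood value per
observation and grad returns an observation-by-parameter matrix,
which also enables the "BHHH" final Hessian:

library(maxLik)
set.seed(1)
x <- rpois(100, 3)

## log-likelihood returning one value per observation
loglikVec <- function(lambda) x*log(lambda) - lambda - lfactorial(x)
## analytic gradient: 100 rows (observations) x 1 column (parameter)
gradMat <- function(lambda) cbind(x/lambda - 1)

## the observation-wise gradient makes finalHessian = "BHHH" available
summary(maxBFGS(loglikVec, grad = gradMat, start = 1,
                finalHessian = "BHHH"))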
Value

Object of class "maxim". Data can be extracted through the following
components (a short access sketch follows this list):

maximum: value of fn at the maximum.

estimate: estimated parameter value.

gradient: gradient at the parameter value estimate.

gradientObs: matrix of gradients at the parameter value estimate,
  evaluated at each observation (only if grad returns a matrix, or if
  grad is not specified and fn returns a vector).

hessian: Hessian at the maximum.

code: integer success code; 0 indicates success (see optim).

message: a character string giving any additional information
  returned by the optimizer, or NULL.

iterations: two-element integer vector giving the number of calls to
  fn and gr, respectively. This excludes those calls needed to
  compute the Hessian, if requested, and any calls to fn to compute a
  finite-difference approximation to the gradient.

constraints: a list describing the constrained optimization (NULL
  if unconstrained). Includes the following components: type,
  barrier.value, code, message, and outer.iterations.
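Since the returned object is a list, these components can be read
directly. A minimal sketch (assuming maxLik is attached; the
log-lambda parameterization is only there to keep the Poisson rate
positive):

library(maxLik)
set.seed(2)
x <- rpois(100, 3)
## maximise over p = log(lambda) so that the rate stays positive
res <- maxBFGS(function(p) sum(dpois(x, exp(p), log = TRUE)), start = 0)

res$maximum        # log-likelihood at the optimum
exp(res$estimate)  # lambda-hat, close to mean(x)
res$code           # 0 indicates successful convergence
res$iterations     # calls to fn and gr, respectively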
Details

The "SANN" algorithm is based on random numbers. The seed (state) of
R's random number generator is saved at the beginning of the maxSANN
function and restored at the end, so that maxSANN does not affect the
generation of random numbers elsewhere in the session, although
within maxSANN the random seed is set to the argument random.seed
(and thus the results of maxSANN depend on random.seed).
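The following sketch (assuming maxLik is attached) illustrates both
halves of that behaviour: the "outside" random number stream is
unaffected by a maxSANN call, while maxSANN itself is reproducible
via random.seed:

library(maxLik)
f <- function(theta) -sum((theta - c(1, 2))^2)   # maximum at (1, 2)

set.seed(7)
before <- runif(1)
r1 <- maxSANN(f, start = c(0, 0), random.seed = 123)
after <- runif(1)

set.seed(7)
## TRUE: the RNG state was restored, the stream is untouched
identical(c(before, after), c(runif(1), runif(1)))

r2 <- maxSANN(f, start = c(0, 0), random.seed = 123)
## TRUE: same random.seed, same stochastic search path
identical(r1$estimate, r2$estimate)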
See Also

optim, nlm, maxNR, maxBHHH, maxBFGSR
Examples

# Maximum Likelihood estimation of the parameter of the Poisson distribution
n <- rpois(100, 3)
loglik <- function(l) n*log(l) - l - lfactorial(n)
# we use numeric gradient
summary(maxBFGS(loglik, start=1))
# you would probably prefer mean(n) instead of that ;-)
# Note also that maxLik is better suited for Maximum Likelihood
###
### Now an example of constrained optimization
###
f <- function(theta) {
   x <- theta[1]
   y <- theta[2]
   exp(-(x^2 + y^2))
   ## Note: you may want to use exp(- theta %*% theta) instead ;-)
}
## use constraints: x + y >= 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- maxNM(f, start=c(1,1), constraints=list(ineqA=A, ineqB=B),
             print.level=1)
print(summary(res))
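For completeness, a hedged sketch of the equality-constrained
interface described under 'constraints' (reusing f, A, and B from
above; eqA/eqB encode $A \theta + B = 0$, here x + y = 1):

## equality constraint x + y = 1 via the eqA/eqB interface
resEq <- maxNM(f, start = c(0.5, 0.5),
               constraints = list(eqA = A, eqB = B))
print(summary(resEq))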