
maxLik (version 1.3-2)

sumt: Equality-constrained optimization

Description

Sequential Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.

This implementation is primarily intended to be called from other maximization routines, such as maxNR.

Usage

sumt(fn, grad=NULL, hess=NULL,
start,
maxRoutine, constraints,
SUMTTol = sqrt(.Machine$double.eps),
SUMTPenaltyTol = sqrt(.Machine$double.eps),
SUMTQ = 10,
SUMTRho0 = NULL,
printLevel=print.level, print.level = 0, SUMTMaxIter = 100, ...)

Arguments

fn
function of a (single) vector parameter. The function may have more arguments (passed by ...), but those are not treated as the parameter.
grad
gradient function of fn. NULL if missing
hess
function, Hessian of fn. NULL if missing
start
numeric, initial value of the parameter
maxRoutine
maximization algorithm, such as maxNR
constraints
list, information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints of the form $A \beta + B = 0$. The user must ensure that the matrices A and B have dimensions conforming with the parameter vector.
SUMTTol
stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum of the absolute value of the component-wise difference is smaller than SUMTTol, the algorithm stops.

Note that this does not necessarily mean that the constraints are satisfied; see SUMTPenaltyTol below.

SUMTPenaltyTol
stopping condition. If the barrier value (also called the penalty) $(A \beta + B)'(A \beta + B)$ is less than SUMTPenaltyTol, the algorithm stops.
SUMTQ
a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.
SUMTRho0
Initial value for rho. If not specified, a (possibly) suitable value is selected. See Details.

One should consider supplying SUMTRho0 in cases where the unconstrained problem does not have a maximum, or where the maximum is too far from the constrained optimum.

printLevel
Integer, debugging information. Larger number prints more details.
print.level
same as printLevel, for backward compatibility
SUMTMaxIter
Maximum SUMT iterations
...
Other arguments to maxRoutine and fn.

Value

  • Object of class 'maxim'. In addition, a component constraints is included: a list describing the constrained optimization, containing the type of constrained optimization, the barrier value at the optimum, a convergence code and message, and the number of outer iterations.

Note

In case of equality constraints, it may be more efficient to enclose the function in a wrapper function. The wrapper computes the full set of parameters from a smaller set of free parameters and the constraints (see the sketch below).
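For the constraint x + y = 1 used in the Examples below, a minimal sketch of this wrapper idea (the name hatfFree is illustrative, not part of the sumt interface): optimize over the single free parameter x, recover y from the constraint, and maximize with a plain unconstrained call to maxNR.

library(maxLik)
hatfFree <- function(x) {
   theta <- c(x, 1 - x)     # full parameter vector implied by x + y = 1
   exp(-sum(theta^2))       # same objective as hatf() in the Examples
}
resFree <- maxNR(hatfFree, start=0)
summary(resFree)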

Details

The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function $f$ subject to constraints, it uses a non-negative penalty function $P$, such that $P(x)$ is zero iff $x$ satisfies the constraints. One iteratively minimizes $f(x) + \varrho_k P(x)$, where the $\varrho$ values are increased according to the rule $\varrho_{k+1} = q \varrho_k$ for some constant $q > 1$, until convergence is achieved in the sense that the barrier value $P(x)'P(x)$ is close to zero. For maximization, as done here, the same scheme applies with the penalty subtracted, i.e. $f(x) - \varrho_k P(x)$ is maximized. Note that there is no guarantee that the global constrained optimum is found. Standard practice recommends using the best solution found in sufficiently many replications.
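As a rough illustration of this scheme (not the maxLik implementation), the following sketch runs the outer loop for a linear equality constraint $A \beta + B = 0$, using stats::optim() for the unconstrained steps. The function sumtSketch() and all of its arguments are hypothetical names.

sumtSketch <- function(fn, start, A, B,
                       rho0 = 1, q = 10,
                       tol = sqrt(.Machine$double.eps), maxIter = 100) {
   penalty <- function(beta) {
      v <- A %*% beta + B
      sum(v * v)                        # (A beta + B)'(A beta + B)
   }
   beta <- start
   rho <- rho0
   for (k in seq_len(maxIter)) {
      ## unconstrained step: maximize fn - rho * penalty
      ## (optim() minimizes, hence the sign flip)
      res <- optim(beta, function(b) -(fn(b) - rho * penalty(b)))
      betaNew <- res$par
      if (max(abs(betaNew - beta)) < tol || penalty(betaNew) < tol) break
      beta <- betaNew
      rho <- q * rho                    # rho_{k+1} = q * rho_k
   }
   list(estimate = betaNew, barrier.value = penalty(betaNew),
        outer.iterations = k)
}
## e.g. for the example used below:
## sumtSketch(function(th) exp(-sum(th^2)), c(0, 0), A = matrix(c(1, 1), 1, 2), B = -1)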

Any of the maximization algorithms in the maxLik package, such as maxNR, can be used for the unconstrained step.

Analytic gradient and hessian are used if provided.

See Also

sumt in package clue.

Examples

library(maxLik)   # provides sumt() and maxNR()
## We maximize exp(-x^2 - y^2) subject to x + y = 1
hatf <- function(theta) {
   x <- theta[1]
   y <- theta[2]
   exp(-(x^2 + y^2))
   ## Note: you may prefer exp(- theta %*% theta) instead
}
## use constraints: x + y = 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, start=c(0,0), maxRoutine=maxNR,
            constraints=list(eqA=A, eqB=B))
print(summary(res))
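
The returned object can be inspected like any other 'maxim' result. A quick check of the constraint at the solution (a small sketch, assuming the constraints component and its barrier value described under Value are attached):

sum(res$estimate)                # x + y, should be close to 1
res$constraints$barrier.value    # (A b + B)'(A b + B), should be near zero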
