
maxLik (version 0.8-0)

sumt: Equality-constrained optimization

Description

Sequential Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.

This implementation is mostly intended to be called from other maximization routines, such as maxNR.
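
For orientation, here is a minimal sketch of that typical entry point, using an illustrative toy objective and constraint (the names f, A, B and the specific numbers are assumptions for the example, not part of the package API):

library(maxLik)
## toy concave objective, unconstrained maximum at (1, 2)
f <- function(beta) -(beta[1] - 1)^2 - (beta[2] - 2)^2
## linear equality constraint beta1 + beta2 - 1 = 0, in the A beta + B = 0 form
A <- matrix(c(1, 1), nrow = 1)
B <- -1
## maxNR() hands the equality constraints over to sumt() internally
res <- maxNR(f, start = c(0, 0), constraints = list(eqA = A, eqB = B))
res$estimate   # approximately c(0, 1)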

Usage

sumt(fn, grad=NULL, hess=NULL,
     start,
     maxRoutine, constraints,
     SUMTTol = sqrt(.Machine$double.eps),
     SUMTPenaltyTol = sqrt(.Machine$double.eps),
     SUMTQ = 10,
     SUMTRho0 = NULL,
     print.level = 0, SUMTMaxIter = 100, ...)

Arguments

fn
function of a (single) vector parameter. The function may have more arguments, but those are not treated as parameters.
grad
gradient function of fn. NULL if missing
hess
Hessian function of fn. NULL if missing
start
initial value of the parameter.
maxRoutine
maximization algorithm
constraints
list, information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints $A \beta + B = 0$. The user must ensure that the matrices A and B are conformable (a small example is sketched after this argument list).
SUMTTol
stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum absolute difference over the components is smaller than SUMTTol, the algorithm stops.

Note that this does not necessarily mean that the constraints are satisfied.

SUMTPenaltyTol
stopping condition. If the barrier value (also called the penalty) $(A \beta + B)'(A \beta + B)$ is less than SUMTPenaltyTol, the algorithm stops.
SUMTQ
a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.
SUMTRho0
Initial value for rho. If not specified, a (possibly) suitable value is selected. See Details.
print.level
Integer, debugging information. Larger values print more details.
SUMTMaxIter
Maximum SUMT iterations
...
Other arguments to maxRoutine and fn.
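
As a concrete illustration of the eqA/eqB convention (see the constraints argument above), the restrictions $\beta_1 + \beta_2 = 1$ and $\beta_2 = \beta_3$ on a three-element parameter vector could be encoded as follows (the object names are illustrative only):

A <- matrix(c(1, 1,  0,
              0, 1, -1), nrow = 2, byrow = TRUE)
B <- c(-1, 0)
constraints <- list(eqA = A, eqB = B)
## A %*% beta + B is zero exactly when beta1 + beta2 = 1 and beta2 = beta3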

Value

Object of class 'maxim'. In addition, a component constraints is added: a list describing the constrained optimization, with the following components:

  • type: type of constrained optimization
  • barrier.value: value of the penalty function at the maximum
  • code: code for the stopping condition
  • message: a short message describing the stopping condition
  • outer.iterations: number of iterations in the SUMT step

Note

It may be considerably more efficient to wrap the function to be optimized in an outer function that computes the full parameter vector from a smaller set of free parameters and the constraints, as sketched below.
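
A hedged sketch of that idea for a single constraint $\beta_1 + \beta_2 = 1$: maximize over the free parameter $\beta_1$ only and recover $\beta_2$ inside the wrapper, so no constrained optimizer is needed (the objective and names are illustrative):

library(maxLik)
fnFull <- function(beta) -(beta[1] - 1)^2 - (beta[2] - 2)^2
fnFree <- function(free) {
  beta <- c(free, 1 - free)   # impose beta1 + beta2 = 1
  fnFull(beta)
}
res <- maxNR(fnFree, start = 0)
c(res$estimate, 1 - res$estimate)   # full parameter vector, approximately c(0, 1)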

Details

The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function $f$ subject to constraints, one employs a non-negative function $P$ penalizing violations of the constraints, such that $P(x)$ is zero iff $x$ satisfies the constraints. One iteratively minimizes $f(x) + \varrho_k P(x)$, where the $\varrho$ values are increased according to the rule $\varrho_{k+1} = q \varrho_k$ for some constant $q > 1$, until convergence is obtained in the sense that the barrier value $P(x)$ is close to zero. Note that there is no guarantee that (approximately) constrained global optima are found. Standard practice recommends using the best solution found in "sufficiently many" replications of the algorithm.
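
To make the iteration concrete, here is a small stand-alone sketch of the idea on a toy problem, using base R's optim(); this is illustrative only and not the package's internal code:

## minimize f(x) subject to x1 + x2 = 1 via an increasing penalty
f   <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2
P   <- function(x) (sum(x) - 1)^2        # penalty: zero iff the constraint holds
rho <- 1                                 # initial penalty weight (cf. SUMTRho0)
q   <- 10                                # growth factor (cf. SUMTQ)
x   <- c(0, 0)
for (k in 1:100) {                       # outer iterations (cf. SUMTMaxIter)
  x <- optim(x, function(z) f(z) + rho * P(z))$par
  if (P(x) < 1e-6) break                 # stop once the penalty is (nearly) zero
  rho <- q * rho
}
x                                        # approximately c(0, 1), the constrained minimum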

The penalized unconstrained maximizations are carried out by any of the maximization algorithms in maxLik, such as maxNR. Analytic gradient and Hessian are used if provided; numeric ones otherwise.
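
A hedged sketch of a direct call, supplying an analytic gradient and maxNR as the inner maximizer (in typical use the calling routine does this for you; the objective and constraint below are illustrative):

library(maxLik)
f  <- function(beta) -(beta[1] - 1)^2 - (beta[2] - 2)^2
gr <- function(beta) c(-2 * (beta[1] - 1), -2 * (beta[2] - 2))  # analytic gradient
A  <- matrix(c(1, 1), nrow = 1)
B  <- -1
res <- sumt(fn = f, grad = gr, start = c(0, 0),
            maxRoutine = maxNR,
            constraints = list(eqA = A, eqB = B))
res$estimate                     # approximately c(0, 1)
res$constraints$barrier.value    # should be close to zero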

See Also

sumt