clue (version 0.3-56)

sumt: Sequential Unconstrained Minimization Technique

Description

Solve constrained optimization problems via the Sequential Unconstrained Minimization Technique (SUMT).

Usage

sumt(x0, L, P, grad_L = NULL, grad_P = NULL, method = NULL,
     eps = NULL, q = NULL, verbose = NULL, control = list())

Arguments

x0

a list of starting values, or a single starting value.

L

a function to minimize.

P

a non-negative penalty function such that \(P(x)\) is zero iff the constraints are satisfied.

grad_L

a function giving the gradient of L, or NULL (default).

grad_P

a function giving the gradient of P, or NULL (default).

method

a character string, or NULL (default). If NULL, "CG" is used. If equal to "nlm", minimization is carried out using nlm. Otherwise, optim is used with the given method.

eps

the absolute convergence tolerance. The algorithm stops if the (maximum) distance between successive x values is less than eps.

Defaults to sqrt(.Machine$double.eps).

q

a double greater than one controlling the growth of the \(\rho_k\) as described in Details.

Defaults to 10.

verbose

a logical indicating whether to provide some output on minimization progress.

Defaults to getOption("verbose").

control

a list of control parameters to be passed to the minimization routine in case optim is used.

Value

A list inheriting from class "sumt", with the following components:

x

the solution obtained.

L, P

the values of the criterion and penalty function at x.

rho

the final \(\rho\) value used in the augmented criterion function.

Details

The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function \(L\) subject to constraints, one employs a non-negative penalty function \(P\) such that \(P(x)\) is zero iff \(x\) satisfies the constraints. One iteratively minimizes the augmented criterion \(L(x) + \rho_k P(x)\), where the \(\rho_k\) values are increased according to the rule \(\rho_{k+1} = q \rho_k\) for some constant \(q > 1\), until convergence is obtained in the sense that the Euclidean distance between successive solutions \(x_k\) and \(x_{k+1}\) is small enough.

Note that the “solution” \(x\) obtained does not necessarily satisfy the constraints, i.e., need not have \(P(x) = 0\). Note also that there is no guarantee that a global (approximately) constrained optimum is found. Standard practice is therefore to keep the best solution found over “sufficiently many” replications of the algorithm.
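The iteration described above can be sketched in a few lines of base R. The quadratic toy problem, the sumt_sketch name, and the chosen tolerances are illustrative only, not part of clue:

```r
## Toy problem: minimize L(x) = sum(x^2) subject to sum(x) = 1,
## using the penalty P(x) = (sum(x) - 1)^2 (zero iff the constraint holds).
L <- function(x) sum(x^2)
P <- function(x) (sum(x) - 1)^2

## Minimal SUMT loop (a sketch, not clue's implementation).
sumt_sketch <- function(x0, L, P, q = 10, eps = 1e-6, max_iter = 50) {
  rho <- 1
  x <- x0
  for (k in seq_len(max_iter)) {
    ## Unconstrained minimization of the augmented criterion.
    x_new <- optim(x, function(par) L(par) + rho * P(par), method = "CG")$par
    if (sqrt(sum((x_new - x)^2)) < eps)   # successive solutions close enough
      return(list(x = x_new, rho = rho))
    x <- x_new                            # warm start for the next rho
    rho <- q * rho                        # rho_{k+1} = q * rho_k
  }
  list(x = x_new, rho = rho)
}

res <- sumt_sketch(c(0, 1), L, P)
res$x  # approximately c(0.5, 0.5)
```

As \(\rho\) grows, the minimizer of the augmented criterion (here \(x_i = \rho / (1 + 2\rho)\) in each coordinate) approaches the constrained optimum \((0.5, 0.5)\), which is why driving \(\rho\) upward geometrically eventually satisfies the stopping rule.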

The unconstrained minimizations are carried out by either optim or nlm, using analytic gradients if both grad_L and grad_P are given, and numeric ones otherwise.

If more than one starting value is given, the solution with the minimal augmented criterion function value is returned.
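For instance, assuming the clue package is installed, the toy problem of minimizing \(\sum_i x_i^2\) subject to \(\sum_i x_i = 1\) can be solved from several starting values with analytic gradients supplied (the penalty, gradients, and starting values below are illustrative choices):

```r
library(clue)

## Toy problem: minimize L(x) = sum(x^2) subject to sum(x) = 1.
L <- function(x) sum(x^2)
P <- function(x) (sum(x) - 1)^2          # zero iff the constraint holds
grad_L <- function(x) 2 * x
grad_P <- function(x) rep(2 * (sum(x) - 1), length(x))

## Two starting values: the solution with the smaller augmented
## criterion value is returned.
s <- sumt(list(c(0, 1), c(2, -1)), L, P,
          grad_L = grad_L, grad_P = grad_P, eps = 1e-7)
s$x    # approximately c(0.5, 0.5)
s$rho  # final rho value
```

Because both grad_L and grad_P are given here, the inner minimizations use analytic gradients; omitting either falls back to numeric differentiation.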

References

A. V. Fiacco and G. P. McCormick (1968). Nonlinear programming: Sequential unconstrained minimization techniques. New York: John Wiley & Sons.