nloptr (version 1.0.0)

mma: Method of Moving Asymptotes

Description

Globally-convergent method-of-moving-asymptotes (MMA) algorithm for gradient-based local optimization, including nonlinear inequality constraints (but not equality constraints).

Usage

mma(x0, fn, gr = NULL, lower = NULL, upper = NULL,
        hin = NULL, hinjac = NULL,
        nl.info = FALSE, control = list(), ...)

Arguments

x0
starting point for searching the optimum.
fn
objective function that is to be minimized.
gr
gradient of function fn; will be calculated numerically if not specified.
lower, upper
lower and upper bound constraints.
hin
function defining the inequality constraints, that is hin >= 0 for all components (see the sketch after this argument list).
hinjac
Jacobian of function hin; will be calculated numerically if not specified.
nl.info
logical; shall the original NLopt info be shown.
control
list of options, see nl.opts for help.
...
additional arguments passed to the function.
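
A minimal sketch of how hin and control fit together, using a hypothetical two-variable problem (the objective, constraint, and tolerance below are illustrative only and assume the hin >= 0 convention stated above):

library(nloptr)
fn  <- function(x) (x[1] - 2)^2 + (x[2] - 1)^2   # unconstrained minimum at (2, 1)
hin <- function(x) 2 - x[1] - x[2]               # feasible iff x[1] + x[2] <= 2
res <- mma(x0 = c(0, 0), fn = fn, hin = hin,
           control = list(xtol_rel = 1e-8))
res$par   # approximately c(1.5, 0.5), the projection of (2, 1) onto the constraint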

Value

List with components:

  • par: the optimal solution found so far.
  • value: the function value corresponding to par.
  • iter: number of (outer) iterations, see maxeval.
  • convergence: integer code indicating successful completion (> 1) or a possible error number (< 0).
  • message: character string produced by NLopt and giving additional information.
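
A short sketch of reading these components back from a call (the unconstrained quadratic below is illustrative only, not part of the package):

S <- mma(c(0, 0), function(x) sum((x - 1)^2),
         control = list(xtol_rel = 1e-8))
S$par           # best parameters found, close to c(1, 1)
S$value         # objective value at S$par
S$iter          # number of outer iterations used
S$convergence   # status code as described above
S$message       # NLopt's textual status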

Details

This is an improved CCSA ("conservative convex separable approximation") variant of the original MMA algorithm published by Svanberg in 1987, which has become popular for topology optimization. Note: "globally convergent" does not mean that this algorithm converges to the global optimum; it means that it is guaranteed to converge to some local minimum from any feasible starting point.

References

Krister Svanberg, "A class of globally convergent optimization methods based on conservative convex separable approximations", SIAM J. Optim. 12(2), pp. 555-573 (2002).

See Also

slsqp

Examples

##  Solve the Hock-Schittkowski problem no. 100 with analytic gradients
x0.hs100 <- c(1, 2, 0, 4, 0, 1, 1)
fn.hs100 <- function(x) {
    (x[1]-10)^2 + 5*(x[2]-12)^2 + x[3]^4 + 3*(x[4]-11)^2 + 10*x[5]^6 +
                  7*x[6]^2 + x[7]^4 - 4*x[6]*x[7] - 10*x[6] - 8*x[7]
}
hin.hs100 <- function(x) {
    h <- numeric(4)
    h[1] <- 127 - 2*x[1]^2 - 3*x[2]^4 - x[3] - 4*x[4]^2 - 5*x[5]
    h[2] <- 282 - 7*x[1] - 3*x[2] - 10*x[3]^2 - x[4] + x[5]
    h[3] <- 196 - 23*x[1] - x[2]^2 - 6*x[6]^2 + 8*x[7]
    h[4] <- -4*x[1]^2 - x[2]^2 + 3*x[1]*x[2] - 2*x[3]^2 - 5*x[6] + 11*x[7]
    return(h)
}
gr.hs100 <- function(x) {
   c(  2 * x[1] -  20,
      10 * x[2] - 120,
       4 * x[3]^3,
       6 * x[4] - 66,
      60 * x[5]^5,
      14 * x[6]   - 4 * x[7] - 10,
       4 * x[7]^3 - 4 * x[6] -  8 )}
hinjac.hs100 <- function(x) {
    matrix(c(4*x[1], 12*x[2]^3, 1, 8*x[4], 5, 0, 0,
        7, 3, 20*x[3], 1, -1, 0, 0,
        23, 2*x[2], 0, 0, 0, 12*x[6], -8,
        8*x[1]-3*x[2], 2*x[2]-3*x[1], 4*x[3], 0, 0, 5, -11), 4, 7, byrow=TRUE)
}

# incorrect result with the analytic Jacobian above: hinjac.hs100 returns the
# derivatives of -hin (all signs flipped), which misleads the solver
S <- mma(x0.hs100, fn.hs100, gr = gr.hs100,
            hin = hin.hs100, hinjac = hinjac.hs100,
            nl.info = TRUE, control = list(xtol_rel = 1e-8))

# correct result with the inexact (numerically approximated) Jacobian
S <- mma(x0.hs100, fn.hs100, hin = hin.hs100,
            nl.info = TRUE, control = list(xtol_rel = 1e-8))
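
## The optimum of Hock-Schittkowski problem no. 100 is known to be
## approximately 680.630; inspecting the second run (a check added here,
## not part of the original example):
S$par     # solution found with the numerical Jacobian
S$value   # should be close to the known optimum, approximately 680.630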
