
laGP (version 1.1-2)

optim.auglag: Optimize a linear objective function under multiple blackbox constraints

Description

Uses a surrogate-modeled augmented Lagrangian (AL) system to optimize a known linear objective function under multiple unknown (blackbox) constraints via expected improvement (EI) and variations.

Usage

optim.auglag(fn, B, start = 10, end = 100, Xstart = NULL, 
    ab = c(3/2, 4), lambda = rep(1, ncol(B)), rho = 1/2, 
    urate = 10, ncandf = function(t) { t }, dg.start = c(0.1, 1e-06), 
    dlim = sqrt(ncol(B)) * c(1/100, 10), obj.norm = 1, 
    tol = list(ei = 1e-05, ey = 0.05, its = 10), nomax = FALSE, 
    N = 1000, plotprog = FALSE, verb = 2, ...)

Arguments

fn
function of a single, vector-valued input (x) returning a list with elements "obj" containing the (scalar) objective value and "c" containing a vector of evaluations of the (multiple) constraint functions at x
B
2-column matrix describing the bounding box. The number of rows of the matrix determines the input dimension (length(x) in fn(x)); the first column gives lower bounds and the second gives upper bounds
start
positive integer giving the number of random starting locations before sequential design (for optimization) is performed; start >= 6 is recommended unless Xstart is non-NULL; in the current version the starting locations are chosen via a space-filling design (see dopt.gp from the tgp package, loaded in the Examples below)
end
positive integer giving the total number of evaluations/trials in the optimization; must have end > start
Xstart
optional matrix of starting design locations in lieu of, or in addition to, start random ones; recommend nrow(Xstart) + start >= 6; also must have ncol(Xstart) = nrow(B)
ab
prior parameters; see darg describing the prior used on the lengthscale parameter during emulation(s) for the constraints
lambda
m-dimensional initial Lagrangian parameter for m-constraints
rho
positive scalar initial quadratic penalty parameter in the augmented Lagrangian
urate
positive integer indicating how many optimization trials should pass before each MLE/MAP update is performed for estimators for GP correlation lengthscale parameter(s)
ncandf
function taking a single integer indicating the optimization trial number t, where start < t <= end, and returning the number of search candidates (e.g., for expected improvement calculations) at round t; the default grows the number of candidates linearly with t
dg.start
2-vector giving starting values for the lengthscale and nugget parameters of the GP surrogate model(s) for constraints
dlim
2-vector giving bounds for the lengthscale parameter(s) under MLE/MAP inference, thereby augmenting the prior specification in ab
obj.norm
scalar indicating the relationship between the sum of the inputs sum(x) to fn and the output fn(x)$obj; note that at this time only linear objectives are supported by the code; more details below
tol
list containing entries "ei", "ey", and "its" together describing the search method and the criteria used to determine approximate convergence of the inner loop of the augmented Lagrangian search
nomax
one of c(0, 1, 2) indicating if the max should be removed from the augmented Lagrangian (AL): not at all (0), in the evaluation of EI or EY (1), or also in the update of lambda (2); see the Details section below
N
positive scalar integer indicating the number of Monte Carlo samples to be used for calculating EI and EY
plotprog
logical indicating if progress plots should be made after each inner iteration; the plots show three panels tracking the best valid objective, the EI or EY surface over the first two input variables (requires an interpolation function such as interp from the akima package, as in the Examples below), and the estimated GP lengthscale(s), respectively
verb
positive scalar integer indicating the verbosity level; the larger the value the more that is printed to the screen
...
additional arguments passed to fn
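
Several of these arguments accept functions or lists that can be tailored to the problem at hand. For instance, a call overriding the candidate schedule, the hyperparameter update rate, and the inner-loop tolerances might look as follows (a sketch only, not run here; fn and B stand for the objective/constraint function and bounding box described above):

optim.auglag(fn, B, start = 10, end = 50,
    ncandf = function(t) { 10 * t },              ## more candidates per round
    tol = list(ei = 1e-05, ey = 0.05, its = 10),  ## inner-loop convergence criteria
    urate = 5,                                    ## more frequent MLE/MAP updates
    verb = 0)                                     ## suppress progress printing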

Value

The output is a list summarizing the progress of the evaluations of the blackbox under optimization:

  • prog: vector giving the best valid (c(x) < 0) value of the objective over the trials
  • obj: vector giving the value of the objective for the input under consideration at each trial
  • X: matrix giving the input values at which the blackbox function was evaluated
  • C: matrix giving the value of the constraint function for the input under consideration at each trial (corresponding to X above)
  • d: matrix of lengthscale values obtained at each update of the GP estimator(s), i.e., every urate iterations
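
For example, the running best valid objective can be visualized directly from the returned list (a minimal sketch, assuming out holds the result of a completed optim.auglag run, as in the Examples below):

## trace of the best valid objective over the trials
plot(out$prog, type = "l", xlab = "trial", ylab = "best valid objective")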

Details

In its current form, this is an alpha code illustrating the suite of methods used to optimize two challenging constrained optimization problems from Gramacy, et al. (2014); see references below.

That scheme hybridizes Gaussian process based surrogate modeling and expected improvement (EI; Jones, et al., 1998) with the additive penalty method (APM) implemented by the augmented Lagrangian (AL, e.g., Kannan and Wild, 2012). The goal is to minimize a known linear objective function f(x) under a vector-valued, unknown (blackbox) constraint function satisfying c(x) <= 0. The solution here emulates the components of c with Gaussian process surrogates, and guides optimization by searching the posterior mean surface, or the EI, of the following composite objective $$Y(x) = f(x) + \lambda^\top Y_c(x) + \frac{1}{2\rho} \sum_{i=1}^m \max(0, Y_{c_i}(x))^2,$$ where $\lambda$ and $\rho$ follow updating equations that guarantee convergence to a valid solution minimizing the objective. For more details, see Gramacy, et al. (2014).
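
To make the composite concrete, the following sketch evaluates the AL for a scalar objective value fx and a vector Yc of constraint evaluations; the name al and its arguments are illustrative only, not part of the laGP API:

## AL composite: f(x) + lambda' Yc(x) + (1/(2*rho)) * sum_i max(0, Yc_i(x))^2
al <- function(fx, Yc, lambda, rho)
  fx + drop(t(lambda) %*% Yc) + sum(pmax(0, Yc)^2) / (2 * rho)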

The nomax argument indicates whether or not the max is present in the AL formula above. Setting nomax > 0 can lead to a more aggressive search near the boundary between feasible and infeasible regions. See Gramacy, et al. (2014) for more details.
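
In terms of the illustrative sketch above, removing the max amounts to applying the quadratic penalty to Yc directly (again hypothetical code, not the package internals):

## nomax variant: quadratic penalty without the max
al.nomax <- function(fx, Yc, lambda, rho)
  fx + drop(t(lambda) %*% Yc) + sum(Yc^2) / (2 * rho)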

The example below illustrates a variation on the toy problem considered in that paper, which bases sampling on EI.

References

Gramacy, R.B., Gray, G.A., Lee, H.K.H., Le Digabel, S., Ranjan, P., Wells, G., and Wild, S.M. (2014). Modeling an Augmented Lagrangian for Improved Blackbox Constrained Optimization. Preprint available on arXiv:1403.4890; http://arxiv.org/abs/1403.4890

Jones, D., Schonlau, M., and Welch, W. J. (1998). Efficient Global Optimization of Expensive Black Box Functions. Journal of Global Optimization, 13, 455-492.

Kannan, A. and Wild, S. (2012). Benefits of Deeper Analysis in Simulation-based Groundwater Optimization Problems. In Proceedings of the XIX International Conference on Computational Methods in Water Resources (CMWR 2012).

See Also

optim.step.tgp for unconstrained optimization; optim with method="L-BFGS-B" for box constraints, or optim with method="SANN" for simulated annealing

Examples

library(tgp) ## uses dopt.gp, will change later
library(akima) ## for plotprog=interp below

## a test function returning objective evaluations and constraints
aimprob <- function(x)
{
  f <- sum(x)
  c1 <- 1.5-x[1]-2*x[2]-0.5*sin(2*pi*(x[1]^2-2*x[2]))
  c2 <- x[1]^2+x[2]^2-1.5
  return(list(obj=f, c=c(c1,c2)))
}
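
## quick sanity check (illustrative): evaluate the test function at
## one input; a valid input has both constraint values <= 0
aimprob(c(0.2, 0.8))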

## set bounding rectangle for adaptive sampling
B <- matrix(c(rep(0,2),rep(1,2)),ncol=2)

## optimization (primarily) by EI, change 25 to 100 for
## 99% chance of finding the global optimum with value 0.6
out <- optim.auglag(aimprob, B, ab=c(3/2,8), end=25, plotprog=interp)
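
## inspect the result using the fields described in the Value section:
## the best valid objective found, and the input that achieved it
## (feasibility taken as c(x) <= 0, as in the Details)
min(out$prog)
valid <- apply(out$C <= 0, 1, all)
out$X[valid, , drop = FALSE][which.min(out$obj[valid]), ]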


## for comparison, here is a version that uses simulated annealing
## where the Additive Penalty Method (APM) is used to handle
## the constraints
aimprob.apm <- function(x, B=matrix(c(rep(0,2),rep(1,2)),ncol=2))
{ 
  ## check bounding box
  for(i in 1:length(x)) {
    if(x[i] < B[i,1] || x[i] > B[i,2]) return(Inf)
  }

  ## evaluate objective and constraints
  f <- sum(x)
  c1 <- 1.5-x[1]-2*x[2]-0.5*sin(2*pi*(x[1]^2-2*x[2]))
  c2 <- x[1]^2+x[2]^2-1.5

  ## return APM composite
  return(f + abs(c1) + abs(c2))
}

## use SA; specify control=list(maxit=100), say, to control max 
## number of iterations; does not easily facilitate plotting progress
out <- optim(runif(2), aimprob.apm, method="SANN") 
## check the final value, which typically does not satisfy both
## constraints
aimprob(out$par)
