nloptr (version 1.2.1)

stogo: Stochastic Global Optimization

Description

StoGO is a global optimization algorithm that works by systematically dividing the search space into smaller hyper-rectangles.

Usage

stogo(x0, fn, gr = NULL, lower = NULL, upper = NULL, maxeval = 10000,
  xtol_rel = 1e-06, randomized = FALSE, nl.info = FALSE, ...)

Arguments

x0

initial point for searching the optimum.

fn

objective function that is to be minimized.

gr

optional gradient of the objective function.

lower, upper

lower and upper bound constraints.

maxeval

maximum number of function evaluations.

xtol_rel

stopping criterion for relative change reached.

randomized

logical; should the randomized variant of the algorithm be used?

nl.info

logical; should the original NLopt info be shown?

...

additional arguments passed on to the objective function fn (see the sketch below).
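
A minimal sketch (not part of the original help page) of how extra parameters reach the objective through ...; the quadratic objective fq and its target argument are invented for illustration:

fq <- function(x, target) sum((x - target)^2)  # hypothetical parameterized objective
stogo(x0 = c(0, 0), fn = fq,
      lower = c(-5, -5), upper = c(5, 5),
      target = c(1, 2))  # 'target' is forwarded to fq through ...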

Value

List with components (illustrated after this list):

par

the optimal solution found so far.

value

the function value corresponding to par.

iter

number of (outer) iterations, see maxeval.

convergence

integer code indicating successful completion (> 0) or a possible error number (< 0).

message

character string produced by NLopt and giving additional information.
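
As a quick sketch (reusing the Rosenbrock setup from the Examples section below), the components of the returned list can be inspected like this:

fn <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
sol <- stogo(x0 = c(-1.2, 1), fn = fn, lower = c(-3, -3), upper = c(3, 3))
sol$par          # best parameter vector found
sol$value        # objective value at sol$par
sol$convergence  # > 0 indicates success, < 0 an error code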

Details

StoGO is a global optimization algorithm that works by systematically dividing the search space (which must be bound-constrained) into smaller hyper-rectangles via a branch-and-bound technique, and searching them by a gradient-based local-search algorithm (a BFGS variant), optionally including some randomness.
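
As a sketch (not from the original help page), an analytic gradient for the Rosenbrock function can be supplied through gr, and randomized = TRUE selects the randomized variant of the algorithm; if gr is omitted, a numerical gradient is used instead:

fn <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
gr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),  # d f / d x1
                     200 * (x[2] - x[1]^2))                          # d f / d x2
stogo(x0 = c(-1.2, 1), fn = fn, gr = gr,
      lower = c(-3, -3), upper = c(3, 3), randomized = TRUE)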

References

S. Zertchaninov and K. Madsen, "A C++ Programme for Global Optimization," IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark.

Examples

### Rosenbrock Banana objective function
fn <- function(x)
    return( 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2 )

x0 <- c( -1.2, 1 )
lb <- c( -3, -3 )
ub <- c(  3,  3 )

stogo(x0 = x0, fn = fn, lower = lb, upper = ub)
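# The global minimum of the Rosenbrock function is at (1, 1) with value 0.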

