nloptr is an R interface to NLopt, a free/open-source library for nonlinear optimization started by Steven G. Johnson, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms. The NLopt library is available under the GNU Lesser General Public License (LGPL), and the copyrights are owned by a variety of authors. Most of the information here has been taken from the NLopt website, where more details are available.
nloptr(
x0,
eval_f,
eval_grad_f = NULL,
lb = NULL,
ub = NULL,
eval_g_ineq = NULL,
eval_jac_g_ineq = NULL,
eval_g_eq = NULL,
eval_jac_g_eq = NULL,
opts = list(),
...
)
The return value is a list containing the inputs plus the following additional elements:

call: the call that was made to solve
status: integer value with the status of the optimization (0 is success)
message: more informative message with the status of the optimization
iterations: number of iterations that were executed
objective: value of the objective function at the solution
solution: optimal value of the controls
version: version of NLopt that was used
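For example, the individual elements can be inspected directly on the returned object (a minimal sketch; res is assumed to hold the result of one of the nloptr() calls shown in the examples below):

# sketch: inspect elements of a solved nloptr object `res`
res$solution    # optimal value of the controls
res$objective   # value of the objective function at the solution
res$status      # integer status code of the optimization
res$iterations  # number of iterations that were executed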
The arguments are as follows:

x0: vector with starting values for the optimization.
eval_f: function that returns the value of the objective function. It can also return gradient information at the same time in a list with elements "objective" and "gradient" (see below for an example).
eval_grad_f: function that returns the value of the gradient of the objective function. Not all of the algorithms require a gradient.
lb: vector with lower bounds of the controls (use -Inf for controls without lower bound); by default there are no lower bounds for any of the controls.
ub: vector with upper bounds of the controls (use Inf for controls without upper bound); by default there are no upper bounds for any of the controls.
eval_g_ineq: function to evaluate (non-)linear inequality constraints that should hold in the solution. It can also return gradient information at the same time in a list with elements "constraints" and "jacobian" (see below for an example).
eval_jac_g_ineq: function to evaluate the Jacobian of the (non-)linear inequality constraints that should hold in the solution.
eval_g_eq: function to evaluate (non-)linear equality constraints that should hold in the solution. It can also return gradient information at the same time in a list with elements "constraints" and "jacobian" (see below for an example).
eval_jac_g_eq: function to evaluate the Jacobian of the (non-)linear equality constraints that should hold in the solution.
opts: list with options. The option "algorithm" is required. Check the NLopt website for a full list of available algorithms. Other options control the termination conditions (minf_max, ftol_rel, ftol_abs, xtol_rel, xtol_abs, maxeval, maxtime). The default is xtol_rel = 1e-4. More information on these options is available in the NLopt documentation. A full description of all options is shown by the function nloptr.print.options().
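For example, a minimal options list combining the required algorithm with two of the termination conditions above might look like this (a sketch; any other algorithm name from the NLopt list works the same way):

# sketch: algorithm plus termination conditions; stop when the relative
# change in the controls drops below 1e-8 or after 1000 evaluations
opts <- list("algorithm" = "NLOPT_LD_LBFGS",
             "xtol_rel"  = 1e-8,
             "maxeval"   = 1000)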
Some algorithms with equality constraints require the option local_opts, which contains a list with an algorithm and a termination condition for the local algorithm. See ?`nloptr-package` for an example.
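A sketch of that pattern, assuming an augmented Lagrangian outer algorithm (NLOPT_LD_AUGLAG) with a gradient-based local solver:

# sketch: outer augmented Lagrangian algorithm with a nested local solver
local_opts <- list("algorithm" = "NLOPT_LD_LBFGS",
                   "xtol_rel"  = 1e-7)
opts <- list("algorithm"  = "NLOPT_LD_AUGLAG",
             "xtol_rel"   = 1e-7,
             "local_opts" = local_opts)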
The option print_level controls how much output is shown during the optimization process. Possible values:

0 (default): no output
1: show iteration number and value of objective function
2: 1 + show value of (in)equalities
3: 2 + show value of controls
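For example, to follow the iteration log together with the constraint values (a sketch):

# sketch: report iteration number, objective value, and (in)equalities
opts <- list("algorithm"   = "NLOPT_LD_MMA",
             "xtol_rel"    = 1e-6,
             "print_level" = 2)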
The option check_derivatives (default = FALSE) can be used to run a check comparing the analytic gradients with finite difference approximations. The option check_derivatives_print ('all' (default), 'errors', 'none') controls the output of the derivative checker, if it is run, showing all comparisons, only those that resulted in an error, or none. The option check_derivatives_tol (default = 1e-04) determines when a difference between an analytic gradient and its finite difference approximation is flagged as an error.
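The three checker options can be combined in a single options list, for example (a sketch):

# sketch: enable the derivative checker, report only mismatches, and flag
# differences between analytic and finite-difference gradients above 1e-4
opts <- list("algorithm"               = "NLOPT_LD_MMA",
             "check_derivatives"       = TRUE,
             "check_derivatives_print" = "errors",
             "check_derivatives_tol"   = 1e-4)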
...: arguments that will be passed to the user-defined objective and constraints functions.
Steven G. Johnson and others (C code)
Jelmer Ypma (R interface)
NLopt addresses general nonlinear optimization problems of the form:

    min f(x)   over x in R^n
    s.t.  g(x) <= 0
          h(x)  = 0
          lb <= x <= ub

where f is the objective function and x represents the n optimization parameters (controls). The bounds lb and ub may be set to -Inf or Inf for controls that are unbounded below or above. One may also optionally have nonlinear inequality constraints g(x) (sometimes called a nonlinear programming problem) and/or nonlinear equality constraints h(x).
Steven G. Johnson, The NLopt nonlinear-optimization package, https://github.com/stevengj/nlopt
See also: nloptr.print.options, check.derivatives, optim, nlm, nlminb, Rsolnp::Rsolnp, Rsolnp::solnp
library('nloptr')
## Rosenbrock Banana function and gradient in separate functions
eval_f <- function(x) {
return(100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2)
}
eval_grad_f <- function(x) {
return(c(-400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
200 * (x[2] - x[1] * x[1])))
}
# initial values
x0 <- c(-1.2, 1)
opts <- list("algorithm"="NLOPT_LD_LBFGS",
"xtol_rel"=1.0e-8)
# solve Rosenbrock Banana function
res <- nloptr(x0=x0,
eval_f=eval_f,
eval_grad_f=eval_grad_f,
opts=opts)
print(res)
## Rosenbrock Banana function and gradient in one function
# this can be used to economize on calculations
eval_f_list <- function(x) {
return(
list(
"objective" = 100 * (x[2] - x[1] * x[1]) ^ 2 + (1 - x[1]) ^ 2,
"gradient" = c(-400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
200 * (x[2] - x[1] * x[1]))))
}
# solve Rosenbrock Banana function using an objective function that
# returns a list with the objective value and its gradient
res <- nloptr(x0=x0,
eval_f=eval_f_list,
opts=opts)
print(res)
# Example showing how to solve the problem from the NLopt tutorial.
#
# min sqrt(x2)
# s.t. x2 >= 0
# x2 >= (a1*x1 + b1)^3
# x2 >= (a2*x1 + b2)^3
# where
# a1 = 2, b1 = 0, a2 = -1, b2 = 1
#
# re-formulate constraints to be of form g(x) <= 0
# (a1*x1 + b1)^3 - x2 <= 0
# (a2*x1 + b2)^3 - x2 <= 0
library('nloptr')
# objective function
eval_f0 <- function(x, a, b) {
return(sqrt(x[2]))
}
# constraint function
eval_g0 <- function(x, a, b) {
return((a*x[1] + b)^3 - x[2])
}
# gradient of objective function
eval_grad_f0 <- function(x, a, b) {
return(c(0, .5/sqrt(x[2])))
}
# Jacobian of constraint
eval_jac_g0 <- function(x, a, b) {
return(rbind(c(3*a[1]*(a[1]*x[1] + b[1])^2, -1.0),
c(3*a[2]*(a[2]*x[1] + b[2])^2, -1.0)))
}
# functions with gradients in objective and constraint function
# this can be useful if the same calculations are needed for
# the function value and the gradient
eval_f1 <- function(x, a, b) {
return(list("objective"=sqrt(x[2]),
"gradient"=c(0,.5/sqrt(x[2]))))
}
eval_g1 <- function(x, a, b) {
return(list("constraints"=(a*x[1] + b)^3 - x[2],
"jacobian"=rbind(c(3*a[1]*(a[1]*x[1] + b[1])^2, -1.0),
c(3*a[2]*(a[2]*x[1] + b[2])^2, -1.0))))
}
# define parameters
a <- c(2,-1)
b <- c(0, 1)
# Solve using NLOPT_LD_MMA with gradient information supplied in separate
# function.
res0 <- nloptr(x0=c(1.234,5.678),
eval_f=eval_f0,
eval_grad_f=eval_grad_f0,
lb = c(-Inf,0),
ub = c(Inf,Inf),
eval_g_ineq = eval_g0,
eval_jac_g_ineq = eval_jac_g0,
opts = list("algorithm"="NLOPT_LD_MMA"),
a = a,
b = b)
print(res0)
# Solve using NLOPT_LN_COBYLA without gradient information
res1 <- nloptr(x0=c(1.234,5.678),
eval_f=eval_f0,
lb = c(-Inf, 0),
ub = c(Inf, Inf),
eval_g_ineq = eval_g0,
opts = list("algorithm" = "NLOPT_LN_COBYLA"),
a = a,
b = b)
print(res1)
# Solve using NLOPT_LD_MMA with gradient information in objective function
res2 <- nloptr(x0=c(1.234, 5.678),
eval_f=eval_f1,
lb = c(-Inf, 0),
ub = c(Inf, Inf),
eval_g_ineq = eval_g1,
opts = list("algorithm"="NLOPT_LD_MMA",
"check_derivatives" = TRUE),
a = a,
b = b)
print(res2)