Rsolnp (version 1.16)

solnp: Nonlinear optimization using the augmented Lagrange method.

Description

The solnp function is based on the solver by Yinyu Ye, which solves the general nonlinear programming problem: $$\min_x f(x)$$ $$\mathrm{s.t.} \quad g(x) = 0$$ $$l_h \leq h(x) \leq u_h$$ $$l_x \leq x \leq u_x$$ where $f(x)$, $g(x)$, and $h(x)$ are smooth functions.

Usage

solnp(pars, fun, eqfun = NULL, eqB = NULL, ineqfun = NULL, ineqLB = NULL, ineqUB = NULL, LB = NULL, UB = NULL, control = list(), ...)

Arguments

pars
The starting parameter vector.
fun
The objective function, which takes the parameter vector as its first argument and returns a single numeric value.
eqfun
(Optional) The equality constraint function returning the vector of evaluated equality constraints.
eqB
(Optional) The vector of values that the equality constraint function must equal.
ineqfun
(Optional) The inequality constraint function returning the vector of evaluated inequality constraints.
ineqLB
(Optional) The lower bound of the inequality constraints.
ineqUB
(Optional) The upper bound of the inequality constraints.
LB
(Optional) The lower bound on the parameters.
UB
(Optional) The upper bound on the parameters.
control
(Optional) The control list of optimization parameters. See below for details.
...
(Optional) Additional arguments passed to the objective, equality, and inequality functions. Note that these functions must all accept exactly the same set of arguments, whether or not each of them actually uses every argument (see the sketch below).
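
For example, the following minimal sketch (the extra argument scale and the function names are invented for illustration and are not part of the package interface) shows that an argument supplied through ... must be accepted by both the objective and the constraint function, even when only one of them uses it:

# Illustrative only: 'scale' is an extra argument passed through '...'
obj <- function(x, scale) {
	scale * sum(x^2)
}
eqc <- function(x, scale) {
	# 'scale' is unused here, but the argument must still be accepted
	sum(x)
}
sol <- solnp(pars = c(1, 2), fun = obj, eqfun = eqc, eqB = 1, scale = 10)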

Value

A list containing the following values:
pars
Optimal Parameters.
convergence
Indicates whether the solver has converged (0) or not (1 or 2).
values
Vector of objective function values recorded during the optimization, the last being the value at the optimum.
lagrange
The vector of Lagrange multipliers.
hessian
The Hessian of the augmented problem at the optimal solution.
ineqx0
The estimated optimal values of the slack variables used to transform the inequality constraints into equality constraints.
nfuneval
The number of function evaluations.
elapsed
The time taken to compute the solution.
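
As a quick sketch of how the returned list is typically inspected (here powell is assumed to hold the result of the Powell example shown in the Examples section below):

powell$pars                 # optimal parameters
powell$convergence          # 0 indicates convergence
tail(powell$values, 1)      # objective value at the optimum
powell$nfuneval             # number of function evaluations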

Control

rho
The penalty weighting scalar for infeasibility in the augmented objective function. The higher its value, the more heavily infeasibility is penalized and the more strongly the solution is pushed towards the feasible region (default 1). Very high values, however, may lead to numerical ill-conditioning or significantly slow down convergence.
outer.iter
Maximum number of major (outer) iterations (default 400).
inner.iter
Maximum number of minor (inner) iterations (default 800).
delta
Relative step size in forward difference evaluation (default 1.0e-7).
tol
Relative tolerance on feasibility and optimality (default 1e-8).
trace
Whether the value of the objective function and the parameters are printed at every major iteration (default 1).
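
For instance, selected control parameters can be overridden by passing a list; a minimal sketch with arbitrary values, assuming the Rsolnp package is loaded:

# Sketch: tighter tolerance, silent output, more major iterations
fn <- function(x) sum((x - 1)^2)
sol <- solnp(pars = c(0, 0), fun = fn, LB = c(-10, -10), UB = c(10, 10),
             control = list(tol = 1e-10, trace = 0, outer.iter = 600))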

Details

The solver belongs to the class of indirect solvers and implements the augmented Lagrange multiplier method with a sequential quadratic programming (SQP) interior algorithm.
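
For intuition, at each major iteration the solver minimizes an augmented Lagrangian of roughly the form $$L_\rho(x, \lambda) = f(x) - \lambda^\top g(x) + \frac{\rho}{2}\,\|g(x)\|^2$$ where the inequality constraints have first been converted to equalities using slack variables (see ineqx0 above), $\lambda$ is the vector of Lagrange multipliers returned in lagrange, and $\rho$ is the penalty parameter set by the control option rho. The exact sign conventions and scaling are implementation details and may differ from this sketch.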

References

Y. Ye, Interior Algorithms for Linear, Quadratic, and Linearly Constrained Non-Linear Programming, PhD Thesis, Department of EES, Stanford University, Stanford, CA.

Examples

# From the original paper by Y.Ye
# see the unit tests for more....
#---------------------------------------------------------------------------------
# POWELL problem: minimize exp(x1*x2*x3*x4*x5) subject to three
# equality constraints (right-hand sides given by eqB in the call below)
fn1 <- function(x) {
	exp(x[1] * x[2] * x[3] * x[4] * x[5])
}

eqn1 <- function(x) {
	z1 <- x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 + x[5]^2
	z2 <- x[2] * x[3] - 5 * x[4] * x[5]
	z3 <- x[1]^3 + x[2]^3
	return(c(z1, z2, z3))
}

x0 <- c(-2, 2, 2, -1, -1)
powell <- solnp(x0, fun = fn1, eqfun = eqn1, eqB = c(10, 0, -1))
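
#---------------------------------------------------------------------------------
# A further sketch (not from the original paper): a small problem with an
# inequality constraint and box bounds,
#   min x1^2 + x2^2   s.t.  1 <= x1 + x2 <= 2,  0 <= x <= 5
fn2 <- function(x) {
	x[1]^2 + x[2]^2
}

ineq2 <- function(x) {
	x[1] + x[2]
}

x0 <- c(1, 1)
boxed <- solnp(x0, fun = fn2, ineqfun = ineq2, ineqLB = 1, ineqUB = 2,
               LB = c(0, 0), UB = c(5, 5))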
