# constrOptim


##### Linearly Constrained Optimization

Minimise a function subject to linear inequality constraints using an adaptive barrier algorithm.

##### Keywords

optimize
##### Usage

```r
constrOptim(theta, f, grad, ui, ci, mu = 1e-04, control = list(),
            method = if (is.null(grad)) "Nelder-Mead" else "BFGS",
            outer.iterations = 100, outer.eps = 1e-05, ...,
            hessian = FALSE)
```
##### Arguments
theta

numeric (vector) starting value (of length $p$): must be in the feasible region.

f

function to minimise (see below).

grad

gradient of f (a function as well), or NULL (see below).

ui

constraint matrix ($k \times p$), see below.

ci

constraint vector of length $k$ (see below).

mu

(Small) tuning parameter.

control, method, hessian

passed to optim.

outer.iterations

maximum number of outer iterations of the barrier algorithm.

outer.eps

non-negative number; the relative convergence tolerance of the barrier algorithm.

...

Other named arguments to be passed to f and grad: these are passed through optim, so they should not match its argument names.

##### Details

The feasible region is defined by `ui %*% theta - ci >= 0`. The starting value must be in the interior of the feasible region, but the minimum may be on the boundary.
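For instance, a starting value can be checked for strict interiority before calling constrOptim by verifying that every constraint slack is positive. The constraint set below (x1 >= 0 and x1 + x2 <= 1) is an illustrative assumption, not taken from this page:

```r
# Feasibility check for a starting value (illustrative constraints).
# Constraints x1 >= 0 and x1 + x2 <= 1, written in the ui %*% theta - ci >= 0 form:
ui <- rbind(c( 1,  0),   # x1 >= 0
            c(-1, -1))   # -x1 - x2 >= -1, i.e. x1 + x2 <= 1
ci <- c(0, -1)
theta <- c(0.2, 0.3)
slack <- ui %*% theta - ci   # must be strictly positive for an interior point
all(slack > 0)               # TRUE: theta is a valid starting value
```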

A logarithmic barrier is added to enforce the constraints and then optim is called. The barrier function is chosen so that the objective function should decrease at each outer iteration. Minima in the interior of the feasible region are typically found quite quickly, but a substantial number of outer iterations may be needed for a minimum on the boundary.

The tuning parameter mu multiplies the barrier term. Its precise value is often relatively unimportant. As mu increases the augmented objective function becomes closer to the original objective function but also less smooth near the boundary of the feasible region.
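A simplified sketch of how mu enters a log-barrier augmented objective (constrOptim's actual adaptive barrier, following Lange, also depends on the previous iterate; this toy version only illustrates the role of mu and the behaviour at the boundary):

```r
# Simplified log-barrier augmented objective (illustrative, not constrOptim's exact form).
# The barrier term -mu * sum(log(slack)) diverges as theta approaches the boundary.
barrier_objective <- function(theta, f, ui, ci, mu) {
  slack <- ui %*% theta - ci
  if (any(slack <= 0)) return(Inf)   # outside the feasible region
  f(theta) - mu * sum(log(slack))
}

# Example: minimise (x - 2)^2 subject to x <= 1  (ui = -1, ci = -1)
f <- function(x) (x - 2)^2
val_inner <- barrier_objective(0.5, f, ui = matrix(-1), ci = -1, mu = 1e-4)  # finite
val_outer <- barrier_objective(1.5, f, ui = matrix(-1), ci = -1, mu = 1e-4)  # Inf
```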

Any optim method that permits infinite values for the objective function may be used (currently all but "L-BFGS-B"). When grad is NULL the default method is "Nelder-Mead"; otherwise it is "BFGS".
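A minimal self-contained sketch (toy quadratic objective with an assumed constraint set x <= 1, y <= 1, not from this page) showing an explicit derivative-free method choice and a minimum that lies on the boundary:

```r
# Minimise (x - 2)^2 + (y - 2)^2 subject to x <= 1 and y <= 1.
# The unconstrained optimum (2, 2) is infeasible, so the solution
# sits on the boundary at roughly (1, 1).
f  <- function(x) sum((x - 2)^2)
ui <- rbind(c(-1, 0),   # -x >= -1, i.e. x <= 1
            c(0, -1))   # -y >= -1, i.e. y <= 1
ci <- c(-1, -1)
res <- constrOptim(c(0, 0), f, grad = NULL, ui = ui, ci = ci,
                   method = "Nelder-Mead")   # derivative-free method
res$par   # close to (1, 1)
```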

##### References

K. Lange (2001). *Numerical Analysis for Statisticians*. Springer, p. 185ff.

##### See Also

optim, especially method = "L-BFGS-B", which does box-constrained optimisation.
##### Examples

```r
library(stats)

## from optim
fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {  ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 * (x2 - x1 * x1))
}
optim(c(-1.2, 1), fr, grr)

# Box-constraint, optimum on the boundary
constrOptim(c(-1.2, 0.9), fr, grr,
            ui = rbind(c(-1, 0), c(0, -1)), ci = c(-1, -1))

# x <= 0.9,  y - x > 0.1
constrOptim(c(.5, 0), fr, grr,
            ui = rbind(c(-1, 0), c(1, -1)), ci = c(-0.9, 0.1))

## Solves linear and quadratic programming problems
## but needs a feasible starting value

# from example(solve.QP) in 'quadprog'
# no derivative
fQP <- function(b) { -sum(c(0, 5, 0) * b) + 0.5 * sum(b * b) }
Amat <- matrix(c(-4, -3, 0, 2, 1, 0, 0, -2, 1), 3, 3)
bvec <- c(-8, 2, 0)
constrOptim(c(2, -1, -1), fQP, NULL, ui = t(Amat), ci = bvec)

# derivative
gQP <- function(b) { -c(0, 5, 0) + b }
constrOptim(c(2, -1, -1), fQP, gQP, ui = t(Amat), ci = bvec)

## Now with maximisation instead of minimisation
hQP <- function(b) { sum(c(0, 5, 0) * b) - 0.5 * sum(b * b) }
constrOptim(c(2, -1, -1), hQP, NULL, ui = t(Amat), ci = bvec,
            control = list(fnscale = -1))
```