
vectorialcalculus (version 1.0.5)

lagrange_check: Optimality check with Lagrange multipliers and bordered Hessian

Description

Evaluates first- and second-order optimality conditions for a smooth constrained optimization problem with equality constraints at a given candidate point. The function checks the Lagrange conditions, builds the bordered Hessian, and classifies the candidate as a minimum, a maximum, or indeterminate/saddle according to the signs of the leading principal minors of the bordered Hessian.

Usage

lagrange_check(f, g, x, h = NULL, tol = 1e-06)

Value

A list with the following components (a short inspection sketch follows the list):

  • ok_stationarity: numeric value of the norm of the stationarity residual. When constraints are present this is the norm of grad f(x) - t(J) %*% lambda, i.e. it measures how close the gradient of f is to the linear combination of the constraint gradients given by the Jacobian transpose and the Lagrange multipliers.

  • ok_feasible: maximum absolute value of the constraint vector g(x) at the candidate point.

  • lambda: numeric vector of length m with the Lagrange multipliers.

  • J: Jacobian matrix of the constraints at x, with dimension m x n.

  • H_f: Hessian matrix of the objective function at x, of size n x n.

  • H_g: list of Hessian matrices corresponding to each constraint function, each of size n x n.

  • H_L: Hessian matrix of the Lagrangian at x.

  • B: bordered Hessian matrix, of size (m + n) x (m + n) when constraints are present.

  • minors: data.frame with one row per leading principal minor. It contains the order of the minor, its signed value and the logarithm of the absolute determinant used in the computation.

  • clasificacion: character value equal to "minimo" (minimum), "maximo" (maximum) or "indeterminado" (indeterminate), according to the bordered Hessian criterion.

  • notas: character vector with diagnostic messages about the rank of the Jacobian, near singularity of matrices or any numerical issues detected.
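
As a quick illustration, the returned components can be inspected like this (a sketch reusing the first call from the Examples section; the component names are those documented above):

res <- lagrange_check(function(x) x[1]^2 + x[2]^2,
                      function(x) x[1] + x[2] - 1,
                      x = c(0.5, 0.5))
res$clasificacion   # "minimo" is expected at this candidate
res$lambda          # Lagrange multiplier(s), length m = 1 here
res$ok_feasible     # maximum absolute constraint violation, near 0
res$minors          # data.frame of leading principal minors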

Arguments

f

Objective function. It must have the form function(x) and return a single numeric value. The argument x is a numeric vector of length n.

g

Equality constraints. Either a single function of the form function(x) returning a numeric vector of length m, or a list of scalar functions of the form function(x), one per constraint.
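
For instance, the same pair of constraints can be supplied in either form (an illustrative sketch):

## two equivalent ways to pass g(x) = (x1 + x2 - 1, x1 - x3)
g_vec  <- function(x) c(x[1] + x[2] - 1, x[1] - x[3])
g_list <- list(function(x) x[1] + x[2] - 1,
               function(x) x[1] - x[3])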

x

Numeric vector giving the candidate point at which the optimality conditions are evaluated.

h

Step size for finite differences (a short example follows this list). It can be:

  • a single numeric value used for all coordinates,

  • a numeric vector of length n with one step per coordinate,

  • or NULL, in which case step sizes are chosen as 1e-4 * (1 + abs(x)) componentwise.
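
For example (an illustrative sketch; f1 and g1 are the functions from the first entry of the Examples section):

f1 <- function(x) x[1]^2 + x[2]^2
g1 <- function(x) x[1] + x[2] - 1
lagrange_check(f1, g1, x = c(0.5, 0.5), h = 1e-5)           # one step for all coordinates
lagrange_check(f1, g1, x = c(0.5, 0.5), h = c(1e-5, 2e-5))  # one step per coordinate
lagrange_check(f1, g1, x = c(0.5, 0.5))                     # automatic: 1e-4 * (1 + abs(x))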

tol

Numeric tolerance used to judge feasibility of the constraints, the effective rank of the Jacobian, near singularity of matrices and very small principal minors.

Details

Consider a problem of minimizing or maximizing a scalar function f(x) subject to m equality constraints collected in g(x) = 0, where x is a vector in R^n and g(x) is a vector in R^m.
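
In formulas (a sketch in standard notation; the sign convention for the multipliers in L is one common choice and is assumed here):

  \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x) = 0,
  \qquad L(x, \lambda) = f(x) - \lambda^\top g(x),

  \nabla_x L = \nabla f(x) - J(x)^\top \lambda = 0, \qquad g(x) = 0,
  \qquad B = \begin{pmatrix} 0_{m \times m} & J \\ J^\top & H_L \end{pmatrix} \in \mathbb{R}^{(m+n) \times (m+n)}.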

At the candidate point x, the function:

  • Approximates the gradient of f and the gradients of each constraint using second-order central finite differences.

  • Builds the Jacobian matrix J of the constraints (rows are gradients of each constraint).

  • Approximates the Hessian matrix of f and the Hessian of each constraint, also by central finite differences.

  • Forms the Hessian of the Lagrangian by combining the Hessian of f and the Hessians of the constraints with the Lagrange multipliers.

  • Builds the bordered Hessian matrix using the Jacobian and the Hessian of the Lagrangian.

  • Computes leading principal minors of the bordered Hessian and uses their signs to classify the candidate (a minimal numerical sketch of these steps follows the list).
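
The sketch below mirrors these steps for the first example in the Examples section. It is illustrative only: the helpers num_grad and num_hess are hypothetical (not part of the package), and the minus sign used for the multipliers in H_L is an assumed convention.

## second-order central differences for the gradient of a scalar function
num_grad <- function(fn, x, h) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h[i])
    (fn(x + e) - fn(x - e)) / (2 * h[i])
  })
}
## central differences for the Hessian of a scalar function
num_hess <- function(fn, x, h) {
  n <- length(x)
  H <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n) {
    ei <- replace(numeric(n), i, h[i])
    ej <- replace(numeric(n), j, h[j])
    H[i, j] <- (fn(x + ei + ej) - fn(x + ei - ej) -
                fn(x - ei + ej) + fn(x - ei - ej)) / (4 * h[i] * h[j])
  }
  (H + t(H)) / 2                                  # symmetrize
}

f  <- function(x) x[1]^2 + x[2]^2
g  <- function(x) x[1] + x[2] - 1
x0 <- c(0.5, 0.5)
h  <- 1e-4 * (1 + abs(x0))                        # automatic steps, as documented
J  <- rbind(num_grad(g, x0, h))                   # m x n Jacobian (rows = constraint gradients)
lambda <- qr.solve(t(J), num_grad(f, x0, h))      # least-squares multipliers from grad f = t(J) lambda
H_L <- num_hess(f, x0, h) - lambda[1] * num_hess(g, x0, h)  # Hessian of the Lagrangian
B   <- rbind(cbind(matrix(0, 1, 1), J),           # bordered Hessian: zero block and J on top,
             cbind(t(J), H_L))                    # t(J) and H_L below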

The classification is based on the standard bordered Hessian test applied to the last n - m leading principal minors of B (those of order 2m + 1 through m + n): after multiplying each of these minors by (-1)^m, if all resulting values are positive the point is classified as a minimum; if their signs alternate (negative, positive, negative, and so on), the point is classified as a maximum. In any other case, the result is reported as indeterminate.

All derivatives (gradients and Hessians) are computed numerically with second-order central finite differences. The step sizes can be given explicitly or chosen automatically from the scale of the point x.
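
As a concrete illustration of the sign rule, the bordered Hessian of the first example below (f(x, y) = x^2 + y^2 with x + y - 1 = 0 at (0.5, 0.5), so m = 1 and n = 2) can be written down by hand:

B <- rbind(c(0, 1, 1),    # border: gradient of the constraint
           c(1, 2, 0),    # H_L = 2 * diag(2), since the constraint is linear
           c(1, 0, 2))
det(B)              # -4: the single relevant minor (order m + n = 3)
(-1)^1 * det(B)     # +4 > 0, so the test classifies the point as a minimum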

Examples

## 1) Minimum with one constraint:
##    f(x,y) = x^2 + y^2,  g(x,y) = x + y - 1 = 0  -> (0.5, 0.5)
f1 <- function(x) x[1]^2 + x[2]^2
g1 <- function(x) x[1] + x[2] - 1
lagrange_check(f1, g1, x = c(0.5, 0.5))

## 2) Maximum with one constraint:
##    f(x,y) = -(x^2 + y^2),  g(x,y) = x + y - 1 = 0  -> (0.5, 0.5)
f2 <- function(x) -(x[1]^2 + x[2]^2)
lagrange_check(f2, g1, x = c(0.5, 0.5))

## 3) Two constraints in R^3 (minimum norm with two planes)
f3 <- function(x) sum(x^2)
g3 <- list(
  function(x) x[1] + x[2] + x[3] - 1,
  function(x) x[1] - x[3]
)
## Candidate solution: x1 = x3, 2*x1 + x2 = 1  ->  x = (1/3, 1/3, 1/3)
lagrange_check(f3, g3, x = c(1, 1, 1) / 3)
