stats (version 3.6.2)

optimize: One Dimensional Optimization

Description

The function optimize searches the interval from lower to upper for a minimum or maximum of the function f with respect to its first argument.

optimise is an alias for optimize.

Usage

optimize(f, interval, ..., lower = min(interval), upper = max(interval),
         maximum = FALSE,
         tol = .Machine$double.eps^0.25)
optimise(f, interval, ..., lower = min(interval), upper = max(interval),
         maximum = FALSE,
         tol = .Machine$double.eps^0.25)

Arguments

f

the function to be optimized. The function is either minimized or maximized over its first argument depending on the value of maximum.

interval

a vector containing the end-points of the interval to be searched for the minimum.

...

additional named or unnamed arguments to be passed to f.

lower

the lower end point of the interval to be searched.

upper

the upper end point of the interval to be searched.

maximum

logical. Should we maximize or minimize (the default)?

tol

the desired accuracy.

Value

A list with components minimum (or maximum) and objective, which give the location of the minimum (or maximum) and the value of the function at that point.
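
For example (a small illustration; the exact numbers returned depend on tol):

optimize(function(x) (x - 2)^2, c(0, 5))   # components $minimum and $objective
optimize(sin, c(0, pi), maximum = TRUE)    # components $maximum and $objective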

Details

Note that arguments after ... must be matched exactly.

The method used is a combination of golden section search and successive parabolic interpolation, and was designed for use with continuous functions. Convergence is never much slower than that for a Fibonacci search. If f has a continuous second derivative which is positive at the minimum (which is not at lower or upper), then convergence is superlinear, and usually of the order of about 1.324.
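As a rough illustration of the golden section part of that method, here is a simplified sketch of a pure golden-section search; golden_sketch is a made-up helper name, and optimize() itself uses Brent's algorithm, which interleaves these steps with parabolic interpolation:

golden_sketch <- function(f, lower, upper, tol = 1e-6) {
  phi <- (sqrt(5) - 1) / 2                 # golden section ratio, about 0.618
  a <- lower; b <- upper
  x1 <- a + (1 - phi) * (b - a); f1 <- f(x1)
  x2 <- a + phi * (b - a);       f2 <- f(x2)
  while (b - a > tol) {
    if (f1 < f2) {                         # minimum lies in [a, x2]: shrink from the right
      b <- x2; x2 <- x1; f2 <- f1
      x1 <- a + (1 - phi) * (b - a); f1 <- f(x1)
    } else {                               # minimum lies in [x1, b]: shrink from the left
      a <- x1; x1 <- x2; f1 <- f2
      x2 <- a + phi * (b - a);       f2 <- f(x2)
    }
  }
  (a + b) / 2
}
golden_sketch(function(x) (x - 1/3)^2, 0, 1)   # close to 1/3, as optimize() also finds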

The function f is never evaluated at two points closer together than \(\epsilon |x_0| + (tol/3)\), where \(\epsilon\) is approximately sqrt(.Machine$double.eps) and \(x_0\) is the final abscissa optimize()$minimum. If f is a unimodal function and the computed values of f are always unimodal when separated by at least \(\epsilon |x| + (tol/3)\), then \(x_0\) approximates the abscissa of the global minimum of f on the interval [lower, upper] with an error less than \(\epsilon |x_0| + tol\). If f is not unimodal, then optimize() may approximate a local, but perhaps non-global, minimum to the same accuracy.
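
To get a feel for the size of that spacing with the default tol (the value used for \(x_0\) here is just an arbitrary stand-in):

eps <- sqrt(.Machine$double.eps)   # about 1.49e-8
tol <- .Machine$double.eps^0.25    # default tol, about 1.22e-4
x0  <- 1/3                         # hypothetical final abscissa
eps * abs(x0) + tol/3              # minimum spacing between evaluation points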

The first evaluation of f is always at \(x_1 = a + (1-\phi)(b-a)\) where (a, b) = (lower, upper) and \(\phi = (\sqrt 5 - 1)/2 = 0.61803\ldots\) is the golden section ratio. Almost always, the second evaluation is at \(x_2 = a + \phi(b-a)\). Note that a local minimum inside \([x_1, x_2]\) will be found as the solution, even when f is constant there; see the last example.
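
A quick check of that formula against the print() example below, where (lower, upper) = (0, 10) — the first value printed should be close to:

phi <- (sqrt(5) - 1) / 2       # golden section ratio
0 + (1 - phi) * (10 - 0)       # about 3.82, the first point at which f is evaluated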

f will be called as f(x, ...) for a numeric value of x.

The argument passed to f has special semantics and used to be shared between calls. The function should not copy it.

References

Brent, R. (1973) Algorithms for Minimization without Derivatives. Englewood Cliffs, NJ: Prentice-Hall.

See Also

nlm, uniroot.

Examples

require(graphics)

f <- function (x, a) (x - a)^2
xmin <- optimize(f, c(0, 1), tol = 0.0001, a = 1/3)
xmin

## See where the function is evaluated:
optimize(function(x) x^2*(print(x)-1), lower = 0, upper = 10)

## "wrong" solution with unlucky interval and piecewise constant f():
f  <- function(x) ifelse(x > -1, ifelse(x < 4, exp(-1/abs(x - 1)), 10), 10)
fp <- function(x) { print(x); f(x) }

plot(f, -2, 5, ylim = 0:1, col = 2)
optimize(fp, c(-4, 20))   # doesn't see the minimum
optimize(fp, c(-7, 20))   # ok