Function minimization using the GNU Scientific Library (GSL), reference
manual section 35. The underlying GSL functions are declared in the
header file gsl_multimin.h.
Several algorithms for finding (local) minima of functions in one or
more variables are provided. All of the algorithms operate locally,
in the sense that they maintain a best guess and require the function
to be continuous. Apart from the Nelder-Mead algorithm, these
algorithms also use a derivative.
multimin(..., prec=0.0001)
multimin.init(x, f, df=NA, fdf=NA, method=NA, step.size=NA, tol=NA)
multimin.iterate(state)
multimin.restart(state)
multimin.fminimizer.size(state)
- ...: In multimin(), this argument list is passed on to multimin.init().
- x: A starting point. These algorithms are faster with better initial guesses.
- f: The function to minimize. This function must take a single
numeric vector as input, and output a single numeric value.
- df: The derivative of f. This is required for all algorithms except Nelder-Mead.
- fdf: A function that evaluates f and df simultaneously. This is
optional, and is only useful if simultaneous evaluation is faster than
evaluating them separately (see the sketch after this list).
- method: The algorithm to use; the available algorithms are described
in Details. (The examples below use "conjugate-fr", the Fletcher-Reeves
conjugate-gradient method.)
- step.size: This step size guides the algorithm in picking a good distance between points in its search.
- tol: This parameter is relevant for gradient-based methods. It controls how much the gradient should flatten out in each line search. More specifically, let $u(t) = f(x + st)$ be the function restricted to the search ray with direction $s$. A point $t$ is accepted once $|u'(t)| < tol \cdot |s| \cdot |\nabla f(x + st)|$, that is, once the gradient at the new point is orthogonal to the search direction to a relative accuracy of tol.
- prec: The stopping-rule precision parameter. For the derivative-based
methods, a solution is good enough if the norm of the gradient is smaller
than prec. For the non-derivative-based method, a solution is good enough
if the size of the simplex (as returned by multimin.fminimizer.size) is
smaller than prec.
- state: This stores all information relating to the progress of the optimization problem.
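As an illustration of the fdf argument, here is a sketch of a simultaneous evaluator for the Rosenbrock function used in the examples below. The return convention (a list with components f and df) is an assumption about this wrapper's interface, not documented behaviour; check the package source if fdf appears to be ignored.

        ## Sketch: evaluate f and df together, sharing work.
        ## ASSUMPTION: fdf returns list(f=..., df=...).
        fdf <- function(x) {
                common <- x[2] - x[1]^2   # subexpression shared by f and df
                list(f  = (1 - x[1])^2 + 100 * common^2,
                     df = c(-2 * (1 - x[1]) - 400 * x[1] * common,
                            200 * common))
        }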
There are two ways to call
multimin. The simple way is to call
multimin directly. A more complicated way is to call
multimin.init first, and then repeatedly call
multimin.iterate until the guess gets good enough. With the second approach,
multimin.restart can be used
to discard accumulated information (such as curvature information) if
that information turns out to be unhelpful. This is roughly
equivalent to calling
multimin.init again with the starting
point set to the current best guess.
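For instance, the fine-control loop might look like the following sketch, which reuses x0, f and df from the examples below. The stopping test mirrors the documented gradient-norm rule; the restart-every-50-iterations policy is purely illustrative.

        ## Sketch: iterate with an explicit stopping test and occasional restarts.
        state <- multimin.init(x0, f, df, method="conjugate-fr")
        for (i in 1:1000) {
                state <- multimin.iterate(state)
                if (sqrt(sum(state$df^2)) < 0.0001)  # gradient norm small enough?
                        break
                if (i %% 50 == 0)                    # discard stale curvature information
                        state <- multimin.restart(state)
        }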
All of the derivative-based methods consist of iterations that pick a descent direction, and conduct a line search for a better point along the ray in that direction from the current point. The Fletcher-Reeves and Polak-Ribiere conjugate-gradient algorithms maintain a vector that summarizes the curvature at the current point. These are useful for high-dimensional problems (e.g. more than 100 dimensions) because they do not use matrices, which become expensive to keep track of. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is better for low-dimensional problems, since it also maintains an approximation of the Hessian of the function, which gives better curvature information. The steepest-descent algorithm is a naive algorithm that does not use any curvature information. The Nelder-Mead algorithm uses no derivatives at all; it instead maintains a simplex of trial points.
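A derivative-free run might look like the following sketch. The method name "nelder-mead" is a guess (only "conjugate-fr" is confirmed by the examples below), so consult the package for the exact string; the stopping test uses multimin.fminimizer.size, which reports the simplex-size measure mentioned under prec.

        ## Sketch: Nelder-Mead, stopping on the simplex size.
        ## ASSUMPTION: the method string is spelled "nelder-mead".
        state <- multimin.init(x0, f, method="nelder-mead", step.size=0.1)
        while (multimin.fminimizer.size(state) > 0.0001)
                state <- multimin.iterate(state)
        print(state$x)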
All of these functions return a state variable, which consists of the following items:
- internal.state: Bureaucratic stuff for communicating with GSL.
- x: The current best guess of the optimal solution.
- f: The value of the function at the best guess.
- df: The derivative of the function at the best guess.
- is.fdf: TRUE if the algorithm is using a derivative.
- code: The GSL return code from the last iteration.
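For example, after a run the state can be inspected directly; this sketch reuses f and df from the examples below and touches only the fields listed above.

        ## Sketch: inspecting the returned state.
        state <- multimin(c(-1.2, 1), f, df)
        state$x     # current best guess of the minimizer
        state$f     # objective value at the best guess
        state$df    # gradient at the best guess
        state$code  # GSL return code from the last iteration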
The source code for the functions documented here is conditionalized
on WIN32; under Windows there is a slight memory leak.
optim and nlm are the standard optimization functions in
R. deriv and D are the standard symbolic differentiation
functions in R.
Ryacas provides more extensive differentiation
support using Yet Another Computer Algebra System.
numericDeriv is the standard numerical differentiation function
in R. GSL can also do numerical differentiation, but no one has
written an R interface yet.
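For instance, the gradient of the Rosenbrock function in the examples below could be generated with D rather than coded by hand; this sketch uses only base R.

        ## Sketch: build df by symbolic differentiation with D.
        e <- expression((1 - x1)^2 + 100 * (x2 - x1^2)^2)
        dx1 <- D(e[[1]], "x1")
        dx2 <- D(e[[1]], "x2")
        df <- function(x)
                c(eval(dx1, list(x1=x[1], x2=x[2])),
                  eval(dx2, list(x1=x[1], x2=x[2])))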
multimin requires the objective function to have a single
numeric-vector argument. unlist and relist are useful for
converting between this form and more convenient representations.
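For example, a structured parameter list can be flattened with unlist before the call and rebuilt with relist inside the objective. The list-valued objective f.list below is a hypothetical placeholder, and the call assumes the wrapper falls back to Nelder-Mead when df is omitted.

        ## Sketch: optimize over a named list via unlist/relist.
        skeleton <- list(a = 1, b = c(2, 3))
        f.list <- function(p) p$a^2 + sum((p$b - 1)^2)  # objective on the list form
        f.vec  <- function(x) f.list(relist(x, skeleton))
        state  <- multimin(unlist(skeleton), f.vec)     # ASSUMPTION: df may be omitted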
# The Rosenbrock function:
x0 <- c(-1.2, 1)
f <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
df <- function(x) c(-2*(1 - x[1]) + 100 * 2 * (x[2] - x[1]^2) * (-2*x[1]),
                    100 * 2 * (x[2] - x[1]^2))

# The simple way to call multimin.
state <- multimin(x0, f, df)
print(state$x)

# The fine-control way to call multimin.
state <- multimin.init(x0, f, df, method="conjugate-fr")
for (i in 1:200)
        state <- multimin.iterate(state)
print(state$x)