lsqnonlin
solves nonlinear least-squares problems, including nonlinear data-fitting
problems, through the Levenberg-Marquardt approach.
lsqnonneg
solves nonnegative least-squares (constrained) problems.
lsqnonlin(fun, x0, options = list(), ...)

lsqnonneg(C, d)
C, d: matrix C and vector d such that C x - d will be minimized with x >= 0.
lsqnonlin returns a list with the following components:

x: the point with least sum of squares value.
ssq: the sum of squares.
ng: norm of last gradient.
nh: norm of last step used.
mu: damping parameter of Levenberg-Marquardt.
neval: number of function evaluations.
errno: error number, corresponds to error message.
errmess: error message, i.e. reason for stopping.

lsqnonlin
computes the sum-of-squares of the vector-valued function
fun
, that is if $f(x) = (f_1(x), \ldots, f_n(x))$ then
$\min_x \|f(x)\|_2^2 = \min_x (f_1(x)^2 + \ldots + f_n(x)^2)$
will be minimized.
x = lsqnonlin(fun, x0)
starts at point x0
and finds a minimum
of the sum of squares of the functions described in fun. fun
shall
return a vector of values and not the sum of squares of the values.
(The algorithm implicitly sums and squares fun(x).)
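A minimal sketch of this convention (assuming the pracma package is attached; the residual function and starting point are made up for illustration): fun returns the vector of residuals, and lsqnonlin squares and sums them, returning the list components described above.
library(pracma)
# residuals r_i(x), NOT sum(r_i(x)^2); lsqnonlin squares and sums them itself
res <- function(x) c(x[1] - 1, 2*(x[2] + 3), x[1]*x[2])
sol <- lsqnonlin(res, c(5, 5))
sol$x       # minimizer
sol$ssq     # sum of squares, i.e. sum(res(sol$x)^2)
sol$errmess # reason for stopping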
options
is a list with the following components and defaults:

tau: used as starting value for the Marquardt parameter.
tolx: stopping parameter for step length.
tolg: stopping parameter for gradient.
maxeval: the maximum number of function evaluations.

Typical values for tau are 1e-6 ... 1e-3 ... 1, with small values for good
starting points and larger values for not so good or known bad starting
points.

lsqnonneg
solves the linear least-squares problem C x - d with x nonnegative by
transforming it with the `trick' x --> exp(x) into a nonlinear problem
and solving that with lsqnonlin.
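A minimal sketch of passing the options list described above, using the Rosenbrock residuals from the Examples below; the option values and the starting point c(-1.2, 1) are illustrative assumptions, not the package defaults:
fun <- function(x) c(10*(x[2] - x[1]^2), 1 - x[1])
opts <- list(tau = 1e-3,     # larger tau for a not-so-good starting point
             tolx = 1e-10,   # stop on small step length
             tolg = 1e-10,   # stop on small gradient
             maxeval = 1000) # cap on function evaluations
lsqnonlin(fun, c(-1.2, 1), options = opts)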
Fletcher, R., (1971). A Modified Marquardt Subroutine for Nonlinear Least Squares. Report AERE-R 6799, Harwell.
nlm
, nls
## Rosenbrock function as least-squares problem
x0 <- c(0, 0)
fun <- function(x) c(10*(x[2]-x[1]^2), 1-x[1])
lsqnonlin(fun, x0)
## Example from R-help
y <- c(5.5199668, 1.5234525, 3.3557000, 6.7211704, 7.4237955, 1.9703127,
4.3939336, -1.4380091, 3.2650180, 3.5760906, 0.2947972, 1.0569417)
x <- c(1, 0, 0, 4, 3, 5, 12, 10, 12, 100, 100, 100)
# Define target function as difference
f <- function(b)
b[1] * (exp((b[2] - x)/b[3]) * (1/b[3]))/(1 + exp((b[2] - x)/b[3]))^2 - y
x0 <- c(21.16322, 8.83669, 2.957765)
lsqnonlin(f, x0) # ssq 50.50144 at c(36.133144, 2.572373, 1.079811)
# nls() will break down
# nls(Y ~ a*(exp((b-X)/c)*(1/c))/(1 + exp((b-X)/c))^2,
# start=list(a=21.16322, b=8.83669, c=2.957765), algorithm = "plinear")
# Error: step factor 0.000488281 reduced below 'minFactor' of 0.000976563
## Least-squares data fitting
# Define fun(p, x), a model with parameters p evaluated at the data points x
lsqcurvefit <- function(fun, p0, xdata, ydata) {
    # residual function: model minus data; lsqnonlin squares and sums it
    fn <- function(p) fun(p, xdata) - ydata
    lsqnonlin(fn, p0)
}
## Lanczos1 data (artificial data)
# f(x) = 0.0951*exp(-x) + 0.8607*exp(-3*x) + 1.5576*exp(-5*x)
x <- linspace(0, 1.15, 24)
y <- c(2.51340000, 2.04433337, 1.66840444, 1.36641802, 1.12323249, 0.92688972,
0.76793386, 0.63887755, 0.53378353, 0.44793636, 0.37758479, 0.31973932,
0.27201308, 0.23249655, 0.19965895, 0.17227041, 0.14934057, 0.13007002,
0.11381193, 0.10004156, 0.08833209, 0.07833544, 0.06976694, 0.06239313)
p0 <- c(1.2, 0.3, 5.6, 5.5, 6.5, 7.6)
fp <- function(p, x) p[1]*exp(-p[2]*x) + p[3]*exp(-p[4]*x) + p[5]*exp(-p[6]*x)
lsqcurvefit(fp, p0, x, y)
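# For a quick check of the fit above (reusing fp, p0, x, y); this is only a
# sketch of inspecting the returned list, not additional package functionality
sol <- lsqcurvefit(fp, p0, x, y)
sol$ssq              # residual sum of squares at the fitted parameters
yfit <- fp(sol$x, x) # fitted values, e.g. for plotting against y
max(abs(yfit - y))   # largest pointwise deviation from the data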
## Example for lsqnonneg()
C <- matrix(c(0.0372, 0.2868,
0.6861, 0.7071,
0.6233, 0.6245,
0.6344, 0.6170), nrow = 4, ncol = 2, byrow = TRUE)
d <- c(0.8587, 0.1781, 0.0747, 0.8405)
sol <- lsqnonneg(C, d)
cbind(qr.solve(C, d), sol$x)
# -2.563884 5.515869e-08
# 3.111911 6.929003e-01
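As a hedged illustration of the exp-transform mentioned in the text above (not the package's internal code), the same nonnegative problem can be handed to lsqnonlin directly; since exp(u) can only approach zero, the first component comes back as a small positive value rather than an exact zero:
# minimize ||C exp(u) - d||^2 over u, then map back with x = exp(u)
fnn <- function(u) as.vector(C %*% exp(u) - d)
exp(lsqnonlin(fnn, c(0, 0))$x)  # roughly reproduces sol$x above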