Allow the user to tune the characteristics of the gsl_nls and gsl_nls_large nonlinear least squares algorithms.
gsl_nls_control(
maxiter = 100,
scale = "more",
solver = "qr",
fdtype = "forward",
factor_up = 2,
factor_down = 3,
avmax = 0.75,
h_df = sqrt(.Machine$double.eps),
h_fvv = 0.02,
xtol = sqrt(.Machine$double.eps),
ftol = sqrt(.Machine$double.eps),
gtol = sqrt(.Machine$double.eps),
mstart_n = 30,
mstart_p = 5,
mstart_q = mstart_n%/%10,
mstart_r = 4,
mstart_s = 2,
mstart_tol = 0.25,
mstart_maxiter = 10,
mstart_maxstart = 250,
mstart_minsp = 1,
irls_maxiter = 50,
irls_xtol = .Machine$double.eps^0.25,
...
)
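A minimal sketch of typical use, assuming simulated exponential-decay data, an illustrative model formula, and starting values that are not part of this documentation: the tuned control object is passed to gsl_nls() through its control argument.

## sketch: override a few control parameters and pass them to gsl_nls()
library(gslnls)

ctrl <- gsl_nls_control(maxiter = 500, scale = "levenberg", xtol = 1e-10)

## illustrative simulated data (not from this help page)
set.seed(1)
x <- seq(0, 5, length.out = 50)
y <- 2.5 * exp(-1.3 * x) + rnorm(50, sd = 0.05)

fit <- gsl_nls(
  fn = y ~ A * exp(-lam * x),          ## illustrative model formula
  data = data.frame(x = x, y = y),     ## observed data
  start = c(A = 1, lam = 1),           ## illustrative starting values
  control = ctrl                       ## tuned control parameters
)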
A list with exactly twenty-three components:
maxiter
scale
solver
fdtype
factor_up
factor_down
avmax
h_df
h_fvv
xtol
ftol
gtol
mstart_n
mstart_p
mstart_q
mstart_r
mstart_s
mstart_tol
mstart_maxiter
mstart_maxstart
mstart_minsp
irls_maxiter
irls_xtol
with meanings as explained under 'Arguments'.
maxiter: positive integer, termination occurs when the number of iterations reaches maxiter.
scale: character, scaling method or damping strategy determining the diagonal scaling matrix D. The following options are supported:
"more": Moré rescaling (default). This method makes the problem scale-invariant and has proven effective on a large class of problems.
"levenberg": Levenberg rescaling. This method has also proven effective on a large class of problems, but is not scale-invariant. It may perform better for problems susceptible to parameter evaporation (parameters going to infinity).
"marquardt": Marquardt rescaling. This method is scale-invariant, but it is generally considered inferior to both the Levenberg and Moré strategies.
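As a brief illustration (the choice shown is not prescriptive), the damping strategy is selected through the scale argument of gsl_nls_control():

## sketch: Levenberg damping for a problem prone to parameter evaporation
ctrl_lev <- gsl_nls_control(scale = "levenberg")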
solver: character, method used to solve the linear least squares system arising as a subproblem in each iteration. For large-scale problems fitted with gsl_nls_large, the Cholesky solver ("cholesky") is always selected and this parameter is not used. For least squares problems fitted with gsl_nls the following choices are supported:
"qr": QR decomposition of the Jacobian (default). This method produces reliable solutions when the Jacobian is rank deficient or near-singular, but requires more operations than the Cholesky method.
"cholesky": Cholesky decomposition of the Jacobian. This method is faster than the QR approach, but it is susceptible to numerical instabilities if the Jacobian matrix is rank deficient or near-singular.
"svd": SVD decomposition of the Jacobian. This method produces the most reliable solutions for ill-conditioned Jacobians, but is also the slowest.
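A short sketch of choosing a solver based on the expected conditioning of the Jacobian, reflecting the trade-off described above (not a recommendation):

## sketch: favor reliability for an ill-conditioned Jacobian
ctrl_svd <- gsl_nls_control(solver = "svd")
## sketch: favor speed when the Jacobian is well-conditioned
ctrl_chol <- gsl_nls_control(solver = "cholesky")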
fdtype: character, method used to numerically approximate the Jacobian and/or second-order derivatives when geodesic acceleration is used. Either "forward" for forward finite differencing or "center" for centered finite differencing. For least squares problems solved with gsl_nls_large, numerical approximation of the Jacobian matrix is not available and this parameter is only used to numerically approximate the second-order derivatives (if geodesic acceleration is used).
factor_up: numeric factor by which to increase the trust region radius when a search step is accepted. Values that are too large may destabilize the search; values that are too small slow it down. Defaults to 2.
factor_down: numeric factor by which to decrease the trust region radius when a search step is rejected. Values that are too large may destabilize the search; values that are too small slow it down. Defaults to 3.
avmax: numeric value, the ratio of the acceleration term to the velocity term when using geodesic acceleration to solve the nonlinear least squares problem. Any steps with a ratio larger than avmax are rejected. Defaults to 0.75. For problems that have difficulty converging, this threshold can be lowered.
h_df: numeric value, the step size for approximating the Jacobian matrix with finite differences. Defaults to sqrt(.Machine$double.eps).
h_fvv: numeric value, the step size for approximating the second directional derivative when geodesic acceleration is used to solve the nonlinear least squares problem. Defaults to 0.02. This is only used if no analytic second directional derivative (fvv) is specified in gsl_nls or gsl_nls_large.
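A sketch of tuning the geodesic acceleration settings (avmax, h_fvv, fdtype), assuming the "lmaccel" algorithm of gsl_nls and illustrative simulated data, neither of which is prescribed by this page:

## sketch: illustrative data for a geodesic-acceleration fit
set.seed(2)
dat <- data.frame(x = seq(0, 5, length.out = 40))
dat$y <- 2 * exp(-0.8 * dat$x) + rnorm(40, sd = 0.05)

ctrl_accel <- gsl_nls_control(
  avmax = 0.5,        ## reject steps with a larger acceleration/velocity ratio
  h_fvv = 0.01,       ## smaller step for the second directional derivative
  fdtype = "center"   ## centered differences for the second-order terms
)

fit_accel <- gsl_nls(
  fn = y ~ A * exp(-lam * x),
  data = dat,
  start = c(A = 1, lam = 1),
  algorithm = "lmaccel",   ## geodesic acceleration (assumed algorithm name)
  control = ctrl_accel
)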
xtol: numeric value, termination occurs when the relative change in parameters between iterations is <= xtol. A general guideline for selecting the step tolerance is to choose xtol = 10^(-d), where d is the number of accurate decimal digits desired in the parameters. Defaults to sqrt(.Machine$double.eps).
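For example, following the xtol = 10^(-d) guideline, requesting roughly eight accurate decimal digits in the parameters corresponds to:

## sketch: about 8 accurate decimal digits in the parameters
gsl_nls_control(xtol = 1e-8)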
ftol: numeric value, termination occurs when the relative change in the sum of squared residuals between iterations is <= ftol. Defaults to sqrt(.Machine$double.eps).
gtol: numeric value, termination occurs when the relative size of the gradient of the sum of squared residuals is <= gtol, indicating a local minimum. Defaults to sqrt(.Machine$double.eps).
mstart_n: positive integer, number of quasi-random points drawn in each major iteration, parameter N in Hickernell and Yuan (1997). Defaults to 30.
mstart_p: positive integer, number of iterations of inexpensive local search used to concentrate the sample, parameter p in Hickernell and Yuan (1997). Defaults to 5.
mstart_q: positive integer, number of points retained in the concentrated sample, parameter q in Hickernell and Yuan (1997). Defaults to mstart_n %/% 10.
mstart_r: positive integer, scaling factor of the number of stationary points that determines when the multi-start algorithm terminates, parameter r in Hickernell and Yuan (1997). Defaults to 4. If the starting ranges for one or more parameters are unbounded and updated dynamically, mstart_r is multiplied by a factor of 10 to avoid early termination.
mstart_s: positive integer, minimum number of iterations a point needs to be retained before starting an efficient local search, parameter s in Hickernell and Yuan (1997). Defaults to 2.
mstart_tol: numeric value, multiplicative tolerance (1 + mstart_tol) used as the criterion to start an efficient local search (epsilon in Algorithm 2.1, Hickernell and Yuan (1997)). Defaults to 0.25.
mstart_maxiter: positive integer, maximum number of iterations in the efficient local search algorithm (Algorithm B, Hickernell and Yuan (1997)). Defaults to 10.
mstart_maxstart: positive integer, minimum number of major iterations (Algorithm 2.1, Hickernell and Yuan (1997)) before the multi-start algorithm terminates. Defaults to 250.
mstart_minsp: positive integer, minimum number of detected stationary points before the multi-start algorithm terminates. Defaults to 1.
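A sketch of tightening the multi-start search, assuming a gslnls version in which gsl_nls() performs multi-start optimization when start is supplied as named parameter ranges; the model, data, and ranges below are illustrative:

## sketch: illustrative data for a multi-start fit
set.seed(3)
dat <- data.frame(x = seq(0, 5, length.out = 40))
dat$y <- 2 * exp(-0.8 * dat$x) + rnorm(40, sd = 0.05)

ctrl_ms <- gsl_nls_control(
  mstart_n = 50,          ## more quasi-random trial points per major iteration
  mstart_minsp = 2,       ## require two detected stationary points
  mstart_maxstart = 500   ## allow more major iterations before terminating
)

fit_ms <- gsl_nls(
  fn = y ~ A * exp(-lam * x),
  data = dat,
  start = list(A = c(0, 10), lam = c(0, 5)),  ## parameter ranges (assumed multi-start interface)
  control = ctrl_ms
)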
irls_maxiter: positive integer, maximum number of IRLS iterations. Defaults to 50. Only used in case of a non-default loss function (loss != "default") optimized by IRLS.
irls_xtol: numeric value, termination of the IRLS procedure occurs when the relative change in parameters between IRLS iterations is <= irls_xtol. Defaults to .Machine$double.eps^(1/4). Only used in case of a non-default loss function (loss != "default") optimized by IRLS.
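A sketch of a robust fit driven by IRLS, assuming a gslnls version in which gsl_nls() accepts a loss argument (e.g. "huber"); the model, data, and injected outliers are illustrative:

## sketch: illustrative data with a couple of outliers
set.seed(4)
dat <- data.frame(x = seq(0, 5, length.out = 40))
dat$y <- 2 * exp(-0.8 * dat$x) + rnorm(40, sd = 0.05)
dat$y[c(5, 20)] <- dat$y[c(5, 20)] + 1

ctrl_irls <- gsl_nls_control(irls_maxiter = 100, irls_xtol = 1e-6)

fit_rob <- gsl_nls(
  fn = y ~ A * exp(-lam * x),
  data = dat,
  start = c(A = 1, lam = 1),
  loss = "huber",        ## non-default loss, optimized by IRLS (assumed option)
  control = ctrl_irls
)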
...: any additional arguments (currently not used).
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.
Hickernell, F.J. and Yuan, Y. (1997) “A simple multistart algorithm for global optimization”, OR Transactions, Vol. 1 (2).
## default tuning parameters
gsl_nls_control()
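
## sketch (not from the original examples): override selected parameters
gsl_nls_control(maxiter = 500, solver = "svd", ftol = 1e-10)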