
ihs (version 1.0)

ihs.mle: Maximum Likelihood Estimation with the Inverse Hyperbolic Sine Distribution

Description

This function fits data to the inverse hyperbolic sine distribution using maximum likelihood estimation. It uses the maxLik package to perform the estimation.

Usage

ihs.mle(X.f, mu.f = mu ~ mu, sigma.f = sigma ~ sigma, lambda.f = lambda ~ lambda, k.f = k ~ k, data = parent.frame(), start, subset, method = 'BFGS', constraints = NULL, follow.on = FALSE, iterlim = 5000, ...)

Arguments

X.f
A formula specifying the data, or a function of the data with parameters, to be used in the maximisation procedure. X should be on the left-hand side, and the data (or the function of the data) to be used should be on the right-hand side.
mu.f, sigma.f, lambda.f, k.f
formulas including variables and parameters that specify the functional form of the parameters in the inverse hyperbolic sine log-likelihood function. mu, sigma, lambda, and k should appear on the left-hand sides of these formulas, respectively.
data
an optional data frame in which to evaluate the variables in formula and weights. Can also be a list or an environment.
start
a named list or named numeric vector of starting estimates for every parameter.
subset
an optional vector specifying a subset of observations to be used in the fitting process.
method
A list of the methods to be used. May include "NR" (for Newton-Raphson), "BFGS" (for Broyden-Fletcher-Goldfarb-Shanno), "BHHH" (for Berndt-Hall-Hall-Hausman), "SANN" (for Simulated ANNealing), "CG" (for Conjugate Gradients), or "NM" (for Nelder-Mead). Lower-case letters (such as "nr" for Newton-Raphson) are allowed. The default method is the "BFGS" method.
constraints
either NULL for unconstrained optimization or a list with two components. The components may be either eqA and eqB for equality-constrained optimization A %*% theta + B = 0, or ineqA and ineqB for inequality constraints A %*% theta + B > 0. More than one row in ineqA and ineqB corresponds to more than one linear constraint; in that case, all of them must hold simultaneously (equal zero for equality constraints, or be positive for inequality constraints).
follow.on
logical; if TRUE, and there are multiple methods, then the last set of parameters from one method is used as the starting set for the next.
iterlim
If provided as a vector of the same length as method, gives the maximum number of iterations or function values for the corresponding method. If a single number is provided, this will be used for all methods.
...
further arguments that are passed to the selected maximisation routine in the maxLik package. See below for a non-exhaustive list of some further arguments that can be used.
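As a hedged illustration of the constraints argument, the inequality form A %*% theta + B > 0 can be used to keep sigma and k strictly positive. The sketch below builds the matrices in base R for a parameter vector ordered (mu, sigma, lambda, k); that ordering is an assumption and should match the order of the parameters in start.

```r
# Hypothetical sketch: inequality constraints sigma > 0 and k > 0
# for a parameter vector theta = (mu, sigma, lambda, k).
# Each row of A selects one parameter; B shifts the bound.
A <- rbind(c(0, 1, 0, 0),   # selects sigma
           c(0, 0, 0, 1))   # selects k
B <- c(0, 0)                # bounds: sigma + 0 > 0, k + 0 > 0

theta <- c(mu = 0, sigma = 2, lambda = 0, k = 1)
all(A %*% theta + B > 0)    # TRUE: the starting values satisfy the constraints

constraints <- list(ineqA = A, ineqB = B)
# This list could then be passed as the `constraints` argument, e.g.:
# ihs.mle(X.f = X ~ x, start = as.list(theta), constraints = constraints)
```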

Value

If only one method is specified, ihs.mle returns a list of class "MLE". If multiple methods are given, ihs.mle returns a list of class "mult.MLE" with each component containing the results of each maximisation procedure. Each component is a list of class "MLE". A list of class "MLE" has the following components:
parameters
the names of the given parameters, taken from start.
maximum
fn value at maximum (the last calculated value if not converged).
estimate
estimated parameter value.
gradient
vector, last gradient value which was calculated. Should be close to 0 if normal convergence.
gradientObs
matrix of gradients at parameter value estimate evaluated at each observation (only if grad returns a matrix or grad is not specified and fn returns a vector).
hessian
Hessian at the maximum (the last calculated value if not converged).
code
return code:
  • 1 gradient close to zero (normal convergence).
  • 2 successive function values within tolerance limit (normal convergence).
  • 3 last step could not find higher value (probably not converged). This is related to line search step getting too small, usually because hitting the boundary of the parameter space. It may also be related to attempts to move to a wrong direction because of numerical errors. In some cases it can be helped by changing steptol.
  • 4 iteration limit exceeded.
  • 5 Infinite value.
  • 6 Infinite gradient.
  • 7 Infinite Hessian.
  • 8 Successive function values within relative tolerance limit (normal convergence).
  • 9 (BFGS) Hessian approximation cannot be improved because the gradient did not change. May be related to numerical approximation problems or a wrong analytic gradient.
  • 100 Initial value out of range.
message
a short message, describing code.
last.step
list describing the last unsuccessful step if code=3 with following components:
  • theta0 previous parameter value
  • f0 fn value at theta0
  • climb the movement vector to the maximum of the quadratic approximation
fixed
logical vector, which parameters are constants.
iterations
number of iterations.
type
character string, type of maximization.
constraints
A list, describing the constrained optimization (NULL if unconstrained). Includes the following components:
  • type type of constrained optimization
  • outer.iterations number of iterations in the constraints step
  • barrier.value value of the barrier function
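Before trusting the estimates, it can be useful to check the return code programmatically; per the table above, codes 1, 2, and 8 indicate normal convergence. The helper below is an illustrative sketch, not part of the package:

```r
# Hypothetical helper: treat codes 1, 2, and 8 as normal convergence,
# following the return-code table above.
converged <- function(code) code %in% c(1, 2, 8)

converged(1)    # TRUE  (gradient close to zero)
converged(4)    # FALSE (iteration limit exceeded)
# Usage sketch: if (!converged(result$code)) warning(result$message)
```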

Details

The parameter names are taken from start. If a parameter name or variable appears on the right-hand side of one of the formulas but is found in neither data nor start, an error is given. Below is a non-exhaustive list of further arguments that may be passed to the ihs.mle function (see the maxLik documentation for more details):
fixed
parameters that should be fixed at their starting values: a vector of character strings indicating the names of the fixed parameters (parameter names are taken from argument start). May not be used in BHHH algorithm.

print.level
a larger number prints more working information.

tol, reltol
the absolute and relative convergence tolerance (see optim). May not be used in BHHH algorithm.

finalHessian
how (and if) to calculate the final Hessian. Either FALSE (not calculate), TRUE (use analytic/numeric Hessian) or "bhhh"/"BHHH" for information equality approach.

parscale
A vector of scaling values for the parameters. Optimization is performed on 'par/parscale' and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. (see optim). May not be used in BHHH algorithm.

Note that not all arguments may be used for every maximisation algorithm at this time. If multiple methods are supplied (i.e. length(method) > 1), all arguments are employed for each method (except for iterlim, which is allowed to vary for different methods). If multiple methods are supplied, and some methods fail to initialise properly, a warning will be given. If every method fails to initialise, an error is given.
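For example, to hold lambda at its starting value, fixed could be given as a character vector whose entries match names in start. The check below is a base-R sketch of that requirement, not package code:

```r
# Sketch: fixed-parameter names must come from the names in start.
start <- list(mu = 0, sigma = 2, lambda = 0, k = 1)
fixed <- c("lambda")                 # parameter to hold at its start value
all(fixed %in% names(start))         # TRUE: a valid choice of fixed names

# Usage sketch (assuming x holds the data):
# ihs.mle(X.f = X ~ x, start = start, fixed = fixed, print.level = 1)
```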

References

Henningsen, Arne and Toomet, Ott (2011). "maxLik: A package for maximum likelihood estimation in R" Computational Statistics 26(3), 443-458. DOI 10.1007/s00180-010-0217-1.

See Also

The maxLik package and its documentation. ihs.mle simply uses functions from maxLik to maximise the inverse hyperbolic sine log-likelihood.

Examples

### Showing how to fit a simple vector of data to the inverse 
### hyperbolic sine distribution. 
require(graphics)
require(stats)
require(ihs)
set.seed(123456)
x = rnorm(100)
X.f = X ~ x
start = list(mu = 0, sigma = 2, lambda = 0, k = 1)
result = ihs.mle(X.f = X.f, start = start)
sumResult = summary(result)
print(result)
coef(result)
print(sumResult)

### Comparing the fit
xvals = seq(-5, 5, by = 0.05)
coefs = coef(result)
mu = coefs[1]
sigma = coefs[2]
lambda = coefs[3]
k = coefs[4]
plot(xvals, dnorm(xvals), type = "l", col = "blue", ylab = "density")
lines(xvals, dihs(xvals, mu = mu, sigma = sigma, 
lambda = lambda, k = k), col = "red")
legend("topright", legend = c("standard normal", "fitted IHS"),
col = c("blue", "red"), lty = 1)
