subbolafit
Returns the parameters, standard errors, negative log-likelihood and covariance matrix of the (less) asymmetric power exponential fitted to a sample. The main difference from subboafit is that \(a_l = a_r = a\). The process can execute two steps, depending on the level of accuracy required. See the details below.
subbolafit(
data,
verb = 0L,
method = 2L,
interv_step = 10L,
provided_m_ = NULL,
par = as.numeric(c(2, 2, 1, 0)),
g_opt_par = as.numeric(c(0.1, 0.01, 100, 0.001, 1e-05, 2)),
itv_opt_par = as.numeric(c(0.01, 0.001, 200, 0.001, 1e-05, 2))
)
a list containing the following items:
"dt" - data set containing the parameter estimates and standard deviations.
"log-likelihood" - negative log-likelihood value.
data (NumericVector) - the sample used to fit the distribution.
verb (int) - the level of verbosity. Select one of:
0 just the final result (default)
1 headings and summary table
2 results of intermediate steps
3 internals of intermediate steps
4+ details of the optimization routine
method (int) - the steps that should be used to estimate the parameters.
0 no optimization performed - just returns the log-likelihood from the initial guess.
1 global optimization not considering the lack of smoothness in m
2 interval optimization taking the non-smoothness in m into consideration (default; only runs if provided_m_ is NULL)
interv_step (int) - the number of intervals to be explored after the last minimum was found in the interval optimization. Default is 10.
provided_m_ (NumericVector) - if NULL (default), the m parameter is estimated by the routine. If numeric, the estimation fixes m at the given value.
par (NumericVector) - vector containing the initial guess for the parameters bl, br, a and m, respectively. Default values are c(2, 2, 1, 0).
g_opt_par (NumericVector) - vector containing the global optimization parameters. The optimization parameters are:
step - (num) initial step size of the searching algorithm.
tol - (num) line search tolerance.
iter - (int) maximum number of iterations.
eps - (num) gradient tolerance. The stopping criterion is \(||\text{gradient}||<\text{eps}\).
msize - (num) simplex maximum size. The stopping criterion is \(||\text{max edge}||<\text{msize}\).
algo - (int) the optimization algorithm used:
0 Fletcher-Reeves
1 Polak-Ribiere
2 Broyden-Fletcher-Goldfarb-Shanno
3 Steepest descent
4 Nelder-Mead simplex
5 Broyden-Fletcher-Goldfarb-Shanno ver.2
Details for each algorithm are available in the 'GSL' manual. Default values are c(.1, 1e-2, 100, 1e-3, 1e-5, 2).
itv_opt_par (NumericVector) - vector containing the interval optimization parameters. The fields are the same as for the global optimization. Default values are c(.01, 1e-3, 200, 1e-3, 1e-5, 2). See the sketch after this argument list for an illustration of setting these arguments.
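As a purely illustrative sketch of how the arguments above fit together (the values chosen here are arbitrary, and rpower is assumed to be available as in the example further below), a call fixing the location parameter and customizing the global optimization could look like:

set.seed(123)
x <- rpower(1000, 1, 2)                          # simulated sample, as in the Examples section
fit_fixed_m <- subbolafit(
  x,
  verb        = 1L,                              # print headings and the summary table
  method      = 1L,                              # global optimization step only
  provided_m_ = 0,                               # fix the location parameter m at 0
  par         = c(2, 2, 1, 0),                   # initial guess for bl, br, a and m
  g_opt_par   = c(0.1, 1e-2, 500, 1e-3, 1e-5, 4) # algo = 4: Nelder-Mead simplex, more iterations
)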
The LAPE is an exponential power distribution controlled by four parameters, with density: $$ f(x;a,b_l,b_r,m) = \frac{1}{A} e^{- \frac{1}{b_l} |\frac{x-m}{a}|^{b_l} }, x < m $$ $$ f(x;a,b_l,b_r,m) = \frac{1}{A} e^{- \frac{1}{b_r} |\frac{x-m}{a}|^{b_r} }, x > m $$ with: $$A = ab_l^{1/b_l}\Gamma(1+1/b_l) + ab_r^{1/b_r}\Gamma(1+1/b_r)$$ where \(l\) and \(r\) denote the left and right tails, \(a\) is a scale parameter, \(b_l\) and \(b_r\) control the tails (lower values give fatter tails), and \(m\) is a location parameter.

Because of its lack of symmetry, and unlike the Subbotin distribution, there are no simple equations available for the method of moments, so we start directly by minimizing the negative log-likelihood. This global optimization is executed without restricting any parameter. If required (the default), after the global optimization finishes, the method iterates over the intervals between consecutive observations, applying the same algorithm used in the global optimization. This second step is needed because of the lack of smoothness in the \(m\) parameter: intervals must be used since the likelihood function has no derivative whenever \(m\) equals a sample observation. Because of its cost, this search is capped at interv_step intervals (default 10) from the last minimum observed. Details on the method are available in the package vignette.
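A minimal sketch of this density as a plain R function may help make the role of each parameter concrete (dlape_sketch is a hypothetical name, not a function exported by the package):

# Hypothetical helper: evaluates the LAPE density exactly as written above.
dlape_sketch <- function(x, a = 1, bl = 2, br = 2, m = 0) {
  A <- a * bl^(1 / bl) * gamma(1 + 1 / bl) +
       a * br^(1 / br) * gamma(1 + 1 / br)
  b <- ifelse(x < m, bl, br)            # left tail uses bl, right tail uses br
  exp(-abs((x - m) / a)^b / b) / A
}

# Numerical check that the density integrates to one:
# integrate(dlape_sketch, -Inf, Inf, bl = 1.5, br = 3)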
sample_subbo <- rpower(1000, 1, 2)
subbolafit(sample_subbo)
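The returned list can then be inspected through the element names documented in the value section above (assuming the names "dt" and "log-likelihood" exactly as listed there):

fit <- subbolafit(sample_subbo)
fit$dt                    # parameter estimates and standard deviations
fit[["log-likelihood"]]   # negative log-likelihood of the fit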