Attempts to find the best \(g\) and \(b\) parameters consistent with the first and second moments of the supplied data.
mommb(x, maxit = 100L, tol = .Machine$double.eps ^ 0.5, na.rm = TRUE)
Returns a list containing: the fitted \(g\) parameter; the fitted \(b\) parameter; the number of iterations used; and the squared error between the empirical mean and the theoretical mean given the fitted \(g\) and \(b\).
x: numeric; vector of observations between 0 and 1.
maxit: integer; maximum number of iterations.
tol: numeric; tolerance. If too tight, the algorithm may fail. Defaults to the square root of .Machine$double.eps, or roughly \(1.49\times 10^{-8}\).
na.rm: logical; if TRUE (default), NAs are removed. If FALSE and there are NAs, the algorithm will stop with an error.
Avraham Adler <Avraham.Adler@gmail.com>
The algorithm is based on sections 4.1 and 4.2 of Bernegger (1997). With rare exceptions, the fitted \(g\) and \(b\) parameters must conform to: $$\mu = \frac{\ln(gb)(1-b)}{\ln(b)(1-gb)}$$
subject to:
$$\mu^2 \le E[x^2] \le \mu, \qquad p \le E[x^2]$$
where \(\mu\) is the “true” first moment, \(\mu^2\) its square, and \(E[x^2]\) is the empirical second moment.
The algorithm starts with the estimate \(p = E[x^2]\) as an upper bound. However, in step 2 of section 4.2, the \(p\) component is estimated as the difference between the empirical second moment and the numerical integration of \(x^2 f(x)\)---\(p = E[x^2] - \int x^2 f(x) dx\)---as seen in equation (4.3). This is converted to \(g\) by reciprocation, and convergence is tested by the difference between this new \(g\) and its prior value. If the new \(p \le 0\), the algorithm attempts to restart with a larger \(g\), i.e. a smaller \(p\). In such cases the algorithm tends to fail to converge.
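As an illustration of the moment relationships above (not code from the package itself), the following minimal R sketch evaluates the theoretical mean implied by given \(g\) and \(b\) and checks the moment constraints on toy data; `mb_mean` is a hypothetical helper name:

```r
# Hypothetical helper: theoretical mean implied by g and b, per
# mu = ln(gb) * (1 - b) / (ln(b) * (1 - gb))
mb_mean <- function(g, b) {
  log(g * b) * (1 - b) / (log(b) * (1 - g * b))
}

# Implied mean for g = 25, b = 4 (the parameters used in the example below)
mu <- mb_mean(25, 4)

# The constraints mu^2 <= E[x^2] <= mu hold for any data on [0, 1]:
# the first by Jensen's inequality, the second because x^2 <= x there.
x  <- runif(500) ^ 3          # toy observations in (0, 1)
m1 <- mean(x)
m2 <- mean(x * x)
ok <- (m1 ^ 2 <= m2) && (m2 <= m1)
```

Fitting reverses this direction: `mommb` searches for the \(g\) and \(b\) whose implied moments match `m1` and `m2`.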
Bernegger, S. (1997) The Swiss Re Exposure Curves and the MBBEFD Distribution Class. ASTIN Bulletin 27(1), 99--111. doi:10.2143/AST.27.1.563208
rmb for random variate generation.
set.seed(85L)
x <- rmb(1000, 25, 4)
mommb(x)