lbckdengpd(x, lambda = NULL, u = 0, sigmau = 1, xi = 0,
  phiu = TRUE, bcmethod = "simple", proper = TRUE,
  nn = "jf96", offset = 0, xmax = Inf, log = TRUE)

lbckdengpd gives the cross-validation (log-)likelihood and
nlbckdengpd gives the
negative cross-validation log-likelihood.
They are designed to be used for MLE in
fbckdengpd but are
available for wider usage, e.g. constructing your own
extreme value mixture models.
See fbckden,
fkden and
fgpd for full details.
Cross-validation likelihood is used for the boundary
corrected kernel density component, but standard
likelihood is used for the GPD component. The
cross-validation likelihood for the KDE is obtained by
leaving each point out in turn, evaluating the KDE at the
point left out: $$L(\lambda) = \prod_{i=1}^{n_b}
\hat{f}_{-i}(x_i)$$ where $$\hat{f}_{-i}(x_i) =
\frac{1}{(n-1)\lambda} \sum_{j=1: j\ne i}^{n} K\left(\frac{x_i
- x_j}{\lambda}\right)$$ is the boundary corrected KDE obtained
when the $i$th datapoint is dropped out and then
evaluated at the dropped datapoint $x_i$. Notice
that the boundary corrected KDE sum is indexed over all
datapoints ($j=1, ..., n$, except datapoint $i$)
whether they are below the threshold or in the upper
tail. But the likelihood product is evaluated only for
those data below the threshold ($i=1, ..., n_b$). So
the $j = n_b+1, ..., n$ datapoints are extra kernel
centres from the data in the upper tails which are used
in the boundary corrected KDE but the likelihood is not
evaluated there.
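The formula above can be sketched as follows. This is an illustrative reimplementation in Python with a plain Gaussian kernel and no boundary correction or GPD component (so it is a simplification of what lbckdengpd actually computes): the likelihood product runs only over the points below the threshold u, but every datapoint, including those in the upper tail, contributes as a kernel centre.

```python
import math

def cv_log_likelihood(x, u, lam):
    """Leave-one-out cross-validation log-likelihood for an
    (uncorrected) Gaussian KDE with bandwidth lam. Evaluated only at
    the n_b points below the threshold u, but all n points act as
    kernel centres, matching the indexing in the formula above."""
    n = len(x)
    idx_below = [i for i in range(n) if x[i] <= u]  # i = 1, ..., n_b
    loglik = 0.0
    for i in idx_below:
        # hat{f}_{-i}(x_i): KDE with the i-th datapoint left out,
        # evaluated at that dropped datapoint
        s = sum(math.exp(-0.5 * ((x[i] - x[j]) / lam) ** 2)
                / math.sqrt(2 * math.pi)
                for j in range(n) if j != i)
        loglik += math.log(s / ((n - 1) * lam))
    return loglik
```

For example, with two points and a threshold above both, each leave-one-out density is a single standard normal kernel evaluated at the other point.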
Log-likelihood calculations are carried out in
lbckdengpd, which takes the
bandwidth in the same form as the distribution functions. The
negative log-likelihood function nlbckdengpd is a wrapper for
lbckdengpd, designed
for use in optimisation (e.g. the
parameters are given as a vector as the first input).
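This wrapper pattern can be sketched as follows. The code is an illustration in Python, not the evmix source: make_negative_loglik and toy_loglik are hypothetical names, and the toy log-likelihood merely stands in for lbckdengpd. The point is the calling convention: the negative log-likelihood takes the parameters as a single vector as its first input, which is what numerical optimisers expect.

```python
def make_negative_loglik(loglik_fn, x):
    """Turn loglik_fn(x, lam, u, sigmau, xi) into nll(pvec), a negated
    log-likelihood taking the parameter vector as its first argument,
    the form a numerical optimiser can minimise directly."""
    def nll(pvec):
        lam, u, sigmau, xi = pvec  # unpack in the documented order
        return -loglik_fn(x, lam, u, sigmau, xi)
    return nll

# toy log-likelihood standing in for lbckdengpd (not the real model)
def toy_loglik(x, lam, u, sigmau, xi):
    return -sum((v - u) ** 2 for v in x) / (2 * sigmau ** 2)

nll = make_negative_loglik(toy_loglik, [1.0, 2.0, 3.0])
```

Minimising nll over the parameter vector is then equivalent to maximising the original log-likelihood.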
The function lbckdengpd
carries out the calculations for the log-likelihood
directly, which can be exponentiated to give the actual
likelihood using log = FALSE.

See also: bckden,
kden,
gpd and
density.