lkdengpdcon(x, lambda = NULL, u = 0, xi = 0, phiu = TRUE, log = TRUE)
nlkdengpdcon(pvector, x, phiu = TRUE, finitelik = FALSE)
pvector: vector of initial values of the mixture model parameters (lambda, u, xi) or NULL
lkdengpdcon gives the cross-validation (log-)likelihood and
nlkdengpdcon gives the negative cross-validation log-likelihood.
They are designed to be used for MLE in fkdengpdcon but are
available for wider usage, e.g. constructing your own
extreme value mixture models.
See fkdengpd, fkden and fgpd for full details.
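As a rough illustration (a minimal sketch, not taken from the package examples), the functions can be called directly with the arguments shown in the usage above; the trial values and the assumed pvector order (lambda, u, xi) are illustrative assumptions only.

library(evmix)

set.seed(1)
x <- rnorm(1000)
u <- quantile(x, 0.9)   # trial threshold at the 90% sample quantile

# cross-validation log-likelihood at trial bandwidth, threshold and shape
lkdengpdcon(x, lambda = 0.5, u = u, xi = 0.1)

# negative cross-validation log-likelihood with parameters as a vector
# (assumed order: lambda, u, xi)
nlkdengpdcon(c(0.5, u, 0.1), x)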
Cross-validation likelihood is used for the kernel density
component, but standard likelihood is used for the GPD
component. The cross-validation likelihood for the KDE is
obtained by leaving each point out in turn, evaluating
the KDE at the point left out:
$$L(\lambda) = \prod_{i=1}^{n_b} \hat{f}_{-i}(x_i)$$ where
$$\hat{f}_{-i}(x_i) = \frac{1}{(n-1)\lambda}
\sum_{j=1: j\ne i}^{n} K\left(\frac{x_i - x_j}{\lambda}\right)$$ is
the KDE obtained when the $i$th datapoint is dropped
out and then evaluated at that dropped datapoint
$x_i$. Notice that the KDE sum is indexed over all
datapoints ($j=1, ..., n$, except datapoint $i$)
whether they are below the threshold or in the upper
tail. But the likelihood product is evaluated only for
those data below the threshold ($i=1, ..., n_b$). So
the $j = n_b+1, ..., n$ datapoints are extra kernel
centres from the data in the upper tail, which are used
in the KDE but at which the likelihood is not evaluated.
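To make the formula concrete, the following is a minimal base-R sketch (not the package's implementation) of the leave-one-out cross-validation log-likelihood for the KDE component, assuming a Gaussian kernel; the function name cv_kde_loglik is hypothetical.

# leave-one-out cross-validation log-likelihood for the KDE component,
# with a Gaussian kernel (illustrative sketch only)
cv_kde_loglik <- function(lambda, x, u) {
  n <- length(x)
  below <- which(x <= u)   # i = 1, ..., n_b: points below the threshold
  loglik <- 0
  for (i in below) {
    # leave-one-out KDE at x[i]: sum over all other kernel centres j != i,
    # including those above the threshold
    fhat <- sum(dnorm((x[i] - x[-i]) / lambda)) / ((n - 1) * lambda)
    loglik <- loglik + log(fhat)
  }
  loglik
}

set.seed(1)
x <- rnorm(200)
cv_kde_loglik(lambda = 0.4, x = x, u = quantile(x, 0.9))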
Log-likelihood calculations are carried out in lkdengpdcon,
which takes the bandwidth in the same form as the distribution
functions. The negative log-likelihood nlkdengpdcon is a wrapper
for lkdengpdcon, designed to make it usable for optimisation
(e.g. the parameters are given as a vector in the first input).
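For instance (a hedged sketch, not the fitting routine used by fkdengpdcon itself), the vectorised first argument lets nlkdengpdcon be handed straight to optim; the starting values and the assumed pvector order (lambda, u, xi) are illustrative assumptions.

library(evmix)

set.seed(1)
x <- rnorm(1000)
init <- c(0.5, quantile(x, 0.9), 0.1)   # assumed order: lambda, u, xi

# nlkdengpdcon is the objective function passed to optim;
# finitelik = TRUE keeps the objective finite for invalid parameter values
fit <- optim(par = init, fn = nlkdengpdcon, x = x, finitelik = TRUE,
             method = "BFGS")
fit$par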
The function lkdengpdcon carries out the calculations for the
log-likelihood directly, which can be exponentiated to give the
actual likelihood using log = FALSE.

See Also: kdengpd, kden, gpd and density.
Other kdengpdcon: dkdengpdcon, fkdengpdcon, kdengpdcon, pkdengpdcon, qkdengpdcon, rkdengpdcon