lgkg(x, lambda = NULL, ul = as.vector(quantile(x, 0.1)),
sigmaul = 1, xil = 0, phiul = TRUE,
ur = as.vector(quantile(x, 0.9)), sigmaur = 1, xir = 0,
phiur = TRUE, log = TRUE)
nlgkg(pvector, x, phiul = TRUE, phiur = TRUE,
finitelik = FALSE)
pvector: vector of initial values of parameters (lambda, ul, sigmaul, xil, ur, sigmaur, xir) or NULL
These likelihood functions use the cross-validation approach of fkden within the two-tailed mixture model fitted by fgkg.
They are designed to be used for MLE in fgkg but are available for wider usage, e.g. constructing your own extreme value mixture models. See fkdengpd, fkden and fgpd for full details.
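As a minimal sketch of such direct usage (assuming the default Gaussian kernel and that pvector follows the ordering lambda, ul, sigmaul, xil, ur, sigmaur, xir), the likelihood functions can be evaluated on simulated data:

# Illustrative sketch: evaluate the likelihood functions directly
library(evmix)
set.seed(1)
x <- rnorm(1000)
ul <- as.vector(quantile(x, 0.1))
ur <- as.vector(quantile(x, 0.9))

# log-likelihood at a trial bandwidth and GPD tail parameters
lgkg(x, lambda = 0.3, ul = ul, sigmaul = 1, xil = 0,
     ur = ur, sigmaur = 1, xir = 0)

# the same evaluation, negated, via the negative log-likelihood wrapper
# (assumed pvector ordering: lambda, ul, sigmaul, xil, ur, sigmaur, xir)
nlgkg(c(0.3, ul, 1, 0, ur, 1, 0), x)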
Cross-validation likelihood is used for the kernel density component, but standard likelihood is used for the GPD components. The cross-validation likelihood for the KDE is obtained by leaving each point out in turn and evaluating the KDE at the point left out:
$$L(\lambda) \propto \prod_{i=1}^{n_b} \hat{f}_{-i}(x_i)$$
where
$$\hat{f}_{-i}(x_i) = \frac{1}{(n-1)\lambda} \sum_{j=1; j\ne i}^{n} K\left(\frac{x_i - x_j}{\lambda}\right)$$
is the KDE obtained when the $i$th datapoint is dropped out and then evaluated at that dropped datapoint $x_i$. Notice that the KDE sum is indexed over all datapoints ($j = 1, ..., n$, except datapoint $i$), whether they are between the thresholds or in the tails. But the likelihood product is evaluated only for those data between the thresholds ($i = 1, ..., n_b$). So the $j = n_b + 1, ..., n$ datapoints are extra kernel centres from the data in the tails, which are used in the KDE but at which the likelihood is not evaluated.
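To make the formula concrete, a hand-rolled version of the cross-validation KDE term with a normal kernel might look as follows (an illustrative sketch in base R, not the package's internal code):

# Illustrative only: leave-one-out KDE log-likelihood for the bulk data
set.seed(1)
x <- rnorm(200)
lambda <- 0.4
ul <- as.vector(quantile(x, 0.1))
ur <- as.vector(quantile(x, 0.9))
bulk <- which(x >= ul & x <= ur)   # the i = 1, ..., n_b bulk datapoints
n <- length(x)

cv.loglik <- sum(sapply(bulk, function(i) {
  # KDE with the ith point dropped, evaluated at x[i]; the sum runs over
  # all other datapoints, so tail data act as extra kernel centres
  log(sum(dnorm((x[i] - x[-i]) / lambda)) / ((n - 1) * lambda))
}))
cv.loglik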
Log-likelihood calculations are carried out in lgkg, which takes the bandwidth in the same form as the distribution functions. The negative log-likelihood nlgkg is a wrapper for lgkg, designed to make it usable for optimisation (e.g. the parameters are given as a vector in the first input).
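As an illustrative sketch (assuming the pvector ordering above; in practice the fitting function fgkg carries out this optimisation for you), nlgkg can be handed straight to optim:

# Illustrative sketch: crude MLE via optim using nlgkg
library(evmix)
set.seed(1)
x <- rnorm(1000)
init <- c(0.3, as.vector(quantile(x, 0.1)), 1, 0.1,
          as.vector(quantile(x, 0.9)), 1, 0.1)
fit <- optim(init, nlgkg, x = x, finitelik = TRUE, method = "BFGS")
fit$par   # lambda, ul, sigmaul, xil, ur, sigmaur, xir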
The function lgkg carries out the calculations for the log-likelihood directly, which can be exponentiated to give actual likelihood using (log=FALSE).
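For example, the likelihood itself can be obtained either way:

# Likelihood rather than log-likelihood: set log = FALSE or exponentiate
library(evmix)
set.seed(1)
x <- rnorm(50)
lgkg(x, lambda = 0.3, log = FALSE)
exp(lgkg(x, lambda = 0.3, log = TRUE))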
See also gkg, kdengpd, kden, gpd and density.
Other gkg: dgkg, fgkg, gkg, pgkg, qgkg, rgkg