Description

h.mlcv computes the maximum likelihood cross-validation (Kullback-Leibler information) bandwidth selector of a one-dimensional kernel density estimate.

Usage

h.mlcv(x, ...)
## S3 method for class 'default':
h.mlcv(x, lower = 0.1, upper = 5, tol = 0.1 * lower,
kernel = c("gaussian", "epanechnikov", "uniform", "triangular",
"triweight", "tricube", "biweight", "cosine"), ...)optimize."gaussian".x argument.h.mlcv maximum-likelihood cross-validation implements for choosing
the optimal bandwidth $h$ of kernel density estimator.
This method was proposed by Habbema, Hermans, and van den Broek (1974) and by Duin (1976). The maximum-likelihood cross-validation (MLCV) function is defined by:
$$MLCV(h) = n^{-1} \sum_{i=1}^{n} \log\left[\hat{f}_{h,i}(X_{i})\right]$$
where the leave-one-out estimator $\hat{f}_{h,i}(X_{i})$, computed on the subset $\{X_{j}\}_{j \neq i}$, can be written:
$$\hat{f}_{h,i}(X_{i}) = \frac{1}{(n-1) h} \sum_{j \neq i} K \left(\frac{X_{j}-X_{i}}{h}\right)$$
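As a point of reference, the leave-one-out estimate can be evaluated directly from this formula. The sketch below assumes a Gaussian kernel and is illustrative only, not kedd code; the name f_loo is hypothetical.

## Leave-one-out density estimate at X_i (Gaussian kernel assumed; illustrative).
f_loo <- function(i, x, h) {
  sum(dnorm((x[-i] - x[i]) / h)) / ((length(x) - 1) * h)
}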
The selected bandwidth $h_{mlcv}$ is the value that attains the maximum of $MLCV(h)$:
$$h_{mlcv} = \arg \max_{h} MLCV(h) = \arg \max_{h} \left(n^{-1} \sum_{i=1}^{n} \log\left[\sum_{j \neq i} K \left(\frac{X_{j}-X_{i}}{h}\right)\right]-\log[(n-1)h]\right)$$
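To make the criterion concrete, here is a minimal sketch (not the package's implementation) that transcribes the formula above for a Gaussian kernel and maximizes it with optimize over the (lower, upper) range, as in the usage above. The names mlcv_criterion and h_mlcv_sketch, and the synthetic bimodal sample, are illustrative assumptions.

## Minimal sketch of MLCV bandwidth selection (Gaussian kernel assumed);
## a direct transcription of the formula above, not the kedd internals.
mlcv_criterion <- function(h, x) {
  n <- length(x)
  ## pairwise scaled differences (X_j - X_i) / h
  u <- outer(x, x, "-") / h
  K <- dnorm(u)                 # Gaussian kernel
  diag(K) <- 0                  # leave-one-out: drop the j = i terms
  ## MLCV(h) = n^{-1} sum_i log[ sum_{j != i} K(.) ] - log[(n - 1) h]
  mean(log(rowSums(K))) - log((n - 1) * h)
}

h_mlcv_sketch <- function(x, lower = 0.1, upper = 5) {
  ## maximize MLCV(h) over the bandwidth range
  optimize(mlcv_criterion, c(lower, upper), x = x, maximum = TRUE)$maximum
}

set.seed(1)
xs <- c(rnorm(50, -1.5, 0.5), rnorm(50, 1.5, 0.5))  # synthetic bimodal sample
h_mlcv_sketch(xs)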
See Also

plot.h.mlcv, see lcv in package

Examples

h.mlcv(bimodal)
h.mlcv(bimodal, kernel = "epanechnikov")