kedd (version 1.0.4)

h.tcv: Trimmed Cross-Validation for Bandwidth Selection

Description

The (S3) generic function h.tcv computes the trimmed cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.

Usage

h.tcv(x, ...)
# S3 method for default
h.tcv(x, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos, 
         tol = 0.1 * lower, kernel = c("gaussian", "epanechnikov", "uniform", 
         "triangular", "triweight", "tricube", "biweight", "cosine"), ...)

Value

x

data points - same as input.

data.name

the deparsed name of the x argument.

n

the sample size after elimination of missing values.

kernel

the name of the kernel used.

deriv.order

the derivative order to use.

h

the value of the bandwidth parameter.

min.tcv

the minimal TCV value.

Arguments

x

vector of data values.

deriv.order

derivative order (scalar).

lower, upper

range over which to minimize. The default is almost always satisfactory. hos, the over-smoothing bandwidth, is computed internally from the kernel; see Details.

tol

the convergence tolerance for optimize.

kernel

a character string giving the smoothing kernel to be used, with default "gaussian".

...

further arguments for (non-default) methods.

Author

Arsalane Chouaib Guidoum acguidoum@usthb.dz

Details

h.tcv implements trimmed cross-validation for choosing the bandwidth \(h\) of the r'th derivative kernel density estimator.

Feluch and Koronacki (1992) proposed a so-called trimmed cross-validation (TCV) method for kernel density estimation, a simple modification of the unbiased (least-squares) cross-validation criterion. We consider the following "trimmed" version of the unbiased criterion, to be minimized with respect to \(h\): $$\int \left(\hat{f}_{h}^{(r)}(x)\right)^{2} dx - 2 \frac{(-1)^{r}}{n(n-1) h^{2r+1}} \sum_{i=1}^{n}\sum_{j=1; j \neq i}^{n} K^{(2r)} \left(\frac{X_{j}-X_{i}}{h}\right)\chi\left(|X_{i}-X_{j}| > c_{n}\right)$$ where \(\chi(\cdot)\) denotes the indicator function and \(c_{n}\) is a sequence of positive constants with \(c_{n}/h^{2r+1} \rightarrow 0\) as \(n \rightarrow \infty\). Since $$\int \left(\hat{f}_{h}^{(r)}(x)\right)^{2} dx = \frac{R\left(K^{(r)}\right)}{nh^{2r+1}} + \frac{(-1)^{r}}{n (n-1) h^{2r+1}} \sum_{i=1}^{n}\sum_{j=1;j \neq i}^{n} K^{(r)} \ast K^{(r)} \left(\frac{X_{j}-X_{i}}{h}\right),$$ the trimmed cross-validation function is defined by: $$TCV(h;r) = \frac{R\left(K^{(r)}\right)}{nh^{2r+1}} + \frac{(-1)^{r}}{n(n-1)h^{2r+1}}\sum_{i=1}^{n} \sum_{j=1;j \neq i}^{n} \varphi^{(r)} \left(\frac{X_{j}-X_{i}}{h}\right)$$ with $$\varphi^{(r)}(c) = \left(K^{(r)} \ast K^{(r)} - 2 K^{(2r)} \chi\left(|c| > c_{n}/h^{2r+1}\right) \right)(c).$$ Here we take \(c_{n} = 1/n\) to ensure convergence. \(K^{(r)} \ast K^{(r)}(x)\) denotes the convolution of the r'th derivative kernel function \(K^{(r)}(x)\) with itself (see kernel.conv and kernel.fun).
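To make the criterion concrete, the following minimal sketch evaluates \(TCV(h; r=0)\) for the Gaussian kernel and minimizes it with optimize. It illustrates the formula above for the special case \(r = 0\) only; the helper tcv and the simulated data are ours and do not reflect the package's internal implementation:

## Sketch of TCV(h; r = 0) for the Gaussian kernel (illustrative only)
tcv <- function(h, x) {
  n  <- length(x)
  cn <- 1/n                               ## trimming constant c_n = 1/n
  d  <- outer(x, x, "-") / h              ## scaled differences (X_j - X_i)/h
  diag(d) <- NA                           ## drop the i = j terms
  conv <- dnorm(d, sd = sqrt(2))          ## (K * K)(c): the N(0, 2) density
  trim <- dnorm(d) * (abs(d) > cn / h)    ## K(c) * indicator(|c| > c_n / h)
  RK <- 1 / (2 * sqrt(pi))                ## R(K) for the Gaussian kernel
  RK / (n * h) + sum(conv - 2 * trim, na.rm = TRUE) / (n * (n - 1) * h)
}
set.seed(1)
x <- rnorm(50)
optimize(tcv, interval = c(0.1, 2), x = x)$minimum

For \(r = 0\) the convolution of the standard Gaussian density with itself is the N(0, 2) density, which is why dnorm(., sd = sqrt(2)) appears above.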

The range over which to minimize is determined by hos, the over-smoothing bandwidth; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, p. 165), Wand and Jones (1995, p. 61).
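If the default search range fails to bracket the minimum, lower and upper can be supplied directly, per the Usage signature above; a minimal sketch (the data and bounds below are illustrative, not recommended values):

set.seed(123)
x <- rnorm(100)
h.tcv(x, deriv.order = 0, lower = 0.05, upper = 1.5)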

References

Feluch, W. and Koronacki, J. (1992). A note on modified cross-validation in density estimation. Computational Statistics and Data Analysis, 13, 143--151.

See Also

plot.h.tcv.

Examples

## Derivative order = 0

h.tcv(kurtotic, deriv.order = 0)

## Derivative order = 1

h.tcv(kurtotic, deriv.order = 1)
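## The components listed under Value can be extracted from the returned
## object, for instance (continuing the calls above):

fit <- h.tcv(kurtotic, deriv.order = 0)
fit$h        ## selected bandwidth
fit$min.tcv  ## minimal TCV value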
