fda.usc (version 1.2.3)

CV.S: The cross-validation (CV) score

Description

Computes the leave-one-out cross-validation (CV) score for a given smoothing matrix.

Usage

CV.S(y, S, W = NULL, trim = 0, draw = FALSE, metric = metric.lp, ...)

Arguments

y
Matrix of observations with dimension (n x m), where n is the number of curves and m is the number of points observed on each curve.
S
Smoothing matrix; see S.NW, S.LLR or S.KNN.
W
Matrix of weights.
trim
The trimming proportion (alpha).
draw
If TRUE, draw the curves, the sample median and the trimmed mean.
metric
Metric function, by default metric.lp.
...
Further arguments passed to or from other methods.

Value

res
The CV score computed for the input parameters.

Details

Computes the leave-one-out cross-validation score.

A. If trim = 0: $$CV(h)=\frac{1}{n} \sum_{i=1}^{n}{\Bigg(\frac{y_{i}-r_{i}(x_{i})}{1-S_{ii}}\Bigg)^{2}w(x_{i})}$$ where $S_{ii}$ is the i-th diagonal element of the smoothing matrix $S$.

B. If trim > 0: $$CV(h)=\frac{1}{l} \sum_{i=1}^{l}{\Bigg(\frac{y_{i}-r_{i}(x_{i})}{1-S_{ii}}\Bigg)^{2}w(x_{i})}$$ where $S_{ii}$ is the i-th diagonal element of the smoothing matrix $S$ and $l$ is the number of retained curves, i.e. the (1 - trim) fraction of curves with the smallest errors.
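For a single response vector and an (n x n) linear smoother, the trim = 0 formula can be written out directly. The sketch below is not the package's internal code; cv.score is a hypothetical helper illustrating the leave-one-out shortcut via the diagonal of S:

cv.score <- function(y, S, w = rep(1, length(y))) {
  y.hat <- drop(S %*% y)              # fitted values of the linear smoother at x_i
  loo <- (y - y.hat) / (1 - diag(S))  # leave-one-out residuals via the 1 - S_ii shortcut
  mean(w * loo^2)                     # weighted mean of squared residuals = CV(h)
}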

References

Wasserman, L. All of Nonparametric Statistics. Springer Texts in Statistics, 2006.

See Also

See also min.np. Alternative method: GCV.S.

Examples


data(tecator)
x <- tecator$absorp.fdata       # absorbance curves (fdata object)
np <- ncol(x)
tt <- 1:np                      # observation points

# Nadaraya-Watson and local linear smoothing matrices, two bandwidths
S1 <- S.NW(tt, 3, Ker.epa)
S2 <- S.LLR(tt, 3, Ker.epa)
S3 <- S.NW(tt, 5, Ker.epa)
S4 <- S.LLR(tt, 5, Ker.epa)

# CV scores for each smoother; cv5 uses 10% trimming and draws the curves
cv1 <- CV.S(x, S1)
cv2 <- CV.S(x, S2)
cv3 <- CV.S(x, S3)
cv4 <- CV.S(x, S4)
cv5 <- CV.S(x, S4, trim = 0.1, draw = TRUE)
cv1; cv2; cv3; cv4; cv5

# k-nearest-neighbour smoothers with a uniform kernel
S6 <- S.KNN(tt, 3, Ker.unif)
S7 <- S.KNN(tt, 5, Ker.unif)
cv6 <- CV.S(x, S6)
cv7 <- CV.S(x, S7)
cv6; cv7
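
Since lower CV values indicate a better smoother, a natural follow-up (not part of the original example) is to pick the smoothing matrix with the smallest score:

cv.all <- c(cv1, cv2, cv3, cv4, cv6, cv7)
which.min(cv.all)   # index of the smoother with the lowest CV score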
 
