CV.S

The cross-validation (CV) score

Compute the leave-one-out cross-validation score.

Keywords
utilities
Usage
CV.S(y, S, W = NULL, trim = 0, draw = FALSE, metric = metric.lp, ...)
Arguments
y

Matrix of curves with dimension (n x m), where n is the number of curves and m is the number of points observed in each curve.

S

Smoothing matrix; see S.NW, S.LLR or S.KNN.

W

Matrix of weights.

trim

The trimming proportion (alpha): the fraction of curves with the largest errors that is discarded.

draw

If TRUE, draw the curves, the sample median and the trimmed mean.

metric

Metric function, by default metric.lp.

...

Further arguments passed to or from other methods.

Details

A. If trim = 0:
$$CV(h)=\frac{1}{n} \sum_{i=1}^{n}{\Bigg(\frac{y_i-r_{i}(x_i)}{(1-S_{ii})}\Bigg)^{2}w(x_{i})}$$
where \(S_{ii}\) is the ith diagonal element of the smoothing matrix \(S\).

B. If trim > 0:
$$CV(h)=\frac{1}{l} \sum_{i=1}^{l}{\Bigg(\frac{y_i-r_{i}(x_i)}{(1-S_{ii})}\Bigg)^{2}w(x_{i})}$$
where \(S_{ii}\) is the ith diagonal element of the smoothing matrix \(S\) and \(l\) is the number of curves retained after trimming, i.e. the (1 - trim) proportion of curves with the smallest errors.
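
The sketch below applies these formulas directly to a toy set of curves. It is only an illustration of the leave-one-out shortcut (with uniform weights), not the internal code of CV.S, which also handles fdata objects, the weight matrix W and the metric argument; the objects tt, y, the bandwidth h = 0.1 and the kernel Ker.norm are arbitrary choices for the example.

library(fda.usc)

# Toy data: n noisy curves observed on a common grid of m points (rows = curves)
m  <- 51
tt <- seq(0, 1, length.out = m)
n  <- 20
y  <- outer(rnorm(n, 1, 0.2), sin(2 * pi * tt)) + matrix(rnorm(n * m, sd = 0.1), n, m)

# m x m Nadaraya-Watson smoothing ("hat") matrix for an arbitrary bandwidth
S <- S.NW(tt, h = 0.1, Ker = Ker.norm)

# Leave-one-out residuals via the shortcut: (y - S y) / (1 - S_ii), column-wise
# (uniform weights w(x_i) = 1 are assumed)
res <- sweep(y - y %*% t(S), 2, 1 - diag(S), "/")
err <- rowMeans(res^2)                    # per-curve squared error

cv.untrimmed <- mean(err)                 # case A: trim = 0

trim <- 0.1                               # case B: keep the (1 - trim) best-fitting curves
keep <- order(err)[seq_len(floor((1 - trim) * n))]
cv.trimmed <- mean(err[keep])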

Value

Returns the CV score computed for the given curves and smoothing matrix.

References

Wasserman, L. All of Nonparametric Statistics. Springer Texts in Statistics, 2006.

See Also

See also optim.np. Alternative method: GCV.S.
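
As a hedged sketch of typical use, the score can be evaluated over a grid of bandwidths and the minimiser selected, in the spirit of what optim.np automates. It reuses the tecator data from the Examples section; the bandwidth grid is an arbitrary choice for illustration.

library(fda.usc)
data(tecator)
x  <- tecator$absorp.fdata
tt <- 1:ncol(x)

h.grid <- c(2, 3, 5, 9, 15)                                   # arbitrary candidate bandwidths
cv.h   <- sapply(h.grid, function(h) CV.S(x, S.NW(tt, h, Ker.epa)))
h.opt  <- h.grid[which.min(cv.h)]                             # bandwidth with the smallest CV score
cbind(h = h.grid, CV = cv.h)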

Aliases
  • CV.S
Examples
## Not run:
library(fda.usc)
data(tecator)
x <- tecator$absorp.fdata        # absorbance curves (fdata object)
np <- ncol(x)                    # number of observation points per curve
tt <- 1:np                       # argument grid

# Nadaraya-Watson and local linear smoothing matrices for bandwidths 3 and 5
S1 <- S.NW(tt, 3, Ker.epa)
S2 <- S.LLR(tt, 3, Ker.epa)
S3 <- S.NW(tt, 5, Ker.epa)
S4 <- S.LLR(tt, 5, Ker.epa)

cv1 <- CV.S(x, S1)
cv2 <- CV.S(x, S2)
cv3 <- CV.S(x, S3)
cv4 <- CV.S(x, S4)
cv5 <- CV.S(x, S4, trim = 0.1, draw = TRUE)   # trimmed CV, with plot
cv1; cv2; cv3; cv4; cv5

# k-nearest-neighbour smoothing matrices (cross-validation version, cv = TRUE)
S6 <- S.KNN(tt, 1, Ker.unif, cv = TRUE)
S7 <- S.KNN(tt, 5, Ker.unif, cv = TRUE)
cv6 <- CV.S(x, S6)
cv7 <- CV.S(x, S7)
cv6; cv7
## End(Not run)
Documentation reproduced from package fda.usc, version 2.0.1, License: GPL-2
