fregre.pc.cv(fdataobj, y, kmax = 8, lambda = 0, P = c(1, 0, 0),
    criteria = "SIC", weights = rep(1, len = n), ...)
Arguments:

fdataobj: fdata class object.
y: Scalar response with length n.
kmax: The number of components to include in the model.
lambda: Vector with the amounts of penalization. Default value is 0, i.e. no penalization is used. If lambda=TRUE, the algorithm computes a sequence of lambda values.
P: The vector of coefficients to define the penalty matrix object. If P=c(1,0,0), ridge regression is computed, and if P=c(0,0,1), penalized regression is computed penalizing the second derivative (curvature).
criteria: Type of cross-validation (CV) or Model Selection Criteria (MSC) applied. Possible values are "CV", "AIC", "AICc", "SIC", "SICc", "HQIC".
weights: Observation weights.
...: Further arguments passed to fregre.pc or fregre.pls.
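To make the role of P concrete, the short sketch below builds the penalty matrices implied by the two settings mentioned above. It relies on fda.usc's P.penalty() helper (assumed available in the installed version) and the tecator grid, and is illustrative only:

library(fda.usc)
data(tecator)
tt <- tecator$absorp.fdata$argvals  # evaluation grid of the curves
# P = c(1, 0, 0): penalty on the function itself (ridge-type penalty)
P.ridge <- P.penalty(tt, P = c(1, 0, 0))
# P = c(0, 0, 1): penalty on the second derivative (curvature)
P.curv <- P.penalty(tt, P = c(0, 0, 1))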
Value:

fregre.pc: Fitted regression object by the best (pc.opt) components.
pc.opt: Index of the PC components selected.
MSC.min: Minimum Model Selection Criteria (MSC) value for the (pc.opt) components.
MSC: Model Selection Criteria (MSC) values computed for the kmax components.

Details:

The algorithm selects the best principal components pc.opt from the first kmax
PC and (optionally) the best penalized parameter lambda.opt from a sequence of
non-negative numbers lambda.
If kmax is an integer (the default, and recommended), the procedure is as follows (see example 1; a toy sketch of the idea follows this list):

1. Calculate the best principal component (pc.order[1]) among the kmax candidates by fregre.pc.
2. Calculate the second-best principal component (pc.order[2]) among the remaining (kmax-1) by fregre.pc, and calculate the criteria value of the two principal components.
3. The process (points 1 and 2) is repeated until the kmax principal component (pc.order[kmax]) is reached.
4. The process (points 1, 2 and 3) is repeated for each lambda value.
5. The method selects the principal components (pc.opt=pc.order[1:k.min]) and (optionally) the lambda parameter with minimum MSC criteria.
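As a toy illustration of steps 1-5, the sketch below runs the same kind of forward selection using ordinary prcomp scores and lm on simulated multivariate data, with the SIC penalty defined further down this page. It is a hand-rolled analogue for intuition, not the fregre.pc implementation:

# Forward selection of principal components by an MSC criterion.
# Assumption: plain prcomp/lm stand in for the functional PC machinery.
set.seed(1)
n <- 100
X <- matrix(rnorm(n * 10), n, 10)
y <- drop(X %*% c(2, 0, -1, rep(0, 7))) + rnorm(n)
kmax <- 8
scores <- prcomp(X)$x[, 1:kmax]

# MSC(k) = log[(1/n) sum (y - yhat)^2] + p_n * k / n, SIC: p_n = log(n)/n
msc <- function(fit, k) log(mean(residuals(fit)^2)) + (log(n) / n) * k / n

selected <- integer(0)
msc.path <- numeric(kmax)
for (k in 1:kmax) {
  candidates <- setdiff(1:kmax, selected)
  # try adding each remaining PC; keep the one with the best criterion
  vals <- sapply(candidates, function(j)
    msc(lm(y ~ scores[, c(selected, j)]), k))
  selected <- c(selected, candidates[which.min(vals)])
  msc.path[k] <- min(vals)
}
k.min <- which.min(msc.path)
pc.opt <- selected[1:k.min]  # analogue of pc.order[1:k.min]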
If kmax is a sequence of integers, the procedure is as follows (see example 2):

1. The method selects the best principal components with minimum MSC criteria, fitted by fregre.pc in each step.
2. The process (point 1) is repeated for each lambda value.
3. The method selects the principal components (pc.opt=pc.order[1:k.min]) and (optionally) the lambda parameter with minimum MSC criteria.
The function also returns the selected components pc.opt and, when ridge regression is used, the optimal ridge parameter rn.opt.
The criteria selection is done by cross-validation (CV) or Model Selection Criteria (MSC).
- Predictive Cross-Validation: $PCV(k_n)=\frac{1}{n}\sum_{i=1}^{n}\Big(y_i-\hat{y}_{-i,k_n}\Big)^2$, criteria="CV"
- Model Selection Criteria: $MSC(k_n)=\log\left[\frac{1}{n}\sum_{i=1}^{n}\Big(y_i-\hat{y}_i\Big)^2\right]+p_n\frac{k_n}{n}$, where

  $p_n=\frac{\log(n)}{n}$, criteria="SIC" (by default)
  $p_n=\frac{\log(n)}{n-k_n-2}$, criteria="SICc"
  $p_n=2$, criteria="AIC"
  $p_n=\frac{2n}{n-k_n-2}$, criteria="AICc"
  $p_n=\frac{2\log(\log(n))}{n}$, criteria="HQIC"
Here criteria is an argument that controls the type of validation used in the selection of the smoothing parameter kmax $=k_n$ and the penalized parameter lambda $=\lambda$.
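As a quick check of these formulas, the sketch below evaluates the MSC value for a vector of residuals; msc.value is a hypothetical helper written for this page, not part of fda.usc:

# MSC(k_n) = log[(1/n) sum (y_i - yhat_i)^2] + p_n * k_n / n,
# with the penalty p_n chosen by the criteria argument (see above)
msc.value <- function(resid, k, criteria = "SIC") {
  n <- length(resid)
  p_n <- switch(criteria,
    SIC  = log(n) / n,
    SICc = log(n) / (n - k - 2),
    AIC  = 2,
    AICc = 2 * n / (n - k - 2),
    HQIC = 2 * log(log(n)) / n)
  log(mean(resid^2)) + p_n * k / n
}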
See also: fregre.pc.

Examples:

library(fda.usc)
data(tecator)
x <- tecator$absorp.fdata[1:129]
y <- tecator$y$Fat[1:129]
# no penalization
res.pc1 <- fregre.pc.cv(x, y, 8)
# 2nd derivative penalization
res.pc2 <- fregre.pc.cv(x, y, 8, lambda = TRUE, P = c(0, 0, 1))
# ridge regression
res.pc3 <- fregre.pc.cv(x, y, 1:8, lambda = TRUE, P = 1)
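The selected components and criterion values can then be read from the returned object; the field names below follow the Value section above:

res.pc1$pc.opt   # indices of the selected principal components
res.pc1$MSC.min  # minimum MSC value attained by that selection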