FRegSigCom (version 0.3.0)

cv.msof: Cross-validation for linear multivariate scalar-on-function regression

Description

This function is used to perform cross-validation and build the final model, using the signal compression approach, for the following linear multivariate scalar-on-function regression model: \bold{Y} = \bold{\mu} + \sum_{i=1}^{p} \int_{a_i}^{b_i} X_i(s) \bold{\beta}_i(s) \, ds + \bold{\epsilon}, where \bold{Y} is an m-dimensional multivariate response variable and \bold{\mu} is the m-dimensional intercept vector. The {X_i(s), 1 ≤ i ≤ p} are the p functional predictors and {\bold{\beta}_i(s), 1 ≤ i ≤ p} are their corresponding m-dimensional vectors of coefficient functions, where p is a positive integer. The \bold{\epsilon} is the random noise vector.

We require that all the sample curves of each functional predictor be observed on a common dense grid of time points, but the grid may differ across predictors.
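As an illustration of the expected input shapes, the following sketch builds simulated inputs with two functional predictors observed on different grids and a 3-dimensional scalar response. All names and sizes here (n, t1, t2, q = 3) are illustrative assumptions, not values from the package documentation.

```r
set.seed(1)
n  <- 40                                  # sample size
t1 <- seq(0, 1, length.out = 30)          # observation grid for X1(s)
t2 <- seq(0, 2, length.out = 50)          # a different grid for X2(s)

# X is a list of length p = 2; each element is an n x m_i matrix of curves
X <- list(
  matrix(rnorm(n * length(t1)), n, length(t1)),
  matrix(rnorm(n * length(t2)), n, length(t2))
)
Y <- matrix(rnorm(n * 3), n, 3)           # n x q response matrix (q = 3)
t.x.list <- list(t1, t2)                  # grids, one per predictor

# fit.cv <- cv.msof(X, Y, t.x.list)       # requires the FRegSigCom package
```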

Usage

cv.msof(X, Y, t.x.list, nbasis = 50, K.cv = 5, upper.comp = 10,
        thresh = 0.001)

Arguments

X

a list of length p, the number of functional predictors. Its i-th element is the n × m_i data matrix for the i-th functional predictor X_i(s), where n is the sample size and m_i is the number of observation time points for X_i(s).

Y

an n × q data matrix for the response Y, or an n-dimensional vector if there is only one scalar response, where n is the sample size and q is the number of scalar response variables.

t.x.list

a list of length p. Its i-th element is the vector of observation time points of the i-th functional predictor X_i(s), 1 ≤ i ≤ p.

nbasis

the number of basis functions used for estimating the vector of functions ψ_{ik}(s) (see the reference for details). Default is 50.

K.cv

the number of CV folds. Default is 5.

upper.comp

the upper bound for the maximum number of components to be calculated. Default is 10.

thresh

a number between 0 and 1 used to determine the maximum number of components to calculate. This maximum is between 1 and the "upper.comp" value above. The optimal number of components is then chosen between 1 and this maximum, together with the other tuning parameters, by cross-validation. A smaller thresh value leads to a larger maximum number of components and a longer running time; a larger thresh value needs less running time but may miss important components and lead to a larger prediction error. Default is 0.001.

Value

An object of the "cv.msof" class, which is used in the function pred.msof for prediction.

fitted_model

a list containing information about the fitted model.

is_Y_vector

a logical value indicating whether Y is a vector.

Y

the input data Y.

x.smooth.params

a list for internal use.

Details

We use the decomposition \bold{\beta}_i(s) = \sum_{k=1}^{K} \alpha_{ki}(s) \bold{w}_k, 1 ≤ i ≤ p, based on the KL expansion of \sum_{i=1}^{p} \int X_i(s) \bold{\beta}_i(s) \, ds. Let \bold{Y}_\ell = (Y_{\ell 1}, ..., Y_{\ell m})^T and \bold{X}_\ell(s) = (X_{\ell 1}(s), ..., X_{\ell p}(s))^T, 1 ≤ \ell ≤ n, denote the n independent samples. We estimate \bold{\alpha}_k(s) = (\alpha_{k1}(s), ..., \alpha_{kp}(s))^T for each k by solving the penalized generalized functional eigenvalue problem

\max_{\alpha} \frac{\int\int \bold{\alpha}(s)^T \hat{\bold{B}}(s, s') \bold{\alpha}(s') \, ds \, ds'}{\int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s, s') \bold{\alpha}(s') \, ds \, ds' + P(\bold{\alpha})}

subject to

\int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s, s') \bold{\alpha}(s') \, ds \, ds' = 1, and
\int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s, s') \bold{\alpha}_{k'}(s') \, ds \, ds' = 0 for k' < k,

where \hat{\bold{B}}(s, s') = \sum_{\ell=1}^{n} \sum_{\ell'=1}^{n} \{\bold{X}_\ell(s) - \bar{\bold{X}}(s)\} \{\bold{Y}_\ell - \bar{\bold{Y}}\}^T \{\bold{Y}_{\ell'} - \bar{\bold{Y}}\} \{\bold{X}_{\ell'}(s') - \bar{\bold{X}}(s')\}^T / n^2, \hat{\bold{\Sigma}}(s, s') = \sum_{\ell=1}^{n} \{\bold{X}_\ell(s) - \bar{\bold{X}}(s)\} \{\bold{X}_\ell(s') - \bar{\bold{X}}(s')\}^T / n, and the penalty P(\bold{\alpha}) = \lambda \sum_{i=1}^{p} \{ ||\alpha_i||^2 + \tau ||\alpha_i''||^2 \}. Then we estimate \{\bold{w}_k, k > 0\} by regressing \{\bold{Y}_\ell\} on \{\hat{z}_{\ell 1}, ..., \hat{z}_{\ell K}\} using the least squares method. Here \hat{z}_{\ell k} = \int \{\bold{X}_\ell(s) - \bar{\bold{X}}(s)\}^T \hat{\bold{\alpha}}_k(s) \, ds.
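The scores \hat{z}_{\ell k} above are integrals of the centered predictor curves against an estimated weight function, which in practice are approximated numerically on the observation grid. The following is a minimal sketch of that step, not the package's internal code: the trapz helper, the simulated curves, and the stand-in weight function alpha.hat are all illustrative assumptions.

```r
# Trapezoidal-rule approximation of the integral of f over grid t
trapz <- function(t, f) sum(diff(t) * (head(f, -1) + tail(f, -1)) / 2)

set.seed(2)
n  <- 20
tt <- seq(0, 1, length.out = 100)
X1 <- matrix(rnorm(n * 100), n, 100)   # one functional predictor, n curves
alpha.hat <- sin(pi * tt)              # stand-in for an estimated alpha_k(s)

Xc <- sweep(X1, 2, colMeans(X1))       # center the curves: X_l(s) - Xbar(s)
z.hat <- apply(Xc, 1, function(x) trapz(tt, x * alpha.hat))
# z.hat is the n-vector of scores used in the final least-squares regression
```

Because the curves are centered before integration, the scores themselves average to zero; the final step then regresses the responses on these score columns.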

References

Ruiyan Luo and Xin Qi (Submitted)

Examples

# NOT RUN {
#########################################################################
# Example: multiple scalar-on-function regression
#########################################################################

ptm <- proc.time()
library(FRegSigCom)
data(corn)
X <- corn$X
Y <- corn$Y
ntrain <- 60   # in the paper, 80 observations are used as training data
xtrange <- c(0, 1)  # the range of t in x(t)
t.x.list <- list(seq(0, 1, length.out = ncol(X)))
train.index <- sample(1:nrow(X), ntrain)
X.train <- X.test <- list()
X.train[[1]] <- X[train.index, ]
X.test[[1]]  <- X[-train.index, ]
Y.train <- Y[train.index, ]
Y.test  <- Y[-train.index, ]

fit.cv.1 <- cv.msof(X.train, Y.train, t.x.list)  # the CV procedure for our method
Y.pred <- pred.msof(fit.cv.1, X.test)            # make predictions on the test data

pred.error <- mean((Y.pred - Y.test)^2)
print(paste("pred.error =", pred.error))

print(proc.time() - ptm)

# }
