plsRglm (version 0.7.4)

PLS_glm_kfoldcv_formula: Partial least squares regression glm models with kfold cross validation

Description

This function implements kfold cross validation on complete or incomplete datasets for partial least squares regression generalized linear models (formula specification of the model).

Usage

PLS_glm_kfoldcv_formula(formula, data = NULL, nt = 2, limQ2set = 0.0975,
    modele = "pls", family = NULL, K = nrow(dataX), NK = 1, grouplist = NULL,
    random = FALSE, scaleX = TRUE, scaleY = NULL, keepcoeffs = FALSE,
    keepfolds = FALSE, keepdataY = TRUE, keepMclassed = FALSE,
    tol_Xi = 10^(-12), weights, subset, start = NULL, etastart, mustart,
    offset, method, control = list(), contrasts = NULL)

Arguments

formula
an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under 'Details'.
data
an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which PLS_glm_kfoldcv_formula is called.
nt
number of components to be extracted
limQ2set
limit value for the Q2
modele
name of the PLS glm model to be fitted ("pls", "pls-glm-Gamma", "pls-glm-gaussian", "pls-glm-inverse.gaussian", "pls-glm-logistic", "pls-glm-poisson", "pls-glm-polr" or "pls-glm-family")
family
a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See family for details of family functions.) Used only with modele="pls-glm-family".
K
number of groups
NK
number of times the group division is made
grouplist
to specify the members of the K groups
random
should the K groups be made randomly
scaleX
should the predictor(s) be scaled? Must be set to TRUE for modele="pls" and is advisable for the GLM-based PLS models.
scaleY
should the response be scaled? Ignored, since scaling is not always possible for GLM responses.
keepcoeffs
shall the coefficients for each model be returned
keepfolds
shall the groups' composition be returned
keepdataY
shall the observed values of the response corresponding to each predicted value be returned
keepMclassed
shall the number of misclassified observations be returned (unavailable)
tol_Xi
minimal value for Norm2(Xi) and $\mathrm{det}(pp' \times pp)$ if there are any missing values in dataX. It defaults to $10^{-12}$
weights
an optional vector of 'prior weights' to be used in the fitting process. Should be NULL or a numeric vector.
subset
an optional vector specifying a subset of observations to be used in the fitting process.
start
starting values for the parameters in the linear predictor.
etastart
starting values for the linear predictor.
mustart
starting values for the vector of means.
offset
this can be used to specify an a priori known component to be included in the linear predictor during fitting. This should be NULL or a numeric vector of length equal to the number of cases. One or more offset terms can be included in the formula instead or as well, and if more than one is specified their sum is used.
method
the method to be used in fitting the model. For modele="pls-glm-polr" it is passed to polr and selects the link: "logistic" (the default), "probit", "cloglog" or "cauchit".
control
a list of parameters for controlling the fitting process. For glm.fit this is passed to glm.control.
contrasts
an optional list. See the contrasts.arg of model.matrix.default.
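The three equivalent ways of passing a family (a character string, a family function, or the result of a call to a family function) can be checked with base R's glm; this is a base-R illustration, independent of plsRglm, using a small made-up data frame:

```r
# Three equivalent family specifications, as accepted by glm()
# (and hence by modele = "pls-glm-family"); d is illustrative data
d <- data.frame(y = c(0, 1, 0, 1, 1), x = 1:5)
m1 <- glm(y ~ x, data = d, family = "binomial")   # character string
m2 <- glm(y ~ x, data = d, family = binomial)     # family function
m3 <- glm(y ~ x, data = d, family = binomial())   # call to a family function
# All three fits coincide
all.equal(coef(m1), coef(m2))
all.equal(coef(m1), coef(m3))
```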

Value

  • results_kfolds: list of NK elements. Each element sums up the prediction results for one group division.
  • folds: list of NK elements. Each element sums up the fold composition for one group division.
  • dataY_kfolds: list of NK elements. Each element sums up the observed response values for one group division.
  • call: the call of the function
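The fold structure that results_kfolds and folds summarize can be pictured with a minimal base-R sketch. This is only an illustration of one random division into K groups (as random=TRUE might produce), not the package's internal code:

```r
# One random division of n observations into K folds
# (illustrative sketch, not plsRglm's implementation)
set.seed(42)
n <- 12; K <- 6
folds <- split(sample(seq_len(n)), rep(seq_len(K), length.out = n))
lengths(folds)   # each fold holds n/K = 2 observations
# Each fold is predicted in turn from the K - 1 remaining folds.
```

With NK > 1, the whole division is redrawn NK times and each element of results_kfolds corresponds to one such division.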

Details

Predicts one group with the K-1 other groups. Leave-one-out cross validation is thus obtained for K==nrow(dataX). There are seven different predefined models with predefined link functions available: "pls", "pls-glm-Gamma", "pls-glm-gaussian", "pls-glm-inverse.gaussian", "pls-glm-logistic", "pls-glm-poisson" and "pls-glm-polr". Using the "family=" option and setting "modele=pls-glm-family" allows changing the family and link function in the same way as for the glm function; as a consequence, user-specified families can also be used.

A typical predictor has the form response ~ terms where response is the (numeric) response vector and terms is a series of terms which specifies a linear predictor for response. A terms specification of the form first + second indicates all the terms in first together with all the terms in second, with any duplicates removed. A specification of the form first:second indicates the set of terms obtained by taking the interactions of all terms in first with all terms in second. The specification first*second indicates the cross of first and second: this is the same as first + second + first:second. The terms in the formula will be re-ordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on; to avoid this, pass a terms object as the formula.

Non-NULL weights can be used to indicate that different observations have different dispersions (with the values in weights being inversely proportional to the dispersions); or equivalently, when the elements of weights are positive integers w_i, that each response y_i is the mean of w_i unit-weight observations.
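The term conventions above can be verified directly with base R's terms(), independently of plsRglm:

```r
# first*second expands to first + second + first:second
f1 <- y ~ a + b + a:b
f2 <- y ~ a * b
attr(terms(f1), "term.labels")   # "a" "b" "a:b"
attr(terms(f2), "term.labels")   # "a" "b" "a:b"  (same expansion)
# Main effects are re-ordered before interactions:
attr(terms(y ~ a:b + a + b), "term.labels")   # "a" "b" "a:b"
```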

References

Nicolas Meyer, Myriam Maumy-Bertrand et Frédéric Bertrand (2010). Comparaison de la régression PLS et de la régression logistique PLS : application aux données d'allélotypage. Journal de la Société Française de Statistique, 151(2), pages 1-18. http://smf4.emath.fr/Publications/JSFdS/151_2/pdf/sfds_jsfds_151_2_1-18.pdf

See Also

kfolds2coeff, kfolds2Pressind, kfolds2Press, kfolds2Mclassedind, kfolds2Mclassed and kfolds2CVinfos_glm to extract and transform results from kfold cross validation.

Examples

data(Cornell)
bbb <- PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=10,NK=1,modele="pls")
kfolds2CVinfos_glm(bbb)

PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-gaussian",K=12)
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-gaussian",K=6,NK=2,random=TRUE,keepfolds=TRUE)$results_kfolds

#Different ways of model specification
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-gaussian",K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=gaussian,K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=gaussian(),K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=gaussian(link=log),K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds

bbb2 <- PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=10,modele="pls-glm-gaussian",keepcoeffs=TRUE)
bbb2 <- PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=gaussian(link=log),K=6,keepcoeffs=TRUE)

#For Jackknife computations
kfolds2coeff(bbb2)
boxplot(kfolds2coeff(bbb2)[,1])

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
PLS_lm_formula(log(Y)~.,data=Cornell,10,typeVC="standard")$CVinfos
rm(list=c("bbb","bbb2"))


data(pine)
bbb <- PLS_glm_kfoldcv_formula(x11~.,data=pine,nt=10,modele="pls-glm-family",family=gaussian(log),K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb <- PLS_glm_kfoldcv_formula(x11~.,data=pine,nt=10,modele="pls-glm-gaussian",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb)
boxplot(kfolds2coeff(bbb)[,1])

kfolds2Chisqind(bbb)
kfolds2Chisq(bbb)
kfolds2CVinfos_glm(bbb)
PLS_lm_formula(log(x11)~.,data=pine,nt=10,typeVC="standard")$CVinfos

pineNAX21 <- pine
pineNAX21[1,2] <- NA
bbb2 <- PLS_glm_kfoldcv_formula(x11~.,data=pineNAX21,nt=10,modele="pls-glm-family",family=gaussian(log),K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb2 <- PLS_glm_kfoldcv_formula(x11~.,data=pineNAX21,nt=10,modele="pls-glm-gaussian",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb2)
boxplot(kfolds2coeff(bbb2)[,1])

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
PLS_lm_formula(log(x11)~.,data=pineNAX21,nt=10,typeVC="standard")$CVinfos
rm(list=c("pineNAX21","bbb","bbb2"))


data(aze_compl)
bbb <- PLS_glm_kfoldcv_formula(y~.,data=aze_compl,nt=10,K=10,modele="pls",keepcoeffs=TRUE)

#For Jackknife computations
kfolds2coeff(bbb)
bbb2 <- PLS_glm_kfoldcv_formula(y~.,data=aze_compl,nt=3,K=10,modele="pls-glm-family",family=binomial(probit),keepcoeffs=TRUE)
bbb2 <- PLS_glm_kfoldcv_formula(y~.,data=aze_compl,nt=3,K=10,modele="pls-glm-logistic",keepcoeffs=TRUE)
kfolds2CVinfos_glm(bbb,MClassed=TRUE)
kfolds2CVinfos_glm(bbb2,MClassed=TRUE)
kfolds2coeff(bbb2)

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
rm(list=c("bbb","bbb2"))



data(pine)
bbb <- PLS_glm_kfoldcv_formula(round(x11)~.,data=pine,nt=10,modele="pls-glm-family",family=poisson(log),K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb <- PLS_glm_kfoldcv_formula(round(x11)~.,data=pine,nt=10,modele="pls-glm-poisson",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb)
boxplot(kfolds2coeff(bbb)[,1])

kfolds2Chisqind(bbb)
kfolds2Chisq(bbb)
kfolds2CVinfos_glm(bbb)
PLS_lm_formula(log(x11)~.,data=pine,10,typeVC="standard")$CVinfos

pineNAX21 <- pine
pineNAX21[1,2] <- NA
bbb2 <- PLS_glm_kfoldcv_formula(round(x11)~.,data=pineNAX21,nt=10,modele="pls-glm-family",family=poisson(log),K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb2 <- PLS_glm_kfoldcv_formula(round(x11)~.,data=pineNAX21,nt=10,modele="pls-glm-poisson",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb2)
boxplot(kfolds2coeff(bbb2)[,1])

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
PLS_lm_formula(log(x11)~.,data=pineNAX21,10,typeVC="standard")$CVinfos
rm(list=c("pineNAX21","bbb","bbb2"))



data(pine)
bbb <- PLS_glm_kfoldcv_formula(x11~.,data=pine,nt=10,modele="pls-glm-family",family=Gamma,K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb <- PLS_glm_kfoldcv_formula(x11~.,data=pine,nt=10,modele="pls-glm-Gamma",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb)
boxplot(kfolds2coeff(bbb)[,1])

kfolds2Chisqind(bbb)
kfolds2Chisq(bbb)
kfolds2CVinfos_glm(bbb)
PLS_lm_formula(log(x11)~.,data=pine,10,typeVC="standard")$CVinfos

pineNAX21 <- pine
pineNAX21[1,2] <- NA
bbb2 <- PLS_glm_kfoldcv_formula(x11~.,data=pineNAX21,nt=10,modele="pls-glm-family",family=Gamma(),K=10,keepcoeffs=TRUE,keepfolds=FALSE)
bbb2 <- PLS_glm_kfoldcv_formula(x11~.,data=pineNAX21,nt=10,modele="pls-glm-Gamma",K=10,keepcoeffs=TRUE,keepfolds=FALSE)

#For Jackknife computations
kfolds2coeff(bbb2)
boxplot(kfolds2coeff(bbb2)[,1])

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
PLS_lm_formula(log(x11)~.,data=pineNAX21,10,typeVC="standard")$CVinfos
rm(list=c("pineNAX21","bbb","bbb2"))



data(Cornell)
bbb <- PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=10,NK=1,modele="pls")
kfolds2CVinfos_glm(bbb)

PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-inverse.gaussian",K=12)
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=inverse.gaussian,K=12)
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-inverse.gaussian",K=6,NK=2,random=TRUE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=inverse.gaussian(),K=6,NK=2,random=TRUE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-inverse.gaussian",K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds
PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=3,modele="pls-glm-family",family=inverse.gaussian(link = "1/mu^2"),K=6,NK=2,random=FALSE,keepfolds=TRUE)$results_kfolds

bbb2 <- PLS_glm_kfoldcv_formula(Y~.,data=Cornell,nt=10,modele="pls-glm-inverse.gaussian",keepcoeffs=TRUE)

#For Jackknife computations
kfolds2coeff(bbb2)
boxplot(kfolds2coeff(bbb2)[,1])

kfolds2Chisqind(bbb2)
kfolds2Chisq(bbb2)
kfolds2CVinfos_glm(bbb2)
PLS_lm_formula(log(Y)~.,data=Cornell,10,typeVC="standard")$CVinfos
rm(list=c("bbb","bbb2"))


data(bordeaux)
bbb <- PLS_glm_kfoldcv_formula(Quality~.,data=bordeaux,10,modele="pls-glm-polr",K=7)
kfolds2CVinfos_glm(bbb)

bordeauxNA<-bordeaux
bordeauxNA[1,1] <- NA
bbbNA <- PLS_glm_kfoldcv_formula(Quality~Temperature+Sunshine+Heat+Rain,data=bordeauxNA,10,modele="pls-glm-polr",K=10)
kfolds2CVinfos_glm(bbbNA)
rm(list=c("bbb","bbbNA"))

bbb2 <- PLS_glm_kfoldcv_formula(Quality~.,data=bordeaux,nt=2,K=7,modele="pls-glm-polr",method="logistic")
bbb3 <- PLS_glm_kfoldcv_formula(Quality~.,data=bordeaux,nt=2,K=7,modele="pls-glm-polr",method="probit")
bbb4 <- PLS_glm_kfoldcv_formula(Quality~.,data=bordeaux,nt=2,K=7,modele="pls-glm-polr",method="cloglog")
bbb5 <- PLS_glm_kfoldcv_formula(Quality~.,data=bordeaux,nt=2,K=7,modele="pls-glm-polr",method="cauchit")

kfolds2CVinfos_glm(bbb2)
kfolds2CVinfos_glm(bbb3)
kfolds2CVinfos_glm(bbb4)
kfolds2CVinfos_glm(bbb5)
rm(list=c("bbb2","bbb3","bbb4","bbb5"))
