pls (version 1.2-1)

MSEP: MSEP, RMSEP and R2 of PLSR and PCR models

Description

Functions to estimate the mean squared error of prediction (MSEP), root mean squared error of prediction (RMSEP) and $R^2$ for fitted PCR and PLSR models. Test-set, cross-validation and calibration-set estimates are implemented.

Usage

MSEP(object, ...)
## S3 method for class 'mvr':
MSEP(object, estimate, newdata, comps = 1:object$ncomp,
     cumulative = TRUE, intercept = cumulative, se = FALSE, ...)

RMSEP(object, ...)
## S3 method for class 'mvr':
RMSEP(object, ...)

R2(object, estimate, newdata, comps = 1:object$ncomp,
   cumulative = TRUE, intercept = cumulative, se = FALSE, ...)

Arguments

object
an mvr object.
estimate
a character vector. Which estimators to use. Should be a subset of c("all", "train", "CV", "adjCV", "test"). "adjCV" is only available for (R)MSEP. See below for how the estimators are chosen.
newdata
a data frame with test set data.
comps
a vector of positive integers. The components or number of components to use. See below.
cumulative
logical. See below.
intercept
logical. Whether estimates for a model with zero components should be returned as well.
se
logical. Whether estimated standard errors of the estimates should be calculated. Not implemented yet.
...
further arguments sent to underlying functions or (for RMSEP) to MSEP.

Value

An object of class "mvrVal", with components

  • val: a three-dimensional array of estimates. The first dimension is the different estimators, the second is the response variables and the third is the models.
  • type: "MSEP", "RMSEP" or "R2".
  • comps: the components specified, with 0 prepended if intercept is TRUE.
  • call: the function call.

Details

RMSEP simply calls MSEP and takes the square root of the estimates. It therefore accepts the same arguments as MSEP.
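A minimal sketch of this relationship, assuming the pls package is available and using the sensory data from the Examples section below:

```r
library(pls)
data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, validation = "LOO")

## RMSEP estimates are the element-wise square root of the MSEP estimates
m <- MSEP(mod)
r <- RMSEP(mod)
stopifnot(all.equal(r$val, sqrt(m$val)))
```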

Several estimators can be used. "train" is the training or calibration data estimate, also called (R)MSEC. For R2, this is the unadjusted $R^2$. It is overoptimistic and should not be used for assessing models. "CV" is the cross-validation estimate, and "adjCV" (for RMSEP and MSEP) is the bias-corrected cross-validation estimate. They can only be calculated if the model has been cross-validated. Finally, "test" is the test set estimate, using newdata as test set.

Which estimators to use is decided as follows. If estimate is not specified, the test set estimate is returned if newdata is specified, otherwise the CV and adjusted CV (for RMSEP and MSEP) estimates if the model has been cross-validated, otherwise the training data estimate. If estimate is "all", all possible estimates are calculated. Otherwise, the specified estimates are calculated.
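The selection rules can be illustrated as follows (a sketch, again assuming the pls package and the sensory data; since the model below is cross-validated, calling MSEP without arguments returns the CV and adjusted CV estimates):

```r
library(pls)
data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, validation = "LOO")

## No estimate given, no newdata: the model is cross-validated,
## so the CV and adjCV estimates are returned
MSEP(mod)

## Explicit choice: the (overoptimistic) training estimate only
MSEP(mod, estimate = "train")
```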

Several model sizes can also be specified. If cumulative is TRUE (default), length(comps) models are used, with comps[1] components, ..., comps[length(comps)] components. Otherwise, a single model with the components comps[1], ..., comps[length(comps)] is used.
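For instance (a sketch assuming the pls package and the sensory data; the non-cumulative call uses the training estimate, since that is always available):

```r
library(pls)
data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, validation = "LOO")

## Cumulative (default): four models, with 1, 2, 3 and 4 components
R2(mod, estimate = "train", comps = 1:4)

## Non-cumulative: one model using exactly components 1 and 3
R2(mod, estimate = "train", comps = c(1, 3), cumulative = FALSE)
```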

If intercept is TRUE, a model with zero components is also used (in addition to the above). For R2, this is simply defined as 0.
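As a small illustration (same assumptions as above), the zero-component model shows up as a 0 prepended to the comps component of the returned object:

```r
library(pls)
data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, validation = "LOO")

## Include the zero-component (intercept-only) model
rmse <- RMSEP(mod, estimate = "CV", intercept = TRUE)
rmse$comps  # 0 is prepended to the specified components
```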

References

Mevik, B.-H., Cederkvist, H. R. (2004) Mean Squared Error of Prediction (MSEP) Estimates for Principal Component Regression (PCR) and Partial Least Squares Regression (PLSR). Journal of Chemometrics, 18(9), 422--429.

See Also

mvr, crossval, mvrCv, validationplot, plot.mvrVal

Examples

data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, validation = "LOO")
RMSEP(mod)
plot(R2(mod))
