MSEP(object, ...)
## S3 method for class 'mvr':
MSEP(object, estimate, newdata, ncomp = 1:object$ncomp, comps,
     intercept = cumulative, se = FALSE, ...)

RMSEP(object, ...)
## S3 method for class 'mvr':
RMSEP(object, ...)

R2(object, estimate, newdata, ncomp = 1:object$ncomp, comps,
   intercept = cumulative, se = FALSE, ...)

mvrValstats(object, estimate, newdata, ncomp = 1:object$ncomp, comps,
            intercept = cumulative, se = FALSE, ...)
RMSEP simply calls MSEP and takes the square root of the
estimates; it therefore accepts the same arguments as MSEP.

Several estimators can be used. "train" is the training or
calibration data estimate, also called (R)MSEC. For R2, this is
the unadjusted $R^2$; it is overoptimistic and should not be used
for assessing models. "CV" is the cross-validation estimate, and
"adjCV" (for RMSEP and MSEP) is the bias-corrected
cross-validation estimate. These can only be calculated if the
model has been cross-validated. Finally, "test" is the test set
estimate, using newdata as the test set.
Which estimators to use is decided as follows (see below for
mvrValstats). If estimate is not specified, the test set estimate
is returned if newdata is specified; otherwise the CV and
adjusted CV (for RMSEP and MSEP) estimates if the model has been
cross-validated; otherwise the training data estimate. If
estimate is "all", all possible estimates are calculated.
Otherwise, the specified estimates are calculated.
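To illustrate this selection logic, here is a short sketch using the oliveoil data shipped with the pls package (the same data as in the example section below):

```r
library(pls)
data(oliveoil)

## Cross-validated model: with no estimate given,
## the CV and adjCV estimates are returned
mod <- plsr(sensory ~ chemical, ncomp = 4, data = oliveoil,
            validation = "LOO")
RMSEP(mod)

## Not cross-validated: the default falls back to the training estimate
mod2 <- plsr(sensory ~ chemical, ncomp = 4, data = oliveoil)
RMSEP(mod2)

## Request every available estimate explicitly
RMSEP(mod, estimate = "all")
```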
Several model sizes can also be specified. If comps is missing
(or is NULL), length(ncomp) models are used, with
ncomp[1] components, ..., ncomp[length(ncomp)]
components. Otherwise, a single model with the components
comps[1], ..., comps[length(comps)] is used.
If intercept is TRUE, a model with zero components is
also used (in addition to the above).
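The ncomp/comps distinction can be sketched as follows (assuming a cross-validated pls model on the oliveoil data, as in the example section below):

```r
library(pls)
data(oliveoil)
mod <- plsr(sensory ~ chemical, ncomp = 4, data = oliveoil,
            validation = "LOO")

## ncomp: four separate models with 1, 2, 3 and 4 components,
## plus the zero-component model (intercept defaults to TRUE here)
MSEP(mod, ncomp = 1:4, estimate = "train")

## comps: one single model built from components 1 and 2 together;
## intercept then defaults to FALSE
MSEP(mod, comps = 1:2, estimate = "train")
```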
The $R^2$ values returned by "R2" are calculated as $1
- SSE/SST$, where $SST$ is the (corrected) total sum of squares
of the response, and $SSE$ is the sum of squared errors for either
the fitted values (i.e., the residual sum of squares), test set
predictions or cross-validated predictions (i.e., the $PRESS$).
For estimate = "train", this is equivalent to the squared
correlation between the fitted values and the response. For the
cross-validation and test set estimates, the estimate is often
called the prediction $R^2$.
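To make the formula concrete, the training $R^2$ can be reproduced by hand; this sketch uses the univariate yarn data shipped with pls so that $SSE$ and $SST$ are plain scalars:

```r
library(pls)
data(yarn)
m <- plsr(density ~ NIR, ncomp = 4, data = yarn)

y    <- yarn$density
yhat <- drop(fitted(m))[, 2]      # fitted values of the 2-component model
SSE  <- sum((y - yhat)^2)         # residual sum of squares
SST  <- sum((y - mean(y))^2)      # corrected total sum of squares

1 - SSE/SST                       # the training R^2 for 2 components
cor(y, yhat)^2                    # identical for the training estimate
```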
mvrValstats is a utility function that calculates the
statistics needed by MSEP and R2. It is not intended to
be used interactively. It accepts the same arguments as MSEP
and R2. However, the estimate argument must be
specified explicitly: no partial matching and no automatic choice
is made. The function simply calculates the types of estimates it
knows, and leaves the rest untouched.
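For example (a sketch; the estimate names must be spelled out in full as a character vector):

```r
library(pls)
data(oliveoil)
mod <- plsr(sensory ~ chemical, ncomp = 4, data = oliveoil,
            validation = "LOO")

## explicit estimate vector; "all" and partial names are not accepted here
vs <- mvrValstats(mod, estimate = c("train", "CV"))
str(vs)
```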
See also: mvr, crossval, mvrCv, validationplot, plot.mvrVal

Examples:
data(oliveoil)
mod <- plsr(sensory ~ chemical, ncomp = 4, data = oliveoil, validation = "LOO")
RMSEP(mod)
plot(R2(mod))