Usage:

MSEP(object, estimate, newdata, comps = 1:object$ncomp,
     cumulative = TRUE, intercept = cumulative, se = FALSE, ...)
RMSEP(...)
R2(object, estimate, newdata, comps = 1:object$ncomp,
   cumulative = TRUE, intercept = cumulative, se = FALSE, ...)
Value:

An object of class "mvrVal", with components "MSEP", "RMSEP" or "R2", respectively. An estimate for a model with 0 components is prepended if intercept is TRUE.
Details:

RMSEP simply calls MSEP and takes the square root of the estimates. It therefore accepts the same arguments as MSEP.

Several estimators can be used. "train" is the training or calibration data estimate, also called (R)MSEC. For R2, this is the unadjusted R^2; it is over-optimistic and should not be used for assessing models. "CV" is the cross-validation estimate, and "adjCV" (for RMSEP and MSEP) is the bias-corrected cross-validation estimate. They can only be calculated if the model has been cross-validated. Finally, "test" is the test set estimate, using newdata as the test set.
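For intuition, the quantities behind these estimates can be sketched in base R on hypothetical observed and predicted vectors (the vectors and arithmetic below are illustrative only, not the pls API):

```r
# Hypothetical observed and predicted values, not from a real pls fit
obs  <- c(3.1, 2.5, 4.0, 3.3, 2.9)
pred <- c(3.0, 2.7, 3.8, 3.5, 3.0)

msep  <- mean((obs - pred)^2)   # mean squared error of prediction
rmsep <- sqrt(msep)             # RMSEP is the square root of MSEP

# Unadjusted R^2 (the "train" estimate of R2): 1 - SSE/SST
r2 <- 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)
```

On training data these formulas reward fit rather than predictive ability, which is why the cross-validation and test set estimates are preferred for model assessment.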
Which estimators to use is decided as follows. If estimate is not specified, the test set estimate is returned if newdata is specified; otherwise, the CV and adjusted CV (for RMSEP and MSEP) estimates if the model has been cross-validated; otherwise, the training data estimate. If estimate is "all", all possible estimates are calculated. Otherwise, the specified estimates are calculated.
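The selection rules above can be sketched as a small helper (choose_estimate is a hypothetical illustration, not a function in the package):

```r
# Hypothetical helper mirroring the selection rules described above
choose_estimate <- function(estimate = NULL, have_newdata = FALSE,
                            cross_validated = FALSE) {
  if (!is.null(estimate)) return(estimate)          # explicit choice wins
  if (have_newdata)       return("test")            # newdata given: test set
  if (cross_validated)    return(c("CV", "adjCV"))  # cross-validated model
  "train"                                           # fall back to training data
}

choose_estimate(have_newdata = TRUE)  # "test"
choose_estimate()                     # "train"
```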
Several model sizes can also be specified. If cumulative is TRUE (the default), length(comps) models are used, with comps[1] components, ..., comps[length(comps)] components. Otherwise, a single model with the components comps[1], ..., comps[length(comps)] is used.
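The two modes can be illustrated with a helper that lists which component sets are evaluated (models_evaluated is a hypothetical sketch, not part of the package):

```r
# Hypothetical illustration of the cumulative argument
models_evaluated <- function(comps, cumulative = TRUE) {
  if (cumulative) {
    # one model per entry: comps[1] components, ..., comps[length(comps)] components
    lapply(comps, seq_len)
  } else {
    # a single model built from exactly the listed components
    list(comps)
  }
}

models_evaluated(c(1, 3))                      # two models: 1 component, 3 components
models_evaluated(c(1, 3), cumulative = FALSE)  # one model with components 1 and 3
```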
If intercept is TRUE, a model with zero components is also used (in addition to the above). For R2, this is simply defined as 0.
See Also:

mvr, crossval, mvrCv, validationplot, plot.mvrVal
Examples:

data(sensory)
mod <- plsr(Panel ~ Quality, ncomp = 4, data = sensory, CV = TRUE,
            length.seg = 1)
RMSEP(mod)
plot(R2(mod))