Fits a PLSR model with the wide kernel algorithm.
widekernelpls.fit(X, Y, ncomp, center = TRUE, stripped = FALSE,
                  tol = .Machine$double.eps^0.5, maxit = 100, ...)
X: a matrix of observations. NAs and Infs are not allowed.

Y: a vector or matrix of responses. NAs and Infs are not allowed.

ncomp: the number of components to be used in the modelling.

center: logical, determines if the X and Y matrices are mean centered or not. Default is to perform mean centering.

stripped: logical. If TRUE the calculations are stripped as much as possible for speed; this is meant for use with cross-validation or simulations when only the coefficients are needed. Defaults to FALSE.

tol: numeric. The tolerance used for determining convergence in the algorithm.

maxit: positive integer. The maximal number of iterations used in the internal eigenvector calculation.

...: other arguments. Currently ignored.
A list containing the following components is returned:
coefficients: an array of regression coefficients for 1, ..., ncomp components. The dimensions of coefficients are c(nvar, npred, ncomp), with nvar the number of X variables and npred the number of variables to be predicted in Y.

scores: a matrix of scores.

loadings: a matrix of loadings.

loading.weights: a matrix of loading weights.

Yscores: a matrix of Y-scores.

Yloadings: a matrix of Y-loadings.

projection: the projection matrix used to convert X to scores.

Xmeans: a vector of means of the X variables.

Ymeans: a vector of means of the Y variables.
fitted.values: an array of fitted values. The dimensions of fitted.values are c(nobj, npred, ncomp), with nobj the number of samples and npred the number of Y variables.

residuals: an array of regression residuals. It has the same dimensions as fitted.values.

Xvar: a vector with the amount of X-variance explained by each component.

Xtotvar: total variance in X.
If stripped is TRUE, only the components coefficients, Xmeans and Ymeans are returned.
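A minimal sketch of the stripped mode, using simulated data for illustration (it assumes the pls package is installed; per the Value section above, only coefficients, Xmeans and Ymeans survive stripping):

```r
library(pls)

set.seed(42)
X <- matrix(rnorm(10 * 50), nrow = 10)   # 10 observations, 50 variables
Y <- matrix(rnorm(10), ncol = 1)

# Stripped fit: skips scores, loadings, fitted values etc. for speed,
# as one might want inside a hand-rolled cross-validation loop.
fit <- widekernelpls.fit(X, Y, ncomp = 3, stripped = TRUE)
names(fit)   # should contain only coefficients, Xmeans and Ymeans
```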
This function should not be called directly, but through the generic functions plsr or mvr with the argument method = "widekernelpls". The wide kernel PLS algorithm is efficient when the number of variables is (much) larger than the number of observations. For a very wide X, for instance 12 x 18000, it can be twice as fast as kernelpls.fit and simpls.fit. For other matrices, however, it can be much slower. The results are equal to those of the NIPALS algorithm.
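The recommended interface can be sketched as follows, on simulated wide data (the response and matrix here are made up for illustration):

```r
library(pls)

set.seed(1)
n <- 12; p <- 500                      # wide X: far more variables than rows
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - X[, 2] + rnorm(n, sd = 0.1)

# Fit via the generic plsr() rather than calling widekernelpls.fit() directly.
fit <- plsr(y ~ X, ncomp = 4, method = "widekernelpls")
dim(fit$coefficients)                  # c(nvar, npred, ncomp) = c(500, 1, 4)
```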
Rännar, S., Lindgren, F., Geladi, P. and Wold, S. (1994) A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm. Journal of Chemometrics, 8, 111–125.