Usage
sgpls( x, y, K, eta, scale.x=TRUE, eps=1e-5, denom.eps=1e-20, zero.eps=1e-5, maxstep=100, br=TRUE, ftype='iden' )
Arguments
x
Matrix of predictors.
y
Vector of class indices.
K
Number of hidden components.
eta
Thresholding parameter. eta should be between 0 and 1.
scale.x
Scale predictors by dividing each predictor variable
by its sample standard deviation?
eps
An effective zero for change in estimates. Default is 1e-5.
denom.eps
An effective zero for denominators. Default is 1e-20.
zero.eps
An effective zero for success probabilities. Default is 1e-5.
maxstep
Maximum number of Newton-Raphson iterations.
Default is 100.
br
Apply Firth's bias reduction procedure?
ftype
Type of Firth's bias reduction procedure.
Alternatives are "iden" (the approximated version)
or "hat" (the original version).
Default is "iden".
Value
An object of class "sgpls" is returned.
The print, predict, and coef methods use this object.
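For illustration, a minimal sketch of fitting and inspecting such an object is given below; the simulated x and y, as well as the chosen K and eta, are arbitrary placeholders rather than values recommended by the package.

library(spls)

## Simulated binary example (illustrative only):
## 50 samples, 100 predictors, class labels coded 0/1
set.seed(1)
x <- matrix(rnorm(50 * 100), nrow = 50, ncol = 100)
y <- rbinom(50, size = 1, prob = 0.5)

## Fit SGPLS with 2 hidden components and thresholding parameter eta = 0.6
fit <- sgpls(x, y, K = 2, eta = 0.6)

## Print the fit and extract the estimated coefficients
print(fit)
coef(fit)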
Details
The SGPLS method is described in detail in Chung and Keles (2010).
SGPLS provides PLS-based classification with variable selection
by incorporating the sparse partial least squares (SPLS) method proposed in Chun and Keles (2010)
into a generalized linear model (GLM) framework.
y is assumed to take the numerical values 0, 1, ..., G,
where G + 1 is the number of classes (see the Examples section below).
References
Chung D and Keles S (2010),
"Sparse partial least squares classification for high dimensional data",
Statistical Applications in Genetics and Molecular Biology, Vol. 9, Article 17.
Chun H and Keles S (2010), "Sparse partial least squares
for simultaneous dimension reduction and variable selection",
Journal of the Royal Statistical Society - Series B, Vol. 72, pp. 3--25.
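Examples
The following sketch is illustrative rather than taken from the package: it simulates a three-class problem to demonstrate the response coding 0, 1, ..., G described in Details. The data and the tuning values K and eta are arbitrary assumptions.

library(spls)

## Simulated three-class example (illustrative only):
## 60 samples, 80 predictors, classes coded as 0, 1, 2
set.seed(2)
x <- matrix(rnorm(60 * 80), nrow = 60, ncol = 80)
y <- sample(0:2, 60, replace = TRUE)

## Fit SGPLS with Firth's bias reduction (approximated version)
fit <- sgpls(x, y, K = 3, eta = 0.55, scale.x = TRUE, br = TRUE, ftype = "iden")
print(fit)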