Fit an SGPLS classification model.
sgpls( x, y, K, eta, scale.x=TRUE,
eps=1e-5, denom.eps=1e-20, zero.eps=1e-5, maxstep=100,
br=TRUE, ftype='iden' )
x: Matrix of predictors.
y: Vector of class indices.
K: Number of hidden components.
eta: Thresholding parameter; eta should be between 0 and 1.
scale.x: Scale predictors by dividing each predictor variable by its sample standard deviation?
eps: An effective zero for change in estimates. Default is 1e-5.
denom.eps: An effective zero for denominators. Default is 1e-20.
zero.eps: An effective zero for success probabilities. Default is 1e-5.
maxstep: Maximum number of Newton-Raphson iterations. Default is 100.
br: Apply Firth's bias reduction procedure?
ftype: Type of Firth's bias reduction procedure. Alternatives are "iden" (the approximated version) or "hat" (the original version). Default is "iden".
An object of class "sgpls" is returned.
The print, predict, and coef methods use this object.
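A brief sketch of working with the returned object (this assumes the spls package and its prostate data are installed; the type argument shown for predict is an assumption, see predict.sgpls for the actual signature):

```r
library(spls)                     # package assumed to provide sgpls
data(prostate)
f <- sgpls(prostate$x, prostate$y, K = 3, eta = 0.6, scale.x = FALSE)
print(f)                          # summary of the fitted model
b <- coef(f)                      # matrix of fitted coefficients
pred <- predict(f, type = "fit")  # fitted values; argument name assumed
```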
The SGPLS method is described in detail in Chung and Keles (2010).
SGPLS provides PLS-based classification with variable selection,
by incorporating sparse partial least squares (SPLS) proposed in Chun and Keles (2010)
into a generalized linear model (GLM) framework.
y is assumed to take numerical values 0, 1, ..., G,
where G is one less than the number of classes.
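For instance, a three-class problem (G = 2) would code the response as follows (the values below are hypothetical, for illustration only):

```r
# hypothetical response for a 3-class problem: classes coded 0, 1, 2
y <- c(0, 0, 1, 2, 1, 2)
```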
Chung D and Keles S (2010), "Sparse partial least squares classification for high dimensional data", Statistical Applications in Genetics and Molecular Biology, Vol. 9, Article 17.
Chun H and Keles S (2010), "Sparse partial least squares for simultaneous dimension reduction and variable selection", Journal of the Royal Statistical Society - Series B, Vol. 72, pp. 3--25.
print.sgpls, predict.sgpls, and coef.sgpls.
data(prostate)
# SGPLS with eta=0.6 & 3 hidden components
(f <- sgpls(prostate$x, prostate$y, K=3, eta=0.6, scale.x=FALSE))
# Print out coefficients
coef.f <- coef(f)
coef.f[coef.f!=0, ]