sgpls {spls} | R Documentation |
Description

Fit an SGPLS classification model.
Usage

sgpls( x, y, K, eta, scale.x=TRUE, eps=1e-5, denom.eps=1e-20,
       zero.eps=1e-5, maxstep=100, br=TRUE, ftype='iden' )
Arguments

x          Matrix of predictors.

y          Vector of class indices.

K          Number of hidden components.

eta        Thresholding parameter. eta should be between 0 and 1.

scale.x    Scale predictors by dividing each predictor variable by its
           sample standard deviation?

eps        An effective zero for change in estimates. Default is 1e-5.

denom.eps  An effective zero for denominators. Default is 1e-20.

zero.eps   An effective zero for success probabilities. Default is 1e-5.

maxstep    Maximum number of Newton-Raphson iterations. Default is 100.

br         Apply Firth's bias reduction procedure?

ftype      Type of Firth's bias reduction procedure. Alternatives are
           "iden" (the approximated version) or "hat" (the original
           version). Default is "iden".
Details

The SGPLS method is described in detail in Chung and Keles (2009).
SGPLS provides PLS-based classification with variable selection
by incorporating sparse partial least squares (SPLS), proposed in
Chun and Keles (2009), into a generalized linear model (GLM) framework.
y is assumed to take the numerical values 0, 1, ..., G,
where G is one less than the number of classes.
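For illustration, a factor response can be mapped onto this 0, 1, ..., G coding with base R. This is a sketch, not part of the package documentation; grp is a made-up three-class example vector:

```r
# Hypothetical example: recode a three-class factor into the
# 0, 1, ..., G integer coding that sgpls expects (here G = 2).
grp <- factor(c("low", "mid", "high", "mid", "low"),
              levels=c("low", "mid", "high"))
y <- as.integer(grp) - 1   # factor levels 1..3 become 0, 1, 2
y
```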
Value

An object of class "sgpls" is returned.
The print, predict, and coef methods use this object.
Author(s)

Dongjun Chung and Sunduz Keles.
References

Chung, D. and Keles, S. (2009). "Sparse partial least squares classification for high dimensional data" (http://www.stat.wisc.edu/~keles/Papers/C_SPLS.pdf).

Chun, H. and Keles, S. (2009). "Sparse partial least squares for simultaneous dimension reduction and variable selection". To appear in Journal of the Royal Statistical Society, Series B (http://www.stat.wisc.edu/~keles/Papers/SPLS_Nov07.pdf).
See Also

print.sgpls, predict.sgpls, and coef.sgpls.
Examples

data(prostate)

# SGPLS with eta=0.6 & 3 hidden components
f <- sgpls( prostate$x, prostate$y, K=3, eta=0.6, scale.x=FALSE )
print(f)

# Print out coefficients
coef.f <- coef(f)
coef.f[ coef.f!=0, ]
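The fitted object can also be used for class prediction. The sketch below assumes that predict.sgpls accepts new observations through a newx argument; consult ?predict.sgpls for the actual interface:

```r
# Sketch, continuing the example above; `newx` is assumed here --
# check ?predict.sgpls for the exact argument names.
data(prostate)
f <- sgpls( prostate$x, prostate$y, K=3, eta=0.6, scale.x=FALSE )
pred <- predict( f, newx=prostate$x )   # in-sample class predictions
table( pred, prostate$y )               # cross-tabulate against the truth
```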