scadsvc
Fit SCAD SVM model
SVM with variable selection (clone/gene selection) using the SCAD penalty.
Usage
scadsvc(lambda1 = 0.01, x, y, a = 3.7, tol = 10^(-4), class.weights = NULL,
        seed = 123, maxIter = 700, verbose = TRUE)
Arguments
- lambda1
tuning parameter of the SCAD penalty function (default: 0.01)
- x
n-by-d data matrix to train (n chips/patients, d clones/genes)
- y
vector of class labels, -1 or 1, for the n chips/patients
- a
second tuning parameter of the SCAD penalty function (default: 3.7)
- tol
cut-off value below which a coefficient is treated as 0 (default: 10^(-4))
- class.weights
a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1), but all supplied components have to be named (default: NULL); see the sketch after this argument list.
- seed
random seed for reproducibility (default: 123)
- maxIter
maximal number of iterations (default: 700)
- verbose
print progress messages (default: TRUE)
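A minimal sketch of how class.weights can be supplied, using the simulated data from the Examples section; the weight values chosen here are purely illustrative, not recommendations:
# assumes the package providing scadsvc() and sim.data() is attached
train <- sim.data(n = 200, ng = 100, nsg = 10, corr = FALSE, seed = 12)

# named weight vector: names must match the class labels (-1 and 1);
# here class "1" is weighted three times as strongly as class "-1" (illustrative values)
w <- c("-1" = 1, "1" = 3)

model <- scadsvc(x = as.matrix(t(train$x)), y = train$y,
                 lambda1 = 0.01, class.weights = w)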
Details
Adapted from the MATLAB code available at http://www4.stat.ncsu.edu/~hzhang/software.html.
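For orientation, the SCAD penalty in its standard form (Fan and Li, 2001) can be written down directly. The sketch below is illustrative only and is not the package's internal implementation; it shows how lambda1 sets the overall penalty level and how a controls where the penalty flattens out:
# SCAD penalty for a coefficient w (standard Fan & Li (2001) form);
# lambda corresponds to lambda1 and a to the a argument of scadsvc
scad_penalty <- function(w, lambda = 0.01, a = 3.7) {
  aw <- abs(w)
  ifelse(aw <= lambda,
         lambda * aw,                                                      # linear (lasso-like) part
         ifelse(aw <= a * lambda,
                -(aw^2 - 2 * a * lambda * aw + lambda^2) / (2 * (a - 1)),  # quadratic transition
                (a + 1) * lambda^2 / 2))                                   # constant: no extra shrinkage
}

# the penalty grows linearly near 0 and becomes flat beyond a * lambda
curve(scad_penalty(x), from = -0.2, to = 0.2)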
Value
The returned model contains:
- coefficients of the hyperplane
- intercept of the hyperplane
- the index of the selected features (genes) in the data matrix
- internal calculation product xqx = 0.5 * x1 * inv_Q * t(x1), see code for more details
- fit of the hyperplane f(x) for all training samples with the reduced set of features
- the index of the resulting support vectors in the data matrix
- type of svm, from the svm function
- optimal lambda1
- corresponding gacv
- number of iterations
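As a usage sketch for these components: the coefficients and intercept define the usual linear decision function f(x) on the selected features. The component names w, b and xind below are assumptions for illustration and should be checked against str(model); model and train refer to the objects created in the Examples section.
# ASSUMED component names (w, b, xind) - verify with str(model)
x.sel <- as.matrix(t(train$x))[, model$xind, drop = FALSE]   # keep only the selected features
f.hat <- drop(x.sel %*% model$w + model$b)                   # decision values f(x)
table(predicted = sign(f.hat), truth = train$y)              # compare with training labels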
References
Zhang, H. H., Ahn, J., Lin, X. and Park, C. (2006). Gene selection using support vector machines with nonconvex penalty. Bioinformatics, 22, pp. 88-95.
See Also
sim.data
Examples
## Not run:
# simulate data
train <- sim.data(n = 200, ng = 100, nsg = 10, corr = FALSE, seed = 12)
print(str(train))
# train data
model <- scadsvc(x = as.matrix(t(train$x)), y = train$y, lambda1 = 0.01)
print(str(model))
print(model)
## End(Not run)
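A possible follow-up sketch: the effect of lambda1 on the amount of feature selection can be inspected by refitting over a small grid. The component name xind used to count selected features is an assumption here and should be checked against str(model).
## Not run:
# refit over a grid of lambda1 values to see how the penalty level
# affects the number of selected features ('xind' is an ASSUMED name)
for (l in c(0.001, 0.01, 0.1, 0.5)) {
  m <- scadsvc(x = as.matrix(t(train$x)), y = train$y, lambda1 = l)
  cat("lambda1 =", l, "-> selected features:", length(m$xind), "\n")
}
## End(Not run)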