kernlab (version 0.6-2)

ksvm: Support Vector Machines

Description

Support Vector Machines are an excellent tool for classification, novelty detection, as well as regression. ksvm supports the well-known C-svc and nu-svc (classification), one-class-svc (novelty detection), and eps-svr and nu-svr (regression) formulations, along with the Crammer-Singer multi-class formulation spoc-svc and the bound-constraint SVM formulations C-bsvc and eps-bsvr. The implementation also supports class-probability output and confidence intervals for regression.
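
The novelty-detection formulation mentioned above is not exercised in the Examples section further down; a minimal sketch on the spam predictors (the sigma and nu values are arbitrary illustrative choices, not recommendations):

## novelty-detection sketch (not part of the original examples)
data(spam)
novelty <- ksvm(as.matrix(spam[, -58]), type = "one-svc",
                kernel = "rbfdot", kpar = list(sigma = 0.05), nu = 0.1)
## the predictions indicate whether each point is judged to belong
## to the training distribution
table(predict(novelty, as.matrix(spam[, -58])))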

Usage

## S3 method for class 'formula':
ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

## S3 method for class 'vector':
ksvm(x, ...)

## S3 method for class 'matrix':
ksvm(x, y = NULL, scaled = TRUE, type = NULL, kernel = "rbfdot",
     kpar = list(sigma = 0.1), C = 1, nu = 0.2, epsilon = 0.1,
     prob.model = FALSE, class.weights = NULL, cachesize = 40,
     tol = 0.001, shrinking = TRUE, cross = 0, fit = TRUE,
     ..., subset, na.action = na.omit)

Arguments

x
a symbolic description of the model to be fit. Note that the intercept is always excluded, whether given in the formula or not. When not using a formula, x is a matrix or vector containing the variables in the model
data
an optional data frame containing the variables in the model. By default the variables are taken from the environment from which ksvm is called.
y
a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).
scaled
A logical vector indicating the variables to be scaled. If scaled is of length 1, the value is recycled as many times as needed and all non-binary variables are scaled. Per default, data are scaled internally (both x and y variables) to zero mean and unit variance; the centering and scaling values are retained and used for later predictions.
type
ksvm can be used for classification, for regression, or for novelty detection. Depending on whether y is a factor or not, the default setting for type is C-svc or eps-svr, respectively; any of the formulations listed in the Description (e.g. "C-svc", "nu-svc", "C-bsvc", "spoc-svc", "one-svc", "eps-svr", "nu-svr", "eps-bsvr") can be requested explicitly.
kernel
the kernel function used in training and predicting. This parameter can be set to any function of class kernel which computes a dot product between two vector arguments. kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to one of the kernel-generating functions it exports (for example rbfdot for the Gaussian radial basis kernel, polydot for the polynomial kernel, vanilladot for the linear kernel, or tanhdot for the hyperbolic tangent kernel), or to a user-defined function of class kernel as in the examples below
kpar
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for the built-in kernels include:
  • sigma: inverse kernel width for the Radial Basis kernel function rbfdot
  • degree, scale, offset: for the polynomial kernel polydot
  • scale, offset: for the hyperbolic tangent kernel tanhdot
C
cost of constraints violation (default: 1)---it is the `C'-constant of the regularization term in the Lagrange formulation.
nu
parameter needed for nu-svc, one-svc, and nu-svr. The nu parameter sets the upper bound on the training error and the lower bound on the fraction of data points that become Support Vectors (default: 0.2)
epsilon
epsilon in the insensitive-loss function used for eps-svr, nu-svr and eps-bsvr (default: 0.1)
prob.model
if set to TRUE, builds a model for calculating class probabilities or, in case of regression, calculates the scaling parameter of the Laplacian distribution fitted on the residuals. Fitting is done on output data created by performing a 3-fold cross-validation on the training data (default: FALSE)
class.weights
a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named.
cachesize
cache memory in MB (default 40)
tol
tolerance of termination criterion (default: 0.001)
shrinking
option whether to use the shrinking-heuristics (default: TRUE)
cross
if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression
fit
indicates whether the fitted values should be computed and included in the model or not (default: TRUE)
...
additional parameters for the low level fitting function
subset
An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
na.action
A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found.
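
The class.weights and scaled arguments are not demonstrated in the Examples section; a minimal sketch of the named-vector form of class.weights on the spam data (the weights below are arbitrary and purely illustrative):

## class-weight sketch (not part of the original examples)
data(spam)
wfilter <- ksvm(type ~ ., data = spam, kernel = "rbfdot",
                kpar = list(sigma = 0.05), C = 5,
                class.weights = c(spam = 2, nonspam = 1), scaled = FALSE)
wfilter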

Value

An S4 object of class "ksvm" containing the fitted model. Accessor functions can be used to access the slots of the object (see examples), which include:
  • alpha: the resulting support vectors (alpha vector), possibly scaled
  • alphaindex: the index of the resulting support vectors in the data matrix. Note that this index refers to the pre-processed data (after the possible effect of na.omit and subset)
  • coefs: the corresponding coefficients times the training labels
  • b: the negative intercept
  • nSV: the number of Support Vectors
  • error: training error
  • cross: cross-validation error (when cross > 0)
  • prob.model: contains the width of the Laplacian fitted on the residuals in case of regression, or the parameters of the sigmoid fitted on the decision values in case of classification
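
As an illustration, these slots can be read through accessor functions of the same name; the sketch below reuses the filter model fitted in the Examples section and assumes the usual kernlab accessors (nSV, b, error, cross, alphaindex) are available in this version:

## accessing slots of a fitted model (sketch, reuses "filter" from the examples)
nSV(filter)          ## number of support vectors
b(filter)            ## negative intercept
error(filter)        ## training error
cross(filter)        ## cross-validation error (filter is trained with cross = 3)
alphaindex(filter)   ## indices of the support vectors in the pre-processed data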

Details

For multi-class classification with k levels, k > 2, ksvm uses the `one-against-one' approach, in which k(k-1)/2 binary classifiers are trained; the appropriate class is found by a voting scheme. If the predictor variables include factors, the formula interface must be used to get a correct model matrix. In classification, when prob.model is TRUE, a 3-fold cross-validation is performed on the data and a sigmoid function is fitted to the resulting decision values f. The plot function for binary classification ksvm objects displays a contour plot of the decision values with the corresponding support vectors highlighted. The predict function can return probabilistic output (a probability matrix) in the case of classification by setting the type parameter to "probabilities".
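
For a binary classifier this looks as follows; a minimal sketch reusing the spamtrain/spamtest split created in the Examples section (prob.model = TRUE must be set when the model is trained):

## probabilistic output for a binary model (sketch, reuses spamtrain/spamtest)
pfilter <- ksvm(type ~ ., data = spamtrain, kernel = "rbfdot",
                kpar = list(sigma = 0.05), C = 5, prob.model = TRUE)
head(predict(pfilter, spamtest[, -58], type = "probabilities"))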

References

  • Chang, Chih-Chung and Lin, Chih-Jen: LIBSVM: a library for Support Vector Machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm
  • Exact formulations of the models, algorithms, etc. can be found in: Chang, Chih-Chung and Lin, Chih-Jen: LIBSVM: a library for Support Vector Machines. http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.ps.gz
  • J. Platt: Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In Advances in Large Margin Classifiers, A. Smola, P. Bartlett, B. Schoelkopf and D. Schuurmans, Eds. Cambridge, MA: MIT Press, 2000. http://citeseer.nj.nec.com/platt99probabilistic.html
  • H.-T. Lin, C.-J. Lin and R. C. Weng: A note on Platt's probabilistic outputs for support vector machines. http://www.csie.ntu.edu.tw/~cjlin/papers/plattprob.ps
  • C.-W. Hsu and C.-J. Lin: A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13 (2002), 415-425. http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.ps.gz
  • C.-W. Hsu and C.-J. Lin: A simple decomposition method for support vector machines. Machine Learning, 46 (2002), 291-314. http://www.csie.ntu.edu.tw/~cjlin/papers/decomp.ps.gz
  • K. Crammer and Y. Singer: On the learnability and design of output codes for multiclass problems. Computational Learning Theory, 35-46, 2000. http://www.cs.huji.ac.il/~kobics/publications/mlj01.ps.gz

See Also

predict.ksvm, couple

Examples

## simple example using the spam data set
data(spam)


## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(2 * dim(spam)[1]/3)], ]
spamtest <- spam[index[(floor(2 * dim(spam)[1]/3) + 1):dim(spam)[1]], ]

## train a support vector machine
filter <- ksvm(type ~ ., data = spamtrain, kernel = "rbfdot",
               kpar = list(sigma = 0.05), C = 5, cross = 3)
filter

## predict mail type on the test set
mailtype <- predict(filter,spamtest[,-58])

## Check results
table(mailtype,spamtest[,58])


## Another example with the famous iris data
data(iris)

## Create a kernel function using the built-in rbfdot function
rbf <- rbfdot(sigma=0.1)
rbf

## train a bound constraint support vector machine
irismodel <- ksvm(Species ~ ., data = iris, type = "C-bsvc",
                  kernel = rbf, C = 10, prob.model = TRUE)

irismodel

## get fitted values
fitted(irismodel)

## Test on the training set with probabilities as output
predict(irismodel, iris[,-5], type="probabilities")


## Demo of the plot function
x <- rbind(matrix(rnorm(120), ncol = 2), matrix(rnorm(120, mean = 3), ncol = 2))
y <- matrix(c(rep(1, 60), rep(-1, 60)))

svp <- ksvm(x,y,type="C-svc")
plot(svp)



#### Use custom kernel 

## product of a (shifted) linear kernel and a Gaussian radial basis kernel
k <- function(x, y) { (sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2)) }
class(k) <- "kernel"

data(promotergene)

## train svm using custom kernel
gene <- ksvm(Class~.,data=promotergene,kernel=k,C=10,cross=5)

gene

## regression
# create data
x <- seq(-20,20,0.1)
y <- sin(x)/x + rnorm(401,sd=0.03)

# train support vector machine
regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
plot(x,y,type="l")
lines(x,predict(regm,x),col="red")
