svm is used to train a support vector machine. It can be used to carry
out general regression and classification (of nu and epsilon-type), as
well as density estimation. A formula interface is provided.

Usage

## S3 method for class 'formula':
svm(formula, data = NULL, ...)
## S3 method for class 'default':
svm(x, y = NULL, type = NULL, kernel = "radial", degree = 3,
    gamma = 1/dim(x)[2], coef0 = 0, cost = 1, nu = 0.5,
    class.weights = NULL, cachesize = 40, tolerance = 0.001,
    epsilon = 0.5, shrinking = TRUE, cross = 0, ...)
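For illustration, a minimal sketch using the iris data shipped with R: the default radial kernel is swapped for a polynomial one and the cost parameter is raised (the parameter values are illustrative only, not recommendations).

library(e1071)
data(iris)
## polynomial kernel of degree 2 with a non-zero coef0 and a larger cost;
## gamma keeps its default of 1/(data dimension)
model <- svm(Species ~ ., data = iris,
             kernel = "polynomial", degree = 2, coef0 = 1, cost = 10)
summary(model)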
Arguments

x: a data matrix or a vector containing the training data.

y: a response vector with one label for each row (or component) of x. Can be either
a factor (for classification tasks) or a numeric vector (for regression).

degree: parameter needed for kernel of type polynomial (default: 3).

gamma: parameter needed for all kernels except linear (default: 1/(data dimension)).

coef0: parameter needed for kernels of type polynomial and sigmoid (default: 0).

nu: parameter needed for nu-classification and one-classification.

...: additional parameters passed to the low-level fitting function svm.default.

Details

svm can be used as a classification machine, as a regression machine, or as a
density estimator. Depending on whether y is a factor or not, the default setting
for type is C-classification or eps-regression, respectively, but it may be
overridden by setting an explicit value (see the classification example below).

Value

An object of class "svm" containing the fitted model (use summary and print
to get some output). In particular, the component index holds the indices of
the support vectors in the training data.

See Also

predict.svm

Examples
data(iris)
attach(iris)
## classification mode
# default with factor response:
model <- svm (Species~., data=iris)
# alternatively the traditional interface:
x <- subset (iris, select = -Species)
y <- Species
model <- svm (x, y)
print (model)
summary (model)
# test with train data
pred <- predict (model, x)
# Check accuracy:
table (pred,y)
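# the type is normally derived from y, but it can also be given
# explicitly (a sketch; parameter values are illustrative only):
model2 <- svm (x, y, type="nu-classification", nu=0.2)
summary (model2)
# cross=10 additionally performs a 10-fold cross validation on the
# training data; summary reports the accuracies:
model.cv <- svm (x, y, cross=10)
summary (model.cv)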
## try regression mode on two dimensions
# create data
x <- seq (0.1,5,by=0.05)
y <- log(x) + rnorm (x, sd=0.2)
# estimate model and predict input values
m <- svm (x,y)
new <- predict (m,x)
# visualize
plot (x,y)
points (x, log(x), col=2)
points (x, new, col=4)
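# a tighter fit can be obtained by raising cost and shrinking the
# epsilon-insensitive zone (sketch; values are illustrative only):
m2 <- svm (x, y, cost=10, epsilon=0.01)
points (x, predict (m2, x), col=3)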
## density-estimation
# create 2-dim. normal with rho=0:
X <- data.frame (a=rnorm (1000), b=rnorm (1000))
attach (X)
# traditional way:
m <- svm (X)
# formula interface:
m <- svm (~a+b)
# or:
m <- svm (~., data=X)
# test:
predict (m, t(c(0,0)))
predict (m, t(c(4,4)))
# visualization:
plot (X)
points (X[m$index,], col=2)
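# nu bounds the fraction of training points treated as outliers in
# one-classification: smaller values keep more points inside the
# estimated support (sketch; value is illustrative only):
m3 <- svm (X, nu=0.05)
# fraction of training points predicted to lie inside the support:
mean (predict (m3, X))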