chemometrics (version 1.4.1)

svmEval: Support Vector Machine evaluation by CV

Description

Evaluation for Support Vector Machines (SVM) by cross-validation

Usage

svmEval(X, grp, train, kfold = 10, gamvec = seq(0, 10, by = 1),
        kernel = "radial", degree = 3, plotit = TRUE, legend = TRUE,
        legpos = "bottomright", ...)

Arguments

X
standardized complete X data matrix (training and test data)
grp
factor with groups for complete data (training and test data)
train
row indices of X indicating training data objects
kfold
number of folds for cross-validation
gamvec
range of gamma values to be evaluated, see svm
kernel
kernel to be used for SVM, should be one of "radial", "linear", "polynomial", "sigmoid"; defaults to "radial", see svm
degree
degree of the polynomial if kernel is "polynomial"; defaults to 3, see svm
plotit
if TRUE a plot will be generated
legend
if TRUE a legend will be added to the plot
legpos
positioning of the legend in the plot
...
additional plot arguments

Value

The misclassification errors obtained for each value of gamvec: the error for the training data, the cross-validated error (CV error), and the error for the test data (see Details).

Details

The data are split into a calibration set and a test set, as defined by the row indices in "train". Within the calibration set, "kfold"-fold cross-validation is performed: the SVM classifier is fitted to "kfold"-1 parts and evaluated on the remaining part. The misclassification error is then computed for the training data, for the CV test data (CV error), and for the test data, for each value of gamvec.
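
The snippet below is not part of the package; it is a minimal sketch of this CV step for a single gamma value, calling e1071::svm directly on the same fgl data used in the Examples. svmEval repeats essentially this loop for every value in gamvec; all object names here (Xcal, folds, cverr) are illustrative only.

library(e1071)
data(fgl, package = "MASS")
X <- scale(fgl[, 1:9])                 # standardized data matrix
grp <- fgl$type                        # group membership
set.seed(1)
train <- sample(seq_len(nrow(X)), round(nrow(X) * 2/3))
Xcal <- X[train, ]                     # calibration (training) set
grpcal <- grp[train]
kfold <- 10
gamma <- 0.1                           # one candidate gamma value
folds <- sample(rep(1:kfold, length.out = nrow(Xcal)))
cverr <- numeric(kfold)
for (j in 1:kfold) {
  # fit on kfold-1 parts, evaluate on the remaining part
  fit <- svm(Xcal[folds != j, ], grpcal[folds != j],
             kernel = "radial", gamma = gamma)
  pred <- predict(fit, Xcal[folds == j, ])
  cverr[j] <- mean(pred != grpcal[folds == j])   # misclassification rate
}
mean(cverr)                            # CV error for this gamma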

References

K. Varmuza and P. Filzmoser: Introduction to Multivariate Statistical Analysis in Chemometrics. CRC Press, Boca Raton, FL, 2009.

See Also

svm

Examples

data(fgl,package="MASS")
grp=fgl$type
X=scale(fgl[,1:9])
k=length(unique(grp))
dat=data.frame(grp,X)
n=nrow(X)
ntrain=round(n*2/3)
require(e1071)
set.seed(143)
train=sample(1:n,ntrain)
ressvm=svmEval(X,grp,train,gamvec=c(0,0.05,0.1,0.2,0.3,0.5,1,2,5),
  legpos="topright")
title("Support vector machines")
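
To pick a gamma value from the result, one could inspect the returned object and take the value with the smallest mean CV error. The component names cvMean and gamvec below are assumptions made by analogy with the other *Eval functions in chemometrics; check str(ressvm) to confirm them.

str(ressvm)                                        # inspect the returned list
gamopt <- ressvm$gamvec[which.min(ressvm$cvMean)]  # assumed component names
gamopt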
