psvm is used to train a support vector machine. It can be used to carry
out general regression and classification (of nu and epsilon-type), as
well as density-estimation. A formula interface is provided.

Usage:

psvm(x, y = NULL, type = "C", kernel = "radial", degree = 3,
     gamma = if (is.vector(x)) 1 else 1/ncol(x), coef0 = 0, cost = 1,
     nu = 0.5, class.weights = NULL, cachesize = 40, tolerance = 0.001,
     epsilon = 0.1, shrinking = TRUE, cross = 0, probability = FALSE,
     fitted = TRUE, seed = 1L, scale = TRUE, na.action = na.omit)

Arguments:

x: a data matrix, a vector, or a sparse matrix (an object of class Matrix provided by the Matrix package, or of class matrix.csr provided by the SparseM package).
y: a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).

scale: a logical vector indicating the variables to be scaled. If scale is of length 1, the value is recycled as many times as needed. Per default, data are scaled internally (both x and y variables) to zero mean and unit variance.

type: psvm can be used as a classification machine, as a regression machine, or for novelty detection. Depending on whether y is a factor or not, the default setting for type is C-classification or eps-regression, respectively, but it may be overwritten by setting an explicit value.

degree: parameter needed for kernels of type polynomial (default: 3).

gamma: parameter needed for all kernels except linear (default: 1/(data dimension)).

coef0: parameter needed for kernels of type polynomial and sigmoid (default: 0).

nu: parameter needed for nu-classification, nu-regression, and one-classification.

shrinking: option whether to use the shrinking heuristics (default: TRUE).

fitted: logical indicating whether the fitted values should be computed and included in the model (default: TRUE).

...: additional parameters for the low-level fitting function psvm.default.

na.action: a function specifying the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail.

Value:

An object of class "psvm" containing the fitted model, including the support vectors and their index in the data matrix (this index refers to the preprocessed data, i.e. after the possible effect of na.omit and subset).

Details:

For multiclass-classification with k levels, k > 2, libsvm uses the 'one-against-one' approach, in which k(k-1)/2 binary classifiers are trained; the appropriate class is found by a voting scheme.
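For example, a three-level factor response yields three pairwise binary classifiers whose votes are combined into the predicted class. The following is a minimal sketch; it assumes the SPRINT package is attached under the name sprint and that a predict method analogous to e1071's predict.svm is available for "psvm" objects:

## Multi-class classification sketch; predict() for "psvm" objects is
## assumed to behave like e1071's predict.svm.
library(sprint)                ## package name assumed; provides psvm
data(iris)

x <- as.matrix(iris[, 1:4])    # numeric predictors
y <- iris$Species              # 3 levels -> 3*(3-1)/2 = 3 binary classifiers

model <- psvm(x, y, kernel = "radial", cost = 1)
pred  <- predict(model, x)     # class labels chosen by the voting scheme
table(pred, y)                 # training confusion matrix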
libsvm internally uses a sparse data representation, which is
also high-level supported by the package SparseM.

plot.svm allows a simple graphical
visualization of classification models.
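Returning to the sparse representation mentioned above, the following sketch passes a matrix.csr object directly; it assumes psvm accepts such input in the same way e1071's svm does and that the SparseM package is installed:

## Sparse-input sketch; matrix.csr support is assumed to mirror e1071's svm().
library(SparseM)
library(sprint)                ## package name assumed; provides psvm
data(iris)

xs <- as.matrix.csr(as.matrix(iris[, 1:4]))   # compressed sparse row matrix
model <- psvm(xs, iris$Species)               # fit directly on the sparse object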
The probability model for classification fits a logistic distribution using maximum likelihood to the decision values of all binary classifiers, and computes the a-posteriori class probabilities for the multi-class problem using quadratic optimization. The probabilistic regression model assumes (zero-mean) Laplace-distributed errors for the predictions, and estimates the scale parameter using maximum likelihood.
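A sketch of requesting these probabilities, assuming that predict() for "psvm" objects takes a probability argument and attaches a "probabilities" attribute as e1071's predict.svm does:

## Probability sketch; the predict(..., probability = TRUE) call and the
## "probabilities" attribute are assumptions carried over from e1071's svm.
library(sprint)                ## package name assumed; provides psvm
data(iris)

model <- psvm(as.matrix(iris[, 1:4]), iris$Species,
              probability = TRUE, seed = 1L)   # also fit the probability model
pred <- predict(model, as.matrix(iris[, 1:4]), probability = TRUE)
head(attr(pred, "probabilities"))              # a-posteriori class probabilities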
See also:

SPRINT, svm
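Examples:

A regression sketch: it assumes that, as described under type, a numeric response switches the fit to epsilon-regression (otherwise type has to be set explicitly), and that a predict method analogous to e1071's predict.svm is available.

## Epsilon-regression sketch on the built-in trees data.
library(sprint)                ## package name assumed; provides psvm
data(trees)

x <- as.matrix(trees[, c("Girth", "Height")])
y <- trees$Volume              # numeric response -> regression

model <- psvm(x, y, epsilon = 0.1, cost = 1)
fit   <- predict(model, x)     # predicted volumes for the training data
sqrt(mean((fit - y)^2))        # training RMSE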