kernlab (version 0.9-0)

lssvm: Least Squares Support Vector Machine

Description

The lssvm function is an implementation of the Least Squares SVM. lssvm also includes a reduced version of the Least Squares SVM that uses a decomposition of the kernel matrix computed by the csi function.

Usage

## S3 method for class 'formula':
lssvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

## S3 method for class 'vector':
lssvm(x, ...)

## S3 method for class 'matrix':
lssvm(x, y = NULL, scaled = TRUE, kernel = "rbfdot", kpar = "automatic",
      type = NULL, tau = 0.01, tol = 0.0001, rank = floor(dim(x)[1]/4),
      delta = 40, cross = 0, fit = TRUE, ..., subset, na.action = na.omit)

## S3 method for class 'kernelMatrix':
lssvm(x, y, type = NULL, tau = 0.01, tol = 0.0001,
      rank = floor(dim(x)[1]/3), delta = 40, cross = 0, fit = TRUE, ...)

## S3 method for class 'list':
lssvm(x, y, scaled = TRUE, kernel = "stringdot",
      kpar = list(length = 4, lambda = 0.5), type = NULL, tau = 0.01,
      reduced = TRUE, tol = 0.0001, rank = floor(dim(x)[1]/3),
      delta = 40, cross = 0, fit = TRUE, ..., subset)

Arguments

x
a symbolic description of the model to be fit; a matrix or vector containing the training data when a formula interface is not used; or a kernelMatrix or a list of character vectors.
data
an optional data frame containing the variables in the model. By default the variables are taken from the environment which `lssvm' is called from.
y
a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for classification or regression; regression is currently not supported).
scaled
A logical vector indicating the variables to be scaled. If scaled is of length 1, the value is recycled as many times as needed and all non-binary variables are scaled. Per default, data are scaled internally to zero mean and unit variance.
type
Type of problem. Either "classification" or "regression". Depending on whether y is a factor or not, the default setting for type is "classification" or "regression" respectively, but it can be overwritten by setting an explicit value.
kernel
the kernel function used in training and predicting. This parameter can be set to any function, of class kernel, which computes a dot product between two vector arguments. kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to one of the following strings:
  • rbfdot Radial Basis kernel function ("Gaussian")
  • polydot Polynomial kernel function
  • vanilladot Linear kernel function
  • tanhdot Hyperbolic tangent kernel function
  • laplacedot Laplacian kernel function
  • besseldot Bessel kernel function
  • anovadot ANOVA RBF kernel function
  • splinedot Spline kernel
  • stringdot String kernel
kpar
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:
  • sigma inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot"
  • degree, scale, offset for the Polynomial kernel "polydot"
  • scale, offset for the Hyperbolic tangent kernel function "tanhdot"
  • sigma, order, degree for the Bessel kernel "besseldot"
  • sigma, degree for the ANOVA kernel "anovadot"
  • length, lambda, normalized for the String kernel "stringdot"
Hyper-parameters for user defined kernels can be passed through the kpar parameter as well. (See the sketch after this argument list.)
tau
the regularization parameter (default 0.01)
reduced
if set to FALSE, the full linear problem of the lssvm is solved; when TRUE, a reduced method using csi is used.
rank
the maximal rank of the decomposed kernel matrix, see csi
delta
number of columns of the Cholesky decomposition performed in advance, see csi (default: 40)
tol
tolerance of the termination criterion for the csi function; a lower tolerance leads to a more precise approximation but may increase the training time and the size of the decomposed matrix (default: 0.0001)
fit
indicates whether the fitted values should be computed and included in the model or not (default: 'TRUE')
cross
if an integer value k > 0 is specified, a k-fold cross validation on the training data is performed to assess the quality of the model: the Mean Squared Error for regression
subset
An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
na.action
A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)
...
additional parameters
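
As an illustration of the kernel, kpar, and cross arguments, here is a minimal sketch (not part of the original examples; the sigma value is an illustrative assumption, not a tuned recommendation, and it assumes kernlab's cross() accessor is available for "lssvm" objects):

library(kernlab)
data(iris)

## fit with an explicit RBF kernel and kernel parameters,
## then assess the model with 5-fold cross validation
m <- lssvm(Species ~ ., data = iris,
           kernel = "rbfdot", kpar = list(sigma = 0.05),
           cross = 5)
cross(m)  # cross validation error estimate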

Value

An S4 object of class "lssvm" containing the fitted model. Accessor functions can be used to access the slots of the object (see the examples and the sketch below), which include:
  • alpha the parameters of the "lssvm"
  • coef the model coefficients (identical to alpha)
  • b the model offset
  • xmatrix the training data used by the model
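
A minimal sketch of using these accessors (assuming kernlab exports accessor generics of the same names as the slots, i.e. alpha(), b(), and xmatrix()):

library(kernlab)
data(iris)
m <- lssvm(Species ~ ., data = iris)

alpha(m)    # the model parameters
b(m)        # the model offset
xmatrix(m)  # the training data used by the model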

Details

Least Squares Support Vector Machines are a reformulation of the standard SVM that leads to solving linear KKT systems. The algorithm is based on the minimization of a classical penalized least-squares cost function. The current implementation approximates the kernel matrix by an incomplete Cholesky factorization obtained by the csi function; the solution is therefore an approximation to the exact solution of the lssvm optimization problem. The quality of the solution depends on the approximation and can be influenced by the "rank", "delta", and "tol" parameters.
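
To make the role of these parameters concrete, here is a short sketch (the rank and tol values are illustrative assumptions, not tuned recommendations, and it assumes kernlab's error() accessor returns the training error of the fit):

library(kernlab)
data(iris)

## coarse approximation: low rank, loose tolerance; faster but less exact
m1 <- lssvm(Species ~ ., data = iris, rank = 10, tol = 0.01)

## finer approximation: higher rank, tighter tolerance; closer to the
## exact lssvm solution but potentially slower
m2 <- lssvm(Species ~ ., data = iris, rank = 30, tol = 1e-6)

error(m1)  # training error under the coarse approximation
error(m2)  # training error under the finer approximation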

References

J. A. K. Suykens and J. Vandewalle, Least Squares Support Vector Machine Classifiers, Neural Processing Letters, vol. 9, issue 3, June 1999.

See Also

ksvm, gausspr, csi

Examples

## simple example
data(iris)

lir <- lssvm(Species~.,data=iris)

lir

lirr <- lssvm(Species~.,data= iris, reduced = FALSE)

lirr

## Using the kernelMatrix interface

iris <- unique(iris)

rbf <- rbfdot(sigma = 0.5)

k <- kernelMatrix(rbf, as.matrix(iris[,-5]))

klir <- lssvm(k, iris[, 5])

klir

pre <- predict(klir, k)