In liquidSVM an application cycle is divided into a training phase, in which various SVM
models are created and validated, a selection phase, in which the SVM models that best
satisfy a certain criterion are selected, and a test phase, in which the selected models are
applied to test data. These three phases are built upon several components that can be
freely combined: solvers, hyper-parameter selection, and working sets.
All of these can be configured (see Configuration).
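The train/select/test cycle can be sketched, independently of liquidSVM, as a toy model-selection loop. The closed-form ridge model and the hyper-parameter grid below are illustrative assumptions, not liquidSVM internals:

```python
# Hypothetical sketch (NOT the liquidSVM API) of the three-phase cycle:
# train several models, select the best on validation error, test the winner.
import random

random.seed(0)

# Toy 1-D regression data: y = 2*x plus Gaussian noise.
data = [(x / 10, 2 * x / 10 + random.gauss(0, 0.1)) for x in range(100)]
train, val, test = data[:60], data[60:80], data[80:]

def fit_ridge(points, lam):
    # Closed-form 1-D ridge regression: slope = sum(x*y) / (sum(x^2) + lam).
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / (sxx + lam)

def mse(slope, points):
    return sum((slope * x - y) ** 2 for x, y in points) / len(points)

# Training phase: create one model per hyper-parameter value.
models = {lam: fit_ridge(train, lam) for lam in (0.01, 0.1, 1.0, 10.0)}
# Selection phase: keep the model with the smallest validation error.
best_lam = min(models, key=lambda lam: mse(models[lam], val))
# Test phase: apply the selected model to unseen test data.
print(best_lam, round(mse(models[best_lam], test), 4))
```

In liquidSVM the same three phases are driven by the configured solver, hyper-parameter grid, and working-set strategy rather than by an explicit loop like this.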
For instance, multi-class classification with \(k\) labels has to be delegated to several binary classification
problems, called tasks, either using all-vs-all (\(k(k-1)/2\) tasks, each trained on the subset of samples
carrying the corresponding pair of labels) or one-vs-all (\(k\) tasks, each trained on the full data set).
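The two decompositions trade the number of tasks against the size of each task's training set; the task counts stated above can be checked directly:

```python
# Number of binary tasks produced by the two multi-class decompositions.
def all_vs_all_tasks(k):
    # one task per unordered pair of labels, trained on that pair's subset
    return k * (k - 1) // 2

def one_vs_all_tasks(k):
    # one task per label, trained on the full data set
    return k

for k in (2, 3, 10):
    print(k, all_vs_all_tasks(k), one_vs_all_tasks(k))
```

For \(k=2\) both schemes coincide in a single binary task, while for \(k=10\) all-vs-all already produces 45 (smaller) tasks against 10 (full-size) ones.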
Every task can be split into cells in order to handle larger data sets (for example \(>10000\) samples).
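The effect of cells can be sketched as bounding the number of samples any single solver call has to handle. liquidSVM builds spatially informed cells; the contiguous chunking below is only an illustrative stand-in:

```python
# Hypothetical sketch of splitting a task's samples into cells of bounded
# size (liquidSVM uses spatial partitions; here we simply chunk by order).
def split_into_cells(samples, max_cell_size=10000):
    return [samples[i:i + max_cell_size]
            for i in range(0, len(samples), max_cell_size)]

cells = split_into_cells(list(range(25000)), max_cell_size=10000)
print([len(c) for c in cells])  # [10000, 10000, 5000]
```

Each cell is then trained and validated independently, which is what makes data sets far beyond \(10000\) samples tractable.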
For every task and every cell, several folds are then created to enable cross-validated hyper-parameter selection.
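The fold creation amounts to standard \(k\)-fold splitting within each cell: every fold serves once as validation set while the remaining folds are used for training. A minimal sketch (not liquidSVM's internal implementation):

```python
# Sketch of k-fold index splitting for cross-validated selection.
def kfold_indices(n, folds=5):
    base, extra = divmod(n, folds)
    splits, start = [], 0
    for f in range(folds):
        size = base + (1 if f < extra else 0)
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        splits.append((train, val))
        start += size
    return splits

splits = kfold_indices(10, folds=5)
print([len(val) for _, val in splits])  # [2, 2, 2, 2, 2]
```

Hyper-parameters are then chosen by the validation error averaged over the folds.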
The following learning scenarios can be used out of the box:
mcSVM: binary and multi-class classification
lsSVM: least squares regression
nplSVM: Neyman-Pearson learning, i.e. classification with a prescribed rate on one type of error
rocSVM: Receiver Operating Characteristic (ROC) curves, obtained by solving multiple weighted binary classification problems
qtSVM: quantile regression
exSVM: expectile regression
bsSVM: bootstrapping
For convenience we also provide the function kern to calculate kernel matrices as used by the SVM.
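As a rough illustration of what such a kernel matrix contains, here is a Gaussian (RBF) kernel matrix in pure Python; the function name and the `gamma` parameterization are assumptions for this sketch and do not mirror kern's actual signature:

```python
# Illustrative Gaussian (RBF) kernel matrix for 1-D inputs:
# K[i][j] = exp(-gamma * (x_i - x_j)^2).
import math

def gauss_kernel_matrix(xs, gamma=1.0):
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

K = gauss_kernel_matrix([0.0, 1.0, 2.0], gamma=0.5)
print(round(K[0][1], 4))  # exp(-0.5)
```

The matrix is symmetric with ones on the diagonal, as expected for a Gaussian kernel.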
liquidSVM can benefit heavily from native compilation; hence we recommend (re-)installing it
using the information provided in the installation section
of the documentation vignette.