This routine provides binary classifiers that satisfy a
predefined error rate on one type of error and that
simultaneously minimize the other type of error. For
convenience, several points on the ROC curve around the
predefined error rate are returned.
nplSVM
performs Neyman-Pearson learning for classification.
nplSVM(x, y, ..., class = 1, constraint = 0.05,
constraint.factors = c(3, 4, 6, 9, 12)/6, do.select = TRUE)
x: either a formula or the features
y: either the data or the labels corresponding to the features x. It can be a character string, in which case the data is loaded using liquidData. If it is of type liquidData, then after training and selection the model is tested using the testing data (y$test).
...: configuration parameters, see Configuration. Can be threads=2, display=1, gpus=1, etc.
class: the normal class (the other class becomes the alarm class)
constraint: the false alarm rate that should be achieved
constraint.factors: specifies the factors around constraint
do.select: if TRUE, the whole selection is also performed for this model
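For instance, a call could fix the false alarm rate of the normal class at 5% (a minimal sketch using the bundled 'banana-bc' data; the constraint value is only illustrative, not a recommendation):

```r
library(liquidSVM)

# Class 1 is the normal class; its false alarm rate should stay
# near the constraint of 0.05 while the other error is minimized.
model <- nplSVM(Y ~ ., liquidData('banana-bc'),
                class = 1, constraint = 0.05)

# Errors recorded during training and selection
model$train_errors
model$select_errors
```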
an object of type svm. Depending on the usage, this object
also has $train_errors, $select_errors, and $last_result
properties.
Please look at the demo vignette (vignette('demo'))
for more examples.
The labels should only have the values c(1,-1).
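For example, a 0/1 label vector has to be recoded before training; a base-R sketch (the variable names are illustrative):

```r
# Recode 0/1 labels into the required c(1,-1) coding:
# 1 stays 1 (the normal class), 0 becomes -1 (the alarm class).
y01 <- c(1, 0, 0, 1, 1)
y <- ifelse(y01 == 1, 1, -1)
y  # 1 -1 -1 1 1
```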
min_weight, max_weight, weight_steps: you might have to define
which weighted classification problems will be considered.
The choice is usually a bit tricky. Good luck ...
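A hedged sketch of supplying these as configuration parameters through the ... argument (whether your problem needs them, and the particular values, are assumptions):

```r
library(liquidSVM)

# Restrict the grid of weighted classification problems considered
# during training; the values below are illustrative only.
model <- nplSVM(Y ~ ., 'banana-bc',
                min_weight = 0.1, max_weight = 10, weight_steps = 10)
```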
# NOT RUN {
model <- nplSVM(Y ~ ., 'banana-bc', display=1)
## a worked example can be seen at
vignette("demo",package="liquidSVM")
# }