Models can easily be accessed via getLearnerModel.
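A minimal sketch of this (assuming lrn and task as constructed in the example at the end of this page):
mod = train(lrn, task)
# one fitted binary model per label
binary.mods = getLearnerModel(mod)
# unwrap one level further to the raw base-learner fits (e.g. rpart objects)
raw.mods = getLearnerModel(mod, more.unwrap = TRUE)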
Note that it does not make sense to set a threshold in the used base learner when you predict probabilities. On the other hand, it can make a lot of sense to call setThreshold on the MultilabelBinaryRelevanceWrapper for each label individually, or to tune these thresholds with tuneThreshold, especially when you face very unbalanced class distributions for each binary label; see the sketch below.
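A hedged sketch of both options, assuming a probability prediction pred as produced in the example below; the per-label threshold values are made up for illustration:
# set one decision threshold per binary label on the probability prediction
pred2 = setThreshold(pred, c(label1 = 0.3, label2 = 0.7))
# or search for thresholds that optimize a multilabel measure
res = tuneThreshold(pred = pred, measure = multilabel.hamloss)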
makeMultilabelBinaryRelevanceWrapper(learner)
learner
(Learner | character(1))
The learner. If you pass a string the learner will be created via makeLearner.

Value
Learner.
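For instance, both of the following calls construct the same wrapped learner; the second relies on the string being passed to makeLearner internally:
lrn = makeMultilabelBinaryRelevanceWrapper(makeLearner("classif.rpart"))
lrn = makeMultilabelBinaryRelevanceWrapper("classif.rpart")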
Other multilabel: getMultilabelBinaryPerformances, makeMultilabelClassifierChainsWrapper, makeMultilabelDBRWrapper, makeMultilabelNestedStackingWrapper, makeMultilabelStackingWrapper
Other wrapper: makeBaggingWrapper, makeConstantClassWrapper, makeCostSensClassifWrapper, makeCostSensRegrWrapper, makeDownsampleWrapper, makeFeatSelWrapper, makeFilterWrapper, makeImputeWrapper, makeMulticlassWrapper, makeMultilabelClassifierChainsWrapper, makeMultilabelDBRWrapper, makeMultilabelNestedStackingWrapper, makeMultilabelStackingWrapper, makeOverBaggingWrapper, makePreprocWrapperCaret, makePreprocWrapper, makeRemoveConstantFeaturesWrapper, makeSMOTEWrapper, makeTuneWrapper, makeUndersampleWrapper, makeWeightedClassesWrapper
d = getTaskData(yeast.task)
# subsample rows and keep only 2 labels and 3 features so the example runs faster
d = d[seq(1, nrow(d), by = 20), c(1:2, 15:17)]
task = makeMultilabelTask(data = d, target = c("label1", "label2"))
lrn = makeLearner("classif.rpart")
lrn = makeMultilabelBinaryRelevanceWrapper(lrn)
lrn = setPredictType(lrn, "prob")
# train, predict and evaluate
mod = train(lrn, task)
pred = predict(mod, task)
performance(pred, measures = list(multilabel.hamloss, multilabel.subset01, multilabel.f1))
# the next call basically has the same structure for any multilabel meta wrapper
getMultilabelBinaryPerformances(pred, measures = list(mmce, auc))
# the above also works with predictions from resample, as sketched below
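As a hedged sketch of that last point (the 2-fold cross-validation setup is chosen here only for illustration):
rdesc = makeResampleDesc("CV", iters = 2)
r = resample(lrn, task, rdesc, measures = list(multilabel.hamloss))
getMultilabelBinaryPerformances(r$pred, measures = list(mmce))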