mlr3 (version 0.1.4)

PredictionClassif: Prediction Object for Classification

Description

This object wraps the predictions returned by a learner of class LearnerClassif, i.e. the predicted response and class probabilities.

If the response is not provided during construction, but class probabilities are, the response is calculated from the probabilities: the class label with the highest probability is chosen. In case of ties, a label is selected randomly.

Format

R6::R6Class object inheriting from Prediction.

Construction

p = PredictionClassif$new(task = NULL, row_ids = task$row_ids, truth = task$truth(), response = NULL, prob = NULL)
  • task :: TaskClassif Task, used to extract defaults for row_ids and truth.

  • row_ids :: (integer() | character()) Row ids of the observations in the test set.

  • truth :: factor() True (observed) labels. See the note on manual construction.

  • response :: (character() | factor()) Vector of predicted class labels. One element for each observation in the test set. Character vectors are automatically converted to factors. See the note on manual construction.

  • prob :: matrix() Numeric matrix of posterior class probabilities with one column for each class and one row for each observation in the test set. Columns must be named with class labels, row names are automatically removed. If prob is provided, but response is not, the class labels are calculated from the probabilities using mlr3misc::which_max() with ties_method set to "random".
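
A minimal sketch of manual construction, assuming a made-up two-class problem with invented labels and probabilities; because response is omitted here, it is derived from prob as described above:

library(mlr3)

prob = matrix(c(0.8, 0.2,
                0.4, 0.6),
              nrow = 2, byrow = TRUE,
              dimnames = list(NULL, c("pos", "neg")))
truth = factor(c("pos", "neg"), levels = c("pos", "neg"))

# no response supplied, so it is computed from the probabilities
p = PredictionClassif$new(row_ids = 1:2, truth = truth, prob = prob)
p$response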

Fields

All fields from Prediction, and additionally:

  • response :: factor() Access to the stored predicted class labels.

  • prob :: matrix() Access to the stored probabilities.

  • confusion :: matrix() Confusion matrix resulting from the comparison of truth and response. Truth is in columns, predicted response is in rows; see the sketch below this list.

The field task_type is set to "classif".
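
A short sketch of the confusion field's orientation, using a prediction object p such as the one constructed above; the field should correspond to a plain cross-tabulation of response against truth, with predicted labels in the rows:

p$confusion

# same layout as a manual cross-tabulation
table(response = p$response, truth = p$truth)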

Methods

  • set_threshold(th) numeric() -> self Sets the prediction response based on the provided threshold. See the section on thresholding for more information.

Thresholding

If probabilities are stored, it is possible to change the threshold which determines the predicted class label. Usually, the label of the class with the highest predicted probability is selected. For binary classification problems, this threshold defaults to 0.5. For cost-sensitive or imbalanced classification problems, manually adjusting the threshold can improve predictive performance.

  • For binary problems, only a single threshold value can be set. If the predicted probability of the positive class exceeds the threshold, the positive class is predicted. If the probability equals the threshold, the label is selected randomly.

  • For binary and multi-class problems, a named numeric vector of thresholds can be set. The length and names must correspond to the number of classes and the class names, respectively. To determine the class label, the probabilities are divided by the thresholds. This results in a ratio > 1 if the probability exceeds the threshold, and a ratio < 1 otherwise. Note that either none or several of the ratios may exceed 1 at the same time; regardless, the class label with the maximum ratio is selected. Ties in the ratio are broken randomly.
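
A small arithmetic sketch of the ratio rule, using made-up probabilities and thresholds for a three-class problem; which.max() is used here only for illustration, while mlr3 itself breaks ties randomly as described above:

prob = c(setosa = 0.10, versicolor = 0.50, virginica = 0.40)
th   = c(setosa = 0.05, versicolor = 0.90, virginica = 0.05)

ratio = prob / th
ratio                    # setosa = 2.00, versicolor ~ 0.56, virginica = 8.00

# the class with the largest ratio is predicted
names(which.max(ratio))  # "virginica"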

See Also

Other Prediction: PredictionRegr, Prediction

Examples

task = tsk("iris")
learner = lrn("classif.rpart", predict_type = "prob")
learner$train(task)
p = learner$predict(task)
p$predict_types
head(as.data.table(p))

# confusion matrix
p$confusion

# change threshold
th = c(0.05, 0.9, 0.05)
names(th) = task$class_names

# new predictions
p$set_threshold(th)$response
p$score(measures = msr("classif.ce"))