mlr3 (version 0.1.4)

confusion_measures: Calculate Confusion Measures

Description

Calculates various performance measures from a 2x2 confusion matrix of a binary classification problem. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

  • "tp": True Positives.

  • "fn": False Negatives.

  • "fp": False Positives.

  • "tn": True Negatives.

  • "tpr": True Positive Rate.

  • "fnr": False Negative Rate.

  • "fpr": False Positive Rate.

  • "tnr": True Negative Rate.

  • "ppv": Positive Predictive Value.

  • "fdr": False Discovery Rate.

  • "for": False Omission Rate.

  • "npv": Negative Predictive Value.

  • "dor": Diagnostic Odds Ratio.

  • "f1": F1 Measure.

  • "precision": Alias for "ppv".

  • "recall": Alias for "tpr".

  • "sensitivity": Alias for "tpr".

  • "specificity": Alias for "tnr".

If the denominator is 0, the returned score is NA.
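
For illustration, here is a minimal sketch (not part of the package documentation) that builds a 2x2 confusion matrix by hand, with truth in columns and predicted response in rows as expected by m, and recovers a few measures from their defining ratios. It assumes, as in the linked diagram, that the first row and column correspond to the positive class; the variable names tp, fn, fp, tn are ours.

library(mlr3)

# Hand-built confusion matrix: truth in columns, predicted response in rows.
m = matrix(c(20, 5,    # truth = "pos": 20 predicted "pos" (tp), 5 predicted "neg" (fn)
             10, 65),  # truth = "neg": 10 predicted "pos" (fp), 65 predicted "neg" (tn)
           nrow = 2,
           dimnames = list(response = c("pos", "neg"), truth = c("pos", "neg")))

tp = m["pos", "pos"]; fn = m["neg", "pos"]
fp = m["pos", "neg"]; tn = m["neg", "neg"]

tp / (tp + fn)               # tpr (recall / sensitivity)
tp / (tp + fp)               # ppv (precision)
2 * tp / (2 * tp + fp + fn)  # f1
confusion_measures(m)        # the same values, among the other measures listed above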

Usage

confusion_measures(m, type = NULL)

Arguments

m

:: matrix() Confusion matrix, e.g. as returned by the field confusion of PredictionClassif. Truth is in columns, predicted response is in rows.

type

:: character() Selects the measure(s) to calculate. See Description for possible values.

Value

(named numeric()) of confusion measures.

Examples

library(mlr3)
task = tsk("german_credit")
learner = lrn("classif.rpart")
p = learner$train(task)$predict(task)
round(confusion_measures(p$confusion), 2)
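
The type argument can be used to restrict the output to selected measures; a small follow-up sketch, assuming type accepts a character vector of the names listed in the Description:

confusion_measures(p$confusion, type = c("precision", "recall"))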