mlr3 (version 0.1.0-9000)

MeasureClassifConfusion: Binary Classification Measures Derived from a Confusion Matrix

Description

Based on a confusion matrix for binary classification problems, various performance measures can be calculated. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

  • "tp": True Positives.

  • "fn": False Negatives.

  • "fp": False Positives.

  • "tn": True Negatives.

  • "tpr": True Positive Rate.

  • "fnr": False Negative Rate.

  • "fpr": False Positive Rate.

  • "tnr": True Negative Rate.

  • "ppv": Positive Predictive Value.

  • "fdr": False Discovery Rate.

  • "for": False Omission Rate.

  • "npv": Negative Predictive Value.

  • "precision": Alias for "ppv".

  • "recall": Alias for "tpr".

  • "sensitivity": Alias for "tpr".

  • "specificity": Alias for "tnr".

If the denominator is 0, the score is returned as NA.
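The definitions above can be sketched directly in base R. The matrix values below are invented for illustration; the positive class is assumed to sit in the first row and column, matching the orientation described under Arguments (truth in columns, predicted response in rows):

```r
# Hypothetical confusion matrix: truth in columns, response in rows,
# positive class first (values are made up for illustration).
m = matrix(c(20,  5,    # predicted "pos": 20 true positives,  5 false positives
             10, 65),   # predicted "neg": 10 false negatives, 65 true negatives
           nrow = 2, byrow = TRUE,
           dimnames = list(response = c("pos", "neg"),
                           truth    = c("pos", "neg")))
tp = m[1, 1]; fp = m[1, 2]; fn = m[2, 1]; tn = m[2, 2]

tpr = tp / (tp + fn)  # true positive rate (sensitivity / recall)
tnr = tn / (tn + fp)  # true negative rate (specificity)
ppv = tp / (tp + fp)  # positive predictive value (precision)
npv = tn / (tn + fn)  # negative predictive value

# When a denominator is 0 the score is returned as NA, e.g. the TPR of a
# matrix with no positive ground-truth cases:
m0 = matrix(c(0, 5, 0, 95), nrow = 2, byrow = TRUE)
tp0 = m0[1, 1]; fn0 = m0[2, 1]
tpr0 = if (tp0 + fn0 == 0) NA else tp0 / (tp0 + fn0)

round(c(tpr = tpr, tnr = tnr, ppv = ppv, npv = npv), 2)
```

The remaining rates follow the same pattern, e.g. `fnr = fn / (tp + fn)` is the complement of `tpr`.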

Usage

MeasureClassifConfusion

confusion_measures(m, type = NULL)

Arguments

m

:: matrix() Confusion matrix, e.g. as returned by the field confusion of PredictionClassif. Truth is in columns, predicted response is in rows.

type

:: character() Selects the measures to compute. See Description.

Format

R6::R6Class() inheriting from MeasureClassif.

Examples

library(mlr3)
task = mlr_tasks$get("german_credit")
learner = mlr_learners$get("classif.rpart")
p = learner$train(task)$predict(task)
p$confusion
round(confusion_measures(p$confusion), 2)