Based on a confusion matrix for binary classification problems, various performance measures can be calculated. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:
"tp": True Positives.
"tp"
"fn": False Negatives.
"fn"
"fp": False Positives.
"fp"
"tn": True Negatives.
"tn"
"tpr": True Positive Rate.
"tpr"
"fnr": False Negative Rate.
"fnr"
"fpr": False Positive Rate.
"fpr"
"tnr": True Negative Rate.
"tnr"
"ppv": Positive Predictive Value.
"ppv"
"fdr": False Discovery Rate.
"fdr"
"for": False Omission Rate.
"for"
"npv": Negative Predictive Value.
"npv"
"precision": Alias for "ppv".
"precision"
"recall": Alias for "tpr".
"recall"
"sensitivity": Alias for "tpr".
"sensitivity"
"specificity": Alias for "tnr".
"specificity"
If the denominator is 0, the score is returned as NA.
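The rate and predictive-value measures above follow directly from the four counts. A minimal base-R sketch (the counts below are made-up illustrative values, not output of the package):

```r
# Hypothetical counts from a 2x2 confusion matrix
tp <- 40; fn <- 10; fp <- 5; tn <- 45
tpr <- tp / (tp + fn)  # True Positive Rate (recall / sensitivity)
tnr <- tn / (tn + fp)  # True Negative Rate (specificity)
ppv <- tp / (tp + fp)  # Positive Predictive Value (precision)
npv <- tn / (tn + fn)  # Negative Predictive Value
c(tpr = tpr, tnr = tnr, ppv = ppv, npv = npv)
```

Note that with tp + fn == 0 the tpr expression would divide by zero; this is the case the NA rule above covers.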
confusion_measures(m, type = NULL)
m: (matrix()) Confusion matrix, e.g. as returned by field confusion of PredictionClassif. Truth is in columns, predicted response is in rows.
type: (character()) Selects the measure(s) to calculate. See description.
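To illustrate the documented matrix layout, tpr can also be computed by hand from a 2x2 matrix (a base-R sketch; the counts and the class labels "pos"/"neg" are made up, and the first level is assumed to be the positive class):

```r
# Truth in columns, predicted response in rows, as documented for m
m <- matrix(c(40, 10, 5, 45), nrow = 2,
            dimnames = list(response = c("pos", "neg"),
                            truth    = c("pos", "neg")))
tp <- m["pos", "pos"]  # predicted positive, truly positive
fn <- m["neg", "pos"]  # predicted negative, truly positive
tp / (tp + fn)         # tpr; should agree with confusion_measures(m, type = "tpr")
```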
R6::R6Class() inheriting from MeasureClassif.
task = mlr_tasks$get("wine")
e = Experiment$new("wine", "classif.rpart")$train()$predict()
m = e$prediction$confusion
confusion_measures(m, type = c("precision", "recall"))