mldr (version 0.4.3)

Averaged metrics: Multi-label averaged evaluation metrics

Description

Evaluation metrics derived from basic confusion-matrix counts, averaged according to several criteria.

Usage

accuracy(true_labels, predicted_labels, undefined_value = "diagnose")

precision(true_labels, predicted_labels, undefined_value = "diagnose")

micro_precision(true_labels, predicted_labels, ...)

macro_precision(true_labels, predicted_labels, undefined_value = "diagnose")

recall(true_labels, predicted_labels, undefined_value = "diagnose")

micro_recall(true_labels, predicted_labels, ...)

macro_recall(true_labels, predicted_labels, undefined_value = "diagnose")

fmeasure(true_labels, predicted_labels, undefined_value = "diagnose")

micro_fmeasure(true_labels, predicted_labels, ...)

macro_fmeasure(true_labels, predicted_labels, undefined_value = "diagnose")

Arguments

true_labels

Matrix of true labels, columns corresponding to labels and rows to instances.

predicted_labels

Matrix of predicted labels, columns corresponding to labels and rows to instances.

undefined_value

The value to be returned when a computation results in an undefined value due to a division by zero. See details.

...

Additional parameters for the precision, recall and F-measure computations.

Value

Atomic numeric vector containing the resulting value in the range [0, 1].

Details

Available metrics in this category

  • accuracy: Bipartition based accuracy

  • fmeasure: Example and bipartition based F_1 measure (harmonic mean between precision and recall, averaged by instance)

  • macro_fmeasure: Label and bipartition based F_1 measure (harmonic mean between precision and recall, macro-averaged by label)

  • macro_precision: Label and bipartition based precision (macro-averaged by label)

  • macro_recall: Label and bipartition based recall (macro-averaged by label)

  • micro_fmeasure: Label and bipartition based F_1 measure (micro-averaged)

  • micro_precision: Label and bipartition based precision (micro-averaged)

  • micro_recall: Label and bipartition based recall (micro-averaged)

  • precision: Example and bipartition based precision (averaged by instance)

  • recall: Example and bipartition based recall (averaged by instance)
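
To make the averaging criteria concrete, here is a minimal base R sketch (an illustration only, not the package's internal code) of micro- versus macro-averaged precision on a small hypothetical pair of label matrices:

```r
# Base R sketch of micro- vs macro-averaged precision.
# Columns correspond to labels, rows to instances.
true_labels <- matrix(c(1, 1, 1,
                        0, 0, 0,
                        1, 0, 0), ncol = 3, byrow = TRUE)
predicted_labels <- matrix(c(1, 1, 1,
                             1, 0, 0,
                             1, 0, 0), ncol = 3, byrow = TRUE)

tp <- colSums(true_labels == 1 & predicted_labels == 1)  # per-label true positives
fp <- colSums(true_labels == 0 & predicted_labels == 1)  # per-label false positives

# Micro-averaging pools the counts over all labels before dividing once
micro <- sum(tp) / (sum(tp) + sum(fp))
# Macro-averaging computes one precision per label, then averages the ratios
macro <- mean(tp / (tp + fp))
```

Here micro equals 4/5 = 0.8 while macro equals mean(2/3, 1, 1) = 8/9, showing how macro-averaging gives each label equal weight regardless of how many positives it has.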

Deciding a value when denominators are zero

Parameter undefined_value: the value to be returned when a computation is undefined due to a division by zero. It can be a single value (e.g. NA or 0), a function with the following signature:

function(tp, fp, tn, fn)

or a string corresponding to one of the predefined strategies. These are:

  • "diagnose": This strategy performs the following decision:

    • Returns 1 if there are no true labels and none were predicted

    • Returns 0 otherwise

    This is the default strategy, and the one followed by MULAN.

  • "ignore": Occurrences of undefined values will be ignored when averaging (averages will be computed with potentially fewer values than instances/labels). Undefined values in micro-averaged metrics cannot be ignored (they will return NA).

  • "na": Will return NA (of class numeric), which is propagated when averaging (averaged metrics will potentially return NA).
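
As an illustration of how an undefined_value function slots into a metric, here is a hedged base R sketch (not mldr's actual internals) of the "diagnose" strategy applied to a single per-label precision; the helper names are hypothetical:

```r
# Hypothetical sketch of the "diagnose" strategy: when precision's
# denominator (tp + fp) is zero, return 1 if there were no true labels
# and none were predicted, and 0 otherwise.
diagnose <- function(tp, fp, tn, fn) {
  if (tp + fn == 0 && tp + fp == 0) 1 else 0
}

safe_precision <- function(tp, fp, tn, fn) {
  if (tp + fp == 0) diagnose(tp, fp, tn, fn) else tp / (tp + fp)
}

safe_precision(0, 0, 5, 0)  # no true labels, none predicted: 1
safe_precision(0, 0, 4, 1)  # a true label exists but none predicted: 0
```

Any function with the signature function(tp, fp, tn, fn) could replace diagnose here, which is what passing a custom function as undefined_value amounts to.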

See Also

mldr_evaluate, mldr_to_labels

Other evaluation metrics: Basic metrics, Ranking-based metrics

Examples

true_labels <- matrix(c(
  1, 1, 1,
  0, 0, 0,
  1, 0, 0,
  1, 1, 1,
  0, 0, 0,
  1, 0, 0
), ncol = 3, byrow = TRUE)
predicted_labels <- matrix(c(
  1, 1, 1,
  0, 0, 0,
  1, 0, 0,
  1, 1, 0,
  1, 0, 0,
  0, 1, 0
), ncol = 3, byrow = TRUE)

precision(true_labels, predicted_labels, undefined_value = "diagnose")
macro_recall(true_labels, predicted_labels, undefined_value = 0)
macro_fmeasure(
  true_labels, predicted_labels,
  undefined_value = function(tp, fp, tn, fn) as.numeric(fp == 0 && fn == 0)
)
