mldr (version 0.4.3)

mldr_evaluate: Evaluate predictions made by a multilabel classifier

Description

Taking as input an mldr object and a matrix with the predictions given by a classifier, this function evaluates the classifier performance through several multilabel metrics.

Usage

mldr_evaluate(mldr, predictions, threshold = 0.5)

Arguments

mldr

Object of "mldr" class containing the instances to evaluate

predictions

Matrix with the labels predicted for each instance in the mldr parameter. Each element should be a value in the [0,1] range

threshold

Threshold used to generate the bipartition of labels. The default value is 0.5
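
As a rough illustration (this sketch assumes the bipartition is obtained by a simple comparison of each score against the threshold; the exact internal rule used by mldr_evaluate is not shown here), a matrix of probabilistic predictions can be turned into 0/1 label assignments like this:

# Hypothetical score matrix for 3 instances and 2 labels
scores <- matrix(c(0.9, 0.2, 0.4, 0.7, 0.6, 0.1), nrow = 3, ncol = 2)
# Values at or above the threshold become 1, the rest 0
bipartition <- (scores >= 0.5) * 1
bipartition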

Value

A list with multilabel predictive performance measures. The items in the list are:

  • accuracy

  • example_auc

  • average_precision

  • coverage

  • fmeasure

  • hamming_loss

  • macro_auc

  • macro_fmeasure

  • macro_precision

  • macro_recall

  • micro_auc

  • micro_fmeasure

  • micro_precision

  • micro_recall

  • one_error

  • precision

  • ranking_loss

  • recall

  • subset_accuracy

  • roc

The roc element corresponds to a roc object associated with the MicroAUC value. This object can be passed to plot to draw the ROC curve. The example_auc, macro_auc, micro_auc and roc members will be NULL if the pROC package is not installed.
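
As a small check of that behavior (assuming only what is stated above, namely that these members are NULL when pROC is missing):

library(mldr)
# Use the true label matrix as predictions just to obtain a result object
preds <- as.matrix(emotions$dataset[, emotions$labels$index])
res <- mldr_evaluate(emotions, preds)
if (is.null(res$roc)) {
  message("Install the pROC package to obtain example_auc, macro_auc, micro_auc and roc")
} else {
  plot(res$roc)  # ROC curve associated with the MicroAUC value
}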

See Also

mldr, Basic metrics, Averaged metrics, Ranking-based metrics, roc.mldr

Examples

library(mldr)

# Get the true labels in emotions
predictions <- as.matrix(emotions$dataset[, emotions$labels$index])
# and introduce some noise (alternatively get the predictions from some classifier)
# 200 random (instance, label) positions; emotions has 593 instances and 6 labels
noised_labels <- cbind(sample(1:593, 200, replace = TRUE), sample(1:6, 200, replace = TRUE))
predictions[noised_labels] <- sample(0:1, 200, replace = TRUE)
# then evaluate predictive performance
res <- mldr_evaluate(emotions, predictions)
str(res)
plot(res$roc, main = "ROC curve for emotions")
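
The same evaluation can be repeated with a stricter bipartition threshold; the metrics accessed below are list members documented above, and the specific threshold value here is only an illustration:

# Evaluate again using a stricter threshold for the label bipartition
res_strict <- mldr_evaluate(emotions, predictions, threshold = 0.7)
res_strict$hamming_loss
res_strict$subset_accuracy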
