mldr (version 0.2.82)

mldr_evaluate: Evaluates the predictions made by a multilabel classifier

Description

Taking as input an mldr object and a matrix with the predictions given by a classifier, this function evaluates the classifier performance through several multilabel metrics.
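
In practice, the predictions matrix must match the dimensions of the mldr object: one row per instance and one column per label, each cell holding a score in the [0,1] range. A minimal sketch of that contract, using the built-in emotions dataset from the Examples below (the 593 x 6 shape is specific to that dataset):

library(mldr)
# Required shape for predictions: one row per instance, one column per label
c(instances = nrow(emotions$dataset), labels = nrow(emotions$labels))  # 593, 6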

Usage

mldr_evaluate(mldr, predictions, threshold = 0.5)

Arguments

mldr
Object of mldr type containing the instances to evaluate
predictions
Matrix with the labels predicted for each instance in the mldr parameter. Each element should be a value in the [0,1] range
threshold
Threshold used to generate the bipartition of labels. By default, 0.5 is used (see the sketch after these arguments)
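
The thresholding step can be pictured as follows. This is only an illustrative sketch of the idea, not the package's internal code (whether the comparison is strict is an implementation detail of mldr):

# Toy score matrix: 2 instances x 3 labels
scores <- matrix(c(0.9, 0.2, 0.4, 0.7, 0.6, 0.1), nrow = 2)
# Scores at or above the threshold become predicted (relevant) labels
bipartition <- ifelse(scores >= 0.5, 1, 0)
bipartition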

Value

  • A list with multilabel predictive performance measures. The items in the list will be:
    • Accuracy: Example and bipartition based accuracy (averaged by instance)
    • AUC: Example and ranking based Area Under the ROC Curve (averaged by instance)
    • AveragePrecision: Example and ranking based average precision (how many steps have to be made in the ranking to reach a certain relevant label, averaged by instance)
    • Coverage: Example and ranking based coverage (how many steps have to be made in the ranking to cover all the relevant labels, averaged by instance)
    • FMeasure: Example and bipartition based F_1 measure (harmonic mean between precision and recall, averaged by instance)
    • HammingLoss: Example and bipartition based Hamming Loss (symmetric difference between sets of labels, averaged by instance)
    • MacroAUC: Label and ranking based Area Under the ROC Curve (macro-averaged by label)
    • MacroFMeasure: Label and bipartition based F_1 measure (harmonic mean between precision and recall, macro-averaged by label)
    • MacroPrecision: Label and bipartition based precision (macro-averaged by label)
    • MacroRecall: Label and bipartition based recall (macro-averaged by label)
    • MicroAUC: Label and ranking based Area Under the ROC Curve (micro-averaged)
    • MicroFMeasure: Label and bipartition based F_1 measure (micro-averaged)
    • MicroPrecision: Label and bipartition based precision (micro-averaged)
    • MicroRecall: Label and bipartition based recall (micro-averaged)
    • OneError: Example and ranking based one-error (how many times the top-ranked label is not a relevant label, averaged by instance)
    • Precision: Example and bipartition based precision (averaged by instance)
    • RankingLoss: Example and ranking based ranking-loss (how many times a non-relevant label is ranked above a relevant one, evaluated for all label pairs and averaged by instance)
    • Recall: Example and bipartition based recall (averaged by instance)
    • SubsetAccuracy: Example and bipartition based subset accuracy (strict equality between predicted and real labelset, averaged by instance)
    • ROC: A roc object corresponding to the MicroAUC value. This object can be given as input to plot for plotting the ROC curve
    The AUC, MacroAUC, MicroAUC and ROC members will be NULL if the pROC package is not installed.
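
Individual measures can be read directly from the returned list by name. A minimal sketch, assuming a result object res as produced in the Examples below:

# Scalar measures are plain list members
res$HammingLoss
res$SubsetAccuracy
# AUC-related members may be NULL if pROC is not installed, so check first
if (!is.null(res$ROC)) plot(res$ROC)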

See Also

mldr

Examples

library(mldr)

# Get the true labels in emotions
predictions <- as.matrix(emotions$dataset[,emotions$labels$index])
# and introduce some noise (alternatively get the predictions from some classifier)
# perturb 100 random (instance, label) cells using matrix indexing
noised_cells <- cbind(sample(1:593, 100), sample(1:6, 100, replace = TRUE))
predictions[noised_cells] <- sample(0:1, 100, replace = TRUE)
# then evaluate predictive performance
res <- mldr_evaluate(emotions, predictions)
str(res)
plot(res$ROC, main = "ROC curve for emotions")
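
As a follow-up, not part of the original example, the same mldr object can be evaluated against continuous scores, where the threshold argument actually matters (the random scores below are purely illustrative, so the resulting measures carry no meaning beyond the demonstration):

# Replace the 0/1 predictions with random scores in [0,1], keeping shape and names
scores <- predictions
scores[] <- runif(length(scores))
# Evaluate with a stricter threshold for the bipartition-based measures
res_strict <- mldr_evaluate(emotions, scores, threshold = 0.7)
res_strict$HammingLoss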
