Calculates a set of ROC performance measures based on the confusion matrix of a classification prediction.
tpr   True positive rate (Sensitivity, Recall)
fpr   False positive rate (Fall-out)
fnr   False negative rate (Miss rate)
tnr   True negative rate (Specificity)
ppv   Positive predictive value (Precision)
npv   Negative predictive value
fomr  False omission rate
fdr   False discovery rate
lrp   Positive likelihood ratio (LR+)
lrm   Negative likelihood ratio (LR-)
acc   Accuracy
dor   Diagnostic odds ratio
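All of these measures are simple ratios of the four confusion-matrix cells. As a language-agnostic sketch (not part of the mlr3 API; the function name and return shape are illustrative), they can be computed from absolute counts like this:

```python
def roc_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive the ROC measures listed above from a 2 x 2 confusion
    matrix of absolute frequencies (tp, fp, fn, tn)."""
    tpr = tp / (tp + fn)        # true positive rate (sensitivity, recall)
    fpr = fp / (fp + tn)        # false positive rate (fall-out)
    fnr = fn / (tp + fn)        # false negative rate (miss rate)
    tnr = tn / (fp + tn)        # true negative rate (specificity)
    ppv = tp / (tp + fp)        # positive predictive value (precision)
    npv = tn / (tn + fn)        # negative predictive value
    fomr = fn / (fn + tn)       # false omission rate
    fdr = fp / (tp + fp)        # false discovery rate
    lrp = tpr / fpr             # positive likelihood ratio (LR+)
    lrm = fnr / tnr             # negative likelihood ratio (LR-)
    acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
    dor = lrp / lrm             # diagnostic odds ratio
    return {"tpr": tpr, "fpr": fpr, "fnr": fnr, "tnr": tnr,
            "ppv": ppv, "npv": npv, "fomr": fomr, "fdr": fdr,
            "lrp": lrp, "lrm": lrm, "acc": acc, "dor": dor}
```

Note that the diagnostic odds ratio simplifies to (tp * tn) / (fp * fn), which is a quick way to sanity-check a result.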
score_roc_measures(pred)
list()

A list with two elements:

confusion_matrix: the 2 x 2 confusion matrix of absolute frequencies.
measures: a named list of the measures described above.
pred (PredictionClassif) The prediction object to score.
library(mlr3)
library(mlr3viz)

learner = lrn("classif.rpart", predict_type = "prob")
task = tsk("pima")
# hold out 30% of the rows for prediction
splits = partition(task, ratio = 0.7)
learner$train(task, splits$train)
pred = learner$predict(task, splits$test)
score_roc_measures(pred)