Measurements of categorical forecast accuracy have a long history
in weather forecasting. The standard approach involves making binary classifications
(detected/not-detected) of predicted and observed data and tabulating them in a
2x2 contingency table, also known as a confusion matrix.
- TP (true positive) 
- FP (false positive) (type I error) 
- FN (false negative) (type II error) 
- TN (true negative) 
- TPRate (true positive rate) = sensitivity = recall = TP / (TP + FN) 
- FPRate (false positive rate) = FP / (FP + TN) 
- FNRate (false negative rate) = FN / (TP + FN) 
- TNRate (true negative rate) = specificity = TN / (FP + TN) 
- accuracy = proportionCorrect = (TP + TN) / total 
- errorRate = 1 - accuracy = (FP + FN) / total 
- PPV (positive predictive value) = precision = TP / (TP + FP) 
- falseAlarmRatio = FDR (false discovery rate) = FP / (TP + FP) 
- NPV (negative predictive value) = TN / (TN + FN) 
- FOR (false omission rate) = FN / (TN + FN) 
- f1_score = (2 * TP) / (2 * TP + FP + FN) 
- detectionRate = TP / total 
- baseRate = prevalence = (TP + FN) / total 
- probForecastOccurrence = detectionPrevalence = (TP + FP) / total 
- balancedAccuracy = (TPRate + TNRate) / 2 
- expectedAccuracy = (((TP + FP) * (TP + FN) / total) + ((FP + TN) * (FN + TN) / total)) / total 
- heidkeSkill = kappa = (accuracy - expectedAccuracy) / (1 - expectedAccuracy) 
- bias = (TP + FP) / (TP + FN) 
- hitRate = TP / (TP + FN) 
- falseAlarmRate = FP / (FP + TN) 
- peirceSkill (Hanssen-Kuipers) = ((TP * TN) - (FP * FN)) / ((FP + TN) * (TP + FN)) 
- criticalSuccess = TP / (TP + FP + FN) 
- oddsRatioSkill = yulesQ = ((TP * TN) - (FP * FN)) / ((TP * TN) + (FP * FN))
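
The formulas above can be collected into a small helper that derives every score from the four contingency counts. A minimal Python sketch (the function name `confusion_metrics` is illustrative, not from any particular library; no guards against zero denominators are included):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive categorical verification scores from 2x2 contingency counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    # Expected accuracy of a random forecast with the same marginal totals,
    # used by the Heidke skill score (kappa).
    expected = ((tp + fp) * (tp + fn) / total
                + (fp + tn) * (fn + tn) / total) / total
    return {
        "hitRate": tp / (tp + fn),                # TPRate / recall / sensitivity
        "falseAlarmRate": fp / (fp + tn),         # FPRate
        "precision": tp / (tp + fp),              # PPV
        "falseAlarmRatio": fp / (tp + fp),        # FDR
        "accuracy": accuracy,
        "f1_score": 2 * tp / (2 * tp + fp + fn),
        "bias": (tp + fp) / (tp + fn),
        "heidkeSkill": (accuracy - expected) / (1 - expected),
        "peirceSkill": (tp * tn - fp * fn) / ((fp + tn) * (tp + fn)),
        "criticalSuccess": tp / (tp + fp + fn),
        "oddsRatioSkill": (tp * tn - fp * fn) / (tp * tn + fp * fn),
    }
```

For example, with TP=50, FP=10, FN=20, TN=120 the accuracy is 170/200 = 0.85 and the critical success index is 50/80 = 0.625.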