PredPsych (version 0.3)

overallConfusionMetrics: Confusion Matrix metrics for Cross-validation

Description

A simple function to generate confusion matrix metrics for cross-validated analyses

Usage

overallConfusionMetrics(confusionMat)

Arguments

confusionMat

(confusion matrix or a list of confusion matrices) Input confusion matrix generated by the confusionMatrix function from the caret library

Value

A list with metrics as generated by the confusionMatrix function in the caret library.

Details

A function to output confusion matrices and related metrics of sensitivity, specificity, precision, recall, and others for cross-validated analyses. There are multiple documented ways of calculating confusion matrix metrics for cross-validation (see Forman and Scholz 2010 for details on the F1 score). The current procedure pools the results of all folds into a single, larger confusion matrix and calculates sensitivity, specificity, etc. on this matrix (instead of averaging the sensitivities and specificities obtained in each fold).

The intuition from Kelleher, Mac Namee and D'Arcy (2015) is:

"When we have a small dataset (introducing the possibility of a lucky split) measuring aggregate performance using a set of models gives a better estimate of post-deployment performance than measuring performance using a single model."

In addition, Forman and Scholz (2010) show in simulation studies that F1 scores calculated this way are less biased.
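To make the distinction concrete, the sketch below pools three hypothetical fold-level tables into one aggregate matrix and, for contrast, averages a per-fold sensitivity. The fold counts are invented for illustration and are not output of PredPsych:

library(caret)

# Three hypothetical 2x2 tables from a 3-fold cross-validation (invented counts)
fold_tables <- list(
  matrix(c(40, 10, 25, 175), ncol = 2),
  matrix(c(35,  9, 30, 176), ncol = 2),
  matrix(c(35, 10, 25, 180), ncol = 2)
)

# Pooling: sum the fold tables, then compute all metrics once on the pooled counts
pooled <- Reduce(`+`, fold_tables)
dimnames(pooled) <- list(Prediction = c(1, 2), Reference = c(1, 2))
confusionMatrix(as.table(pooled))

# Averaging, for contrast: sensitivity computed per fold, then averaged
mean(sapply(fold_tables, function(m) m[1, 1] / sum(m[, 1])))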

References

Kelleher, J. D., Mac Namee, B. & D'Arcy, A. Fundamentals of Machine Learning for Predictive Data Analytics. (The MIT Press, 2015). Section 8.4.1.2.

Elkan, C. Evaluating Classifiers. (2012). https://pdfs.semanticscholar.org/2bdc/61752a02783aa0e69e92fe6f9b449916a095.pdf, p. 4.

Forman, G. & Scholz, M. Apples-to-apples in cross-validation studies. ACM SIGKDD Explor. Newsl. 12, 49 (2010).

Examples

# NOT RUN {
# Construct a minimal confusion matrix object
confusionMat <- list(table = matrix(c(110, 29, 80, 531), ncol = 2,
  dimnames = list(Prediction = c(1, 2), Reference = c(1, 2))))
overallConfusionMetrics(confusionMat)

# Output:
#
# Confusion Matrix and Statistics
#           Reference
# Prediction   1   2
#          1 110  80
#          2  29 531
# Accuracy : 0.8547          
# 95% CI : (0.8274, 0.8791)
# No Information Rate : 0.8147          
# P-Value [Acc > NIR] : 0.002214        
# 
# Kappa : 0.5785          
# Mcnemar's Test P-Value : 1.675e-06       
# 
# Sensitivity : 0.7914          
# Specificity : 0.8691          
# Pos Pred Value : 0.5789          
# Neg Pred Value : 0.9482          
# Prevalence : 0.1853          
# Detection Rate : 0.1467          
# Detection Prevalence : 0.2533          
# Balanced Accuracy : 0.8302          
# 
# 'Positive' Class : 1         
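#
# Hand check of two printed values from the table above (added for
# illustration; the arithmetic follows the standard definitions):
# Sensitivity    = 110 / (110 + 29) = 110 / 139 ≈ 0.7914
# Pos Pred Value = 110 / (110 + 80) = 110 / 190 ≈ 0.5789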

# Alternative (realistic) examples
Results <- classifyFun(Data = KinData, classCol = 1,
  selectedCols = c(1, 2, 12, 22, 32, 42, 52, 62, 72, 82, 92, 102, 112),
  cvType = "folds", extendedResults = TRUE)

overallConfusionMetrics(Results$ConfMatrix)
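# The argument may also be a list of confusion matrices (one per fold).
# The list layout below simply mirrors the single-matrix example above;
# the structure and counts are illustrative assumptions, not package output.
foldMats <- list(
  list(table = matrix(c(55, 15, 40, 265), ncol = 2,
    dimnames = list(Prediction = c(1, 2), Reference = c(1, 2)))),
  list(table = matrix(c(55, 14, 40, 266), ncol = 2,
    dimnames = list(Prediction = c(1, 2), Reference = c(1, 2)))))
overallConfusionMetrics(foldMats)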



# }
