
rfUtilities (version 2.1-4)

accuracy: Accuracy

Description

Classification accuracy measures: percent correctly classified (PCC), Cohen's kappa, user's accuracy, and producer's accuracy

Usage

accuracy(x, y)

Arguments

x

A vector of predicted values, or a contingency table (an object of class table or matrix)

y

A vector of observed values; required only if x is not a contingency table
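
A minimal sketch of the two calling conventions; the rows = predicted, columns = observed orientation of the contingency table is an assumption here, not something this page specifies:

 library(rfUtilities)

 # Vector form: predicted values first, observed values second
 pred <- c("Pres", "Abs", "Pres", "Abs")
 obs  <- c("Pres", "Pres", "Pres", "Abs")
 accuracy(pred, obs)

 # Table form: a single contingency-table argument
 # (orientation assumed: rows = predicted, columns = observed)
 accuracy(table(pred, obs))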

Value

A list class object with the following components:

  • PCC Percent correctly classified (accuracy)

  • auc Area Under the ROC Curve

  • users.accuracy The user's accuracy

  • producers.accuracy The producer's accuracy

  • kappa Cohen's Kappa (chance corrected accuracy)

  • true.skill Hanssen-Kuiper skill score (aka true skill statistic)

  • sensitivity Sensitivity (aka recall)

  • specificity Specificity

  • plr Positive Likelihood Ratio

  • nlr Negative Likelihood Ratio

  • typeI.error Type I error (commission)

  • typeII.error Type II error (omission)

  • gini Gini entropy index

  • f.score F-score

  • gain Information gain (aka precision)

  • mcc Matthews correlation coefficient

  • confusion A confusion matrix
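
The components are returned as named list elements. A short sketch of pulling them out and checking kappa by hand; the component names follow the list above, and the manual check assumes confusion is a square predicted-by-observed count table:

 library(rfUtilities)
 observed  <- sample(c("Pres", "Abs"), 100, replace=TRUE)
 predicted <- sample(c("Pres", "Abs"), 100, replace=TRUE)
 acc <- accuracy(predicted, observed)

 acc$PCC        # percent correctly classified
 acc$kappa      # chance-corrected accuracy
 acc$confusion  # confusion matrix

 # kappa = (po - pe) / (1 - pe), where po is the observed agreement
 # and pe the agreement expected by chance from the table marginals
 cm <- acc$confusion
 po <- sum(diag(cm)) / sum(cm)
 pe <- sum(rowSums(cm) * colSums(cm)) / sum(cm)^2
 (po - pe) / (1 - pe)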

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1):37-46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin 70(4):213-220.

Powers, D.M.W. (2011) Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation. Journal of Machine Learning Technologies 2(1):37-63.

Examples

## Not run:
 # Two classes (vector)
 observed <- sample(c(rep("Pres", 50), rep("Abs", 50)), 100, replace=TRUE)

 # simulate predictions by randomly permuting the observed values
 accuracy(observed[sample(1:length(observed))], observed)

 # Two classes (contingency table)
 accuracy(cbind(c(15,11), c(2,123)))

 # Multiple classes
 accuracy(iris[sample(1:150),]$Species, iris$Species)
## End(Not run)