# caStats


##### Classification Accuracy Statistics.

Provides a set of statistics commonly used to convey the certainty of classifications based on test scores.

##### Usage
caStats(tp, tn, fp, fn)
##### Arguments
tp

The frequency or rate of true-positive classifications.

tn

The frequency or rate of true-negative classifications.

fp

The frequency or rate of false-positive classifications.

fn

The frequency or rate of false-negative classifications.
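
The four arguments above correspond to the cells of a 2x2 confusion matrix. As a hypothetical illustration (the vectors and labels below are invented for this sketch and are not part of the package), the counts can be tallied from paired true and predicted pass/fail classifications:

```r
# Invented example data: true status and test-based classification
# for six examinees.
truth     <- c("pass", "pass", "fail", "fail", "pass", "fail")
predicted <- c("pass", "fail", "fail", "pass", "pass", "fail")

tp <- sum(truth == "pass" & predicted == "pass")  # true positives
tn <- sum(truth == "fail" & predicted == "fail")  # true negatives
fp <- sum(truth == "fail" & predicted == "pass")  # false positives
fn <- sum(truth == "pass" & predicted == "fail")  # false negatives
```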

##### Value

A list of diagnostic performance statistics computed from the true/false positive/negative counts: sensitivity, specificity, positive likelihood ratio (LR.pos), negative likelihood ratio (LR.neg), positive predictive value (PPV), negative predictive value (NPV), Youden's J (Youden.J), and accuracy (Accuracy).
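
These statistics follow the standard diagnostic-accuracy definitions. The sketch below (a stand-in illustration, not the package's implementation; `ca_sketch` and the example counts are invented, and caStats() may differ in naming or edge-case handling) shows how each value derives from the four counts:

```r
# Standard diagnostic-accuracy formulas applied to confusion-matrix counts.
ca_sketch <- function(tp, tn, fp, fn) {
  sensitivity <- tp / (tp + fn)  # proportion of actual positives detected
  specificity <- tn / (tn + fp)  # proportion of actual negatives detected
  list(
    Sensitivity = sensitivity,
    Specificity = specificity,
    LR.pos      = sensitivity / (1 - specificity),
    LR.neg      = (1 - sensitivity) / specificity,
    PPV         = tp / (tp + fp),
    NPV         = tn / (tn + fn),
    Youden.J    = sensitivity + specificity - 1,
    Accuracy    = (tp + tn) / (tp + tn + fp + fn)
  )
}

# Invented counts for illustration.
ca_sketch(tp = 40, tn = 45, fp = 5, fn = 10)
```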

##### References

Glas et al. (2003). The Diagnostic Odds Ratio: A Single Indicator of Test Performance. Journal of Clinical Epidemiology, 56(11), 1129-1135. doi: 10.1016/S0895-4356(03)00177-X

##### Examples
# Generate some fictional data. Say, 100 individuals take a test with a
# maximum score of 100 and a minimum score of 0.
set.seed(1234)
testdata <- rbinom(100, 100, rBeta.4P(100, .25, .75, 5, 3))
hist(testdata, xlim = c(0, 100))

# Suppose the cutoff value for attaining a pass is 50 items correct, and
# that the reliability of this test was estimated to 0.7. First, compute the
# estimated confusion matrix using LL.CA():
cmat <- LL.CA(x = testdata, reliability = .7, cut = 50, min = 0,
  max = 100)$confusionmatrix

# To estimate and retrieve diagnostic performance statistics using caStats(),
# feed it the appropriate entries of the confusion matrix.
caStats(tp = cmat["True", "Fail"], tn = cmat["True", "Pass"],
  fp = cmat["False", "Fail"], fn = cmat["False", "Pass"])

Documentation reproduced from package betafunctions, version 1.2.2, License: CC0
