These functions calculate the sensitivity, specificity, or predictive values of a measurement system compared to a reference result (the truth or a gold standard). The measurement and "truth" data must have the same two possible outcomes, and one of the outcomes must be thought of as a "positive" result or the "event".
sens(data, ...)

# S3 method for data.frame
sens(data, truth, estimate, na.rm = TRUE, ...)

# S3 method for table
sens(data, ...)

# S3 method for matrix
sens(data, ...)

# S3 method for data.frame
spec(data, truth, estimate, na.rm = TRUE, ...)

ppv(data, ...)

# S3 method for table
ppv(data, prevalence = NULL, ...)

# S3 method for matrix
ppv(data, prevalence = NULL, ...)

npv(data, ...)

# S3 method for table
npv(data, prevalence = NULL, ...)

# S3 method for matrix
npv(data, prevalence = NULL, ...)
data: For the default functions, a factor containing the discrete measurements. For the table or matrix functions, a table or matrix object, respectively, where the true class results should be in the columns of the table (a sketch of the table interface appears after this argument list).

...: Not currently used.

truth: The column identifier for the true class results (that is, a factor). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names or column positions).

estimate: The column identifier for the predicted class results (that is also a factor). As with truth, this can be specified different ways, but the primary method is to use an unquoted variable name.

na.rm: A logical value indicating whether NA values should be stripped before the computation proceeds.

prevalence: A numeric value for the rate of the "positive" class of the data.
Each of these functions returns a number between 0 and 1 (or NA).
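As a sketch of the table interface (this assumes the yardstick package is attached and uses the two_class_example data from the examples below):

library(yardstick)
data("two_class_example")

# Predicted classes go in the rows, true classes in the columns
xtab <- table(two_class_example$predicted, two_class_example$truth)
sens(xtab)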
The sensitivity is defined as the proportion of positive results out of the number of samples which were actually positive. When there are no positive results, sensitivity is not defined and a value of NA is returned. Similarly, when there are no negative results, specificity is not defined and a value of NA is returned. Similar statements are true for predictive values.
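For instance, a sketch of that edge case with hypothetical factor levels "Event" and "No Event" and no truly positive samples (using the table method):

library(yardstick)

truth    <- factor(c("No Event", "No Event"), levels = c("Event", "No Event"))
estimate <- factor(c("Event", "No Event"),    levels = c("Event", "No Event"))

# No samples are actually positive, so the A / (A + C) ratio is 0 / 0 and,
# per the behavior described above, NA is returned
sens(table(estimate, truth))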
The positive predictive value is defined as the percent of predicted positives that are actually positive, while the negative predictive value is defined as the percent of predicted negatives that are actually negative.
There is no common convention on which factor level should automatically be considered the "event" or "positive" result. In yardstick, the default is to use the first level. To change this, a global option called yardstick.event_first is set to TRUE when the package is loaded. This can be changed to FALSE if the last level of the factor is considered the level of interest.
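For example, a minimal sketch that makes the second (last) factor level the level of interest for all yardstick metrics in the session:

# Treat the last factor level, rather than the first, as the "event"
options(yardstick.event_first = FALSE)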
Suppose a 2x2 table with notation:

                 Reference
Predicted        Event      No Event
Event            A          B
No Event         C          D
The formulas used here are:
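In terms of the cell counts A, B, C, and D above (a sketch of the standard definitions; PPV and NPV follow Bayes' rule using the prevalence):

Sensitivity = A / (A + C)
Specificity = D / (B + D)
Prevalence  = (A + C) / (A + B + C + D)
PPV = (Sensitivity * Prevalence) / ((Sensitivity * Prevalence) + ((1 - Specificity) * (1 - Prevalence)))
NPV = (Specificity * (1 - Prevalence)) / (((1 - Sensitivity) * Prevalence) + (Specificity * (1 - Prevalence)))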
See the references for discussions of the statistics.
If more than one statistic is required, it is more computationally efficient to create the confusion matrix using conf_mat() and apply the corresponding summary method (summary.conf_mat()) to get the values at once.
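For example, a minimal sketch using the two_class_example data set from the examples below:

library(yardstick)
data("two_class_example")

# Build the confusion matrix once, then summarize it to get sensitivity,
# specificity, ppv, npv, and other metrics together
cm <- conf_mat(two_class_example, truth = truth, estimate = predicted)
summary(cm)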
Altman, D.G., Bland, J.M. (1994) "Diagnostic tests 1: sensitivity and specificity," British Medical Journal, vol 308, 1552.
Altman, D.G., Bland, J.M. (1994) "Diagnostic tests 2: predictive values," British Medical Journal, vol 309, 102.
data("two_class_example")
# Given that a sample is Class 1,
# what is the probability that it is predicted as Class 1?
sens(two_class_example, truth = truth, estimate = predicted)
# Given that a sample is predicted to be Class 1,
# what is the probability that it truly is Class 1?
ppv(two_class_example, truth = truth, estimate = predicted)
# But what if we think that Class 1 only occurs 40% of the time?
ppv(two_class_example, truth, predicted, prevalence = 0.40)