yardstick (version 0.0.1)

recall: Calculate recall, precision and F values

Description

These functions calculate the recall, precision or F values of a measurement system for finding/retrieving relevant documents compared to reference results (the truth regarding relevance). The measurement and "truth" data must have the same two possible outcomes, and one of the outcomes must be thought of as a "relevant" result.

Usage

recall(data, ...)

# S3 method for data.frame
recall(data, truth, estimate, na.rm = TRUE, ...)

# S3 method for table
recall(data, ...)

precision(data, ...)

# S3 method for data.frame
precision(data, truth, estimate, na.rm = TRUE, ...)

# S3 method for table
precision(data, ...)

f_meas(data, ...)

# S3 method for data.frame
f_meas(data, truth, estimate, beta = 1, na.rm = TRUE, ...)

# S3 method for table
f_meas(data, beta = 1, ...)

Arguments

data

For the data frame methods, a data frame containing the true and predicted class columns (see truth and estimate). For the table or matrix methods, a table or matrix object, respectively, where the true class results should be in the columns of the table.

...

Not currently used.

truth

The column identifier for the true class results (that is a factor). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names or column positions).

estimate

The column identifier for the predicted class results (that is also a factor). As with truth, this can be specified in different ways, but the primary method is to use an unquoted variable name.

na.rm

A logical value indicating whether NA values should be stripped before the computation proceeds.

beta

A numeric value used to weight precision and recall. A value of 1 is traditionally used and corresponds to the harmonic mean of the two values, but other values weight recall beta times more than precision.

Details

The recall (aka sensitivity) is defined as the proportion of relevant results returned out of the number of samples which were actually relevant. When there are no relevant results, recall is not defined and a value of NA is returned.

The precision is the percentage of truly relevant results out of the total number of results predicted to be relevant, and characterizes the "purity in retrieval performance" (Buckland and Gey, 1994).

The measure "F" is a combination of precision and recall (see below).

There is no common convention on which factor level should automatically be considered the "relevant" result. In yardstick, the default is to use the first level. To change this, the global option yardstick.event_first (set to TRUE when the package is loaded) can be changed to FALSE so that the last level of the factor is considered the level of interest.
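For example, a minimal sketch of switching the relevant level via this option (the metric calls themselves are as in the Examples below):

options(yardstick.event_first = FALSE)  # treat the last factor level as the relevant result
recall(two_class_example, truth = truth, estimate = predicted)
options(yardstick.event_first = TRUE)   # restore the default (first level)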

Suppose a 2x2 table with notation

              Reference
Predicted     Relevant     Irrelevant
Relevant      A            B
Irrelevant    C            D

The formulas used here are: $$recall = A/(A+C)$$ $$precision = A/(A+B)$$ $$F_{\beta} = (1+\beta^2) \cdot precision \cdot recall/((\beta^2 \cdot precision)+recall)$$
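As a quick check of these formulas, a minimal sketch with hypothetical cell counts (the values of A, B, C and D below are made up for illustration):

A <- 40; B <- 10; C <- 5; D <- 45         # hypothetical 2x2 cell counts
recall_val    <- A / (A + C)              # 40/45 = 0.889
precision_val <- A / (A + B)              # 40/50 = 0.800
beta <- 1                                 # harmonic mean of precision and recall
f_val <- (1 + beta^2) * precision_val * recall_val /
  ((beta^2 * precision_val) + recall_val) # 0.842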

See the references for discussions of the statistics.

If more than one statistic is required, it is more computationally efficient to create the confusion matrix using conf_mat() and applying the corresponding summary method (summary.conf_mat()) to get the values at once.
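For instance, a brief sketch using the two_class_example data set from the Examples below (the exact set of statistics returned by summary() may vary by version):

library(yardstick)
data("two_class_example")

cm <- conf_mat(two_class_example, truth = truth, estimate = predicted)
summary(cm)   # several classification statistics computed from one confusion matrix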

References

Buckland, M., & Gey, F. (1994). The relationship between Recall and Precision. Journal of the American Society for Information Science, 45(1), 12-19.

Powers, D. (2007). Evaluation: From Precision, Recall and F Factor to ROC, Informedness, Markedness and Correlation. Technical Report SIE-07-001, Flinders University.

See Also

conf_mat(), summary.conf_mat(), sens(), mcc()

Examples

data("two_class_example")

# Different methods for calling the functions:
precision(two_class_example, truth = truth, estimate = predicted)

recall(two_class_example, truth = "truth", estimate = "predicted")

truth_var <- quote(truth)
f_meas(two_class_example, !! truth_var, predicted)
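
# The beta argument weights recall relative to precision (for example,
# beta = 2 counts recall twice as much as precision):
f_meas(two_class_example, truth, predicted, beta = 2)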