mlr3measures (version 0.5.0)

fpr: False Positive Rate

Description

Measure to compare true observed labels with predicted labels in binary classification tasks.

Usage

fpr(truth, response, positive, na_value = NaN, ...)

Value

Performance value as numeric(1).

Arguments

truth

(factor())
True (observed) labels. Must have exactly the same two levels and the same length as response.

response

(factor())
Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive

(character(1))
Name of the positive class.

na_value

(numeric(1))
Value that should be returned if the measure is not defined for the input (see Details). Default is NaN.

...

(any)
Additional arguments. Currently ignored.

Meta Information

  • Type: "binary"

  • Range: \([0, 1]\)

  • Minimize: TRUE

  • Required prediction: response

Details

The False Positive Rate is defined as $$ \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}. $$ It is also known as the fall-out or the probability of false alarm.

This measure is undefined if FP + TN = 0.
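
To make the definition concrete, the following sketch tallies FP and TN by hand and compares the result with fpr(); it then triggers the undefined case to show the na_value fallback. The labels and counts are illustrative only and are not taken from the package documentation.

library(mlr3measures)

# Hand-computed check of the definition (illustrative data)
truth    = factor(c("a", "a", "b", "b", "b"), levels = c("a", "b"))
response = factor(c("a", "b", "a", "b", "b"), levels = c("a", "b"))

# With positive class "a": FP = predicted "a" while truth is "b"; TN = both "b"
fp = sum(response == "a" & truth == "b")  # 1
tn = sum(response == "b" & truth == "b")  # 2
fp / (fp + tn)                            # 0.333...
fpr(truth, response, positive = "a")      # same value

# Undefined case: truth contains only the positive class, so FP + TN = 0
# and fpr() returns na_value (NaN by default)
truth_pos = factor(c("a", "a", "a"), levels = c("a", "b"))
resp_pos  = factor(c("a", "b", "a"), levels = c("a", "b"))
fpr(truth_pos, resp_pos, positive = "a", na_value = NA_real_)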

References

https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram

See Also

Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fnr(), fn(), fomr(), fp(), mcc(), npv(), ppv(), prauc(), tnr(), tn(), tpr(), tp()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fpr(truth, response, positive = "a")
