A generic S3 function to compute the false omission rate score for a classification model. The function dispatches to the S3 methods of fer() and performs no input validation. If you supply NA values or vectors of unequal length (e.g. length(x) != length(y)), the underlying C++ code may trigger undefined behavior and crash your R session.
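As a point of reference (this definition is stated here for clarity rather than taken from the package source), the false omission rate is conventionally computed from the confusion-matrix counts as \(\mathrm{FOR} = \frac{FN}{FN + TN} = 1 - \mathrm{NPV}\), i.e. the proportion of negative predictions that are in fact positive.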
Because fer() operates on raw pointers, pointer-level faults (e.g. from NA values or mismatched lengths) occur before any R-level error handling. Wrapping calls in try() or tryCatch() will not prevent R session crashes.
To guard against this, wrap fer() in a "safe" validator that checks for NA values and matching lengths, for example:
safe_fer <- function(x, y, ...) {
  stopifnot(
    !anyNA(x), !anyNA(y),
    length(x) == length(y)
  )
  fer(x, y, ...)
}
Apply the same pattern to any custom metric functions to ensure input sanity before calling the underlying C++ code.
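As an illustration of that pattern, the sketch below wraps an arbitrary metric in the same checks; safe_metric is a hypothetical helper, and it assumes the wrapped metric takes (actual, predicted) vectors as its first two arguments, like fer():

## Hypothetical guard: validate inputs, then delegate
## to any metric with a fer()-like (x, y, ...) signature
safe_metric <- function(metric, x, y, ...) {
  stopifnot(
    is.function(metric),
    !anyNA(x), !anyNA(y),
    length(x) == length(y)
  )
  metric(x, y, ...)
}

## e.g. safe_metric(SLmetrics::fer, actual, predicted)
## or   safe_metric(SLmetrics::npv, actual, predicted)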
For multiple performance evaluations of a classification model, first compute the confusion matrix once via cmatrix(). All other performance metrics can then be derived from this one object via S3 dispatching:
## compute confusion matrix
confusion_matrix <- cmatrix(actual, predicted)

## evaluate false omission rate
## via S3 dispatching
fer(confusion_matrix)

## additional performance metrics
## below
The fer.factor() method calls cmatrix() internally, so explicitly invoking fer.cmatrix() yourself avoids duplicate computation, yielding significant speed and memory efficiency gains when you need multiple evaluation metrics.
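For instance, the same confusion-matrix object can be reused across the other classification metrics listed under See Also; the sketch below assumes each of those generics dispatches on a cmatrix object in the same way as fer():

## reuse the confusion matrix computed above
## for several metrics without recomputing it
accuracy(confusion_matrix)
precision(confusion_matrix)
recall(confusion_matrix)
specificity(confusion_matrix)
npv(confusion_matrix)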
## Generic S3 method
## for False Omission Rate
fer(...)

## Generic S3 method
## for weighted False Omission Rate
weighted.fer(...)
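A minimal sketch of the weighted variant, assuming weighted.fer() accepts the same actual and predicted factors as fer() plus the sample-weight vector w documented below:

## small illustrative inputs
actual    <- factor(c("Kebab", "Falafel", "Kebab", "Kebab"), levels = c("Kebab", "Falafel"))
predicted <- factor(c("Kebab", "Kebab", "Falafel", "Kebab"), levels = c("Kebab", "Falafel"))

## per-observation sample weights
w <- c(1, 2, 1, 0.5)

## weighted false omission rate
## (assumes a weighted.fer(actual, predicted, w) signature)
SLmetrics::weighted.fer(
  actual    = actual,
  predicted = predicted,
  w         = w
)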
If estimator is given as
Arguments passed on to fer.factor, weighted.fer.factor, fer.cmatrix:

estimator

na.rm
A <logical> value of length \(1\) (default: TRUE). If TRUE, NA values are removed from the computation. This argument is only relevant when micro != NULL.

When na.rm = TRUE, the computation corresponds to sum(c(1, 2, NA), na.rm = TRUE) / length(na.omit(c(1, 2, NA))).

When na.rm = FALSE, the computation corresponds to sum(c(1, 2, NA), na.rm = TRUE) / length(c(1, 2, NA)).
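To make the difference concrete, evaluating the two expressions above directly in base R shows how only the denominator changes:

## na.rm = TRUE: the NA is excluded from the denominator
sum(c(1, 2, NA), na.rm = TRUE) / length(na.omit(c(1, 2, NA)))  # 3 / 2 = 1.5

## na.rm = FALSE: the NA still counts toward the denominator
sum(c(1, 2, NA), na.rm = TRUE) / length(c(1, 2, NA))           # 3 / 3 = 1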
w
A <double> vector of sample weights.
x

A confusion matrix created by cmatrix().
James, Gareth, et al. An Introduction to Statistical Learning. Vol. 112, No. 1. New York: Springer, 2013.

Hastie, Trevor. "The Elements of Statistical Learning: Data Mining, Inference, and Prediction." (2009).

Pedregosa, Fabian, et al. "Scikit-learn: Machine Learning in Python." The Journal of Machine Learning Research 12 (2011): 2825-2830.
Other Classification:
accuracy(), auc.pr.curve(), auc.roc.curve(), baccuracy(), brier.score(), ckappa(), cmatrix(), cross.entropy(), dor(), fbeta(), fdr(), fmi(), fpr(), hammingloss(), jaccard(), logloss(), mcc(), nlr(), npv(), plr(), pr.curve(), precision(), recall(), relative.entropy(), roc.curve(), shannon.entropy(), specificity(), zerooneloss()
Other Supervised Learning:
accuracy(), auc.pr.curve(), auc.roc.curve(), baccuracy(), brier.score(), ccc(), ckappa(), cmatrix(), cross.entropy(), deviance.gamma(), deviance.poisson(), deviance.tweedie(), dor(), fbeta(), fdr(), fmi(), fpr(), gmse(), hammingloss(), huberloss(), jaccard(), logloss(), maape(), mae(), mape(), mcc(), mpe(), mse(), nlr(), npv(), pinball(), plr(), pr.curve(), precision(), rae(), recall(), relative.entropy(), rmse(), rmsle(), roc.curve(), rrmse(), rrse(), rsq(), shannon.entropy(), smape(), specificity(), zerooneloss()
## Classes and seed
set.seed(1903)
classes <- c("Kebab", "Falafel")

## Generate actual and predicted classes
actual_classes <- factor(
  x = sample(x = classes, size = 1e3, replace = TRUE),
  levels = c("Kebab", "Falafel")
)

predicted_classes <- factor(
  x = sample(x = classes, size = 1e3, replace = TRUE),
  levels = c("Kebab", "Falafel")
)

## Evaluate performance
SLmetrics::fer(
  actual    = actual_classes,
  predicted = predicted_classes
)