SLmetrics (version 0.3-4)

huberloss.numeric: Huber Loss

Description

A generic S3 function to compute the Huber loss score for a regression model. huberloss() dispatches to class-specific S3 methods and performs no input validation. If you supply NA values or vectors of unequal length (e.g. length(actual) != length(predicted)), the underlying C++ code may trigger undefined behavior and crash your R session.
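The help page does not print the formula, so the following is a minimal reference sketch of the Huber loss in plain R, not the package's C++ implementation: quadratic for residuals within delta, linear beyond it. Averaging over observations is an assumption here (it is the common convention).

```r
## Reference sketch of the Huber loss (an assumption about the exact
## aggregation; the mean over observations is used here):
##   0.5 * e^2                     if |e| <= delta
##   delta * (|e| - 0.5 * delta)   otherwise
huber_ref <- function(actual, predicted, delta = 1) {
  residual <- actual - predicted
  loss <- ifelse(
    abs(residual) <= delta,
    0.5 * residual^2,                      # quadratic region
    delta * (abs(residual) - 0.5 * delta)  # linear region
  )
  mean(loss)
}

## residuals 0 and -3 with delta = 1: (0 + 2.5) / 2
huber_ref(c(0, 0), c(0, 3))  # 1.25
```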

Defensive measures

Because huberloss() operates on raw pointers, pointer-level faults (e.g. from NA or mismatched length) occur before any R-level error handling. Wrapping calls in try() or tryCatch() will not prevent R-session crashes.

To guard against this, wrap huberloss() in a "safe" validator that checks for NA values and matching length, for example:

safe_huberloss <- function(x, y, ...) {
  stopifnot(
    !anyNA(x), !anyNA(y),
    length(x) == length(y)
  )
  huberloss(x, y, ...)
}

Apply the same pattern to any custom metric functions to ensure input sanity before calling the underlying C++ code.
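As a usage sketch, the wrapper above turns invalid input into an ordinary, catchable R error before the C++ code is ever reached (the fully qualified SLmetrics::huberloss() call is assumed to be available):

```r
## Defensive wrapper from above, qualified with the package namespace
safe_huberloss <- function(x, y, ...) {
  stopifnot(!anyNA(x), !anyNA(y), length(x) == length(y))
  SLmetrics::huberloss(x, y, ...)
}

## stopifnot() fails before the C++ code runs, so try() can catch it
## and the R session is never at risk:
res <- try(safe_huberloss(c(1, NA), c(1, 2)), silent = TRUE)
inherits(res, "try-error")  # TRUE
```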

Usage

# S3 method for numeric
huberloss(actual, predicted, delta = 1, ...)

Value

A <double> value

Arguments

actual, predicted

A pair of <double> vectors of length \(n\).

delta

A <double>-vector of length \(1\) (default: \(1\)). The threshold at which the loss switches from the quadratic to the linear function (see calculation).
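To illustrate the role of the threshold, the pointwise sketch below uses the standard Huber definition (an assumption mirroring the common formula, not the package's internal code): a residual of 3 is penalized linearly when delta = 1, but stays in the quadratic region when delta = 5.

```r
## Pointwise Huber loss (standard definition, assumed here):
huber_point <- function(residual, delta) {
  if (abs(residual) <= delta) {
    0.5 * residual^2                      # quadratic region
  } else {
    delta * (abs(residual) - 0.5 * delta) # linear region
  }
}

huber_point(3, delta = 1)  # 1 * (3 - 0.5) = 2.5
huber_point(3, delta = 5)  # 0.5 * 3^2    = 4.5
```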

...

Arguments passed into other methods

References

James, Gareth, et al. An introduction to statistical learning. Vol. 112. No. 1. New York: Springer, 2013.

Hastie, Trevor, et al. "The elements of statistical learning: data mining, inference, and prediction." (2009).

Virtanen, Pauli, et al. "SciPy 1.0: fundamental algorithms for scientific computing in Python." Nature methods 17.3 (2020): 261-272.

Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." the Journal of machine Learning research 12 (2011): 2825-2830.

See Also

Other Regression: ccc(), deviance.gamma(), deviance.poisson(), deviance.tweedie(), gmse(), maape(), mae(), mape(), mpe(), mse(), pinball(), rae(), rmse(), rmsle(), rrmse(), rrse(), rsq(), smape()

Other Supervised Learning: accuracy(), auc.pr.curve(), auc.roc.curve(), baccuracy(), brier.score(), ccc(), ckappa(), cmatrix(), cross.entropy(), deviance.gamma(), deviance.poisson(), deviance.tweedie(), dor(), fbeta(), fdr(), fer(), fmi(), fpr(), gmse(), hammingloss(), jaccard(), logloss(), maape(), mae(), mape(), mcc(), mpe(), mse(), nlr(), npv(), pinball(), plr(), pr.curve(), precision(), rae(), recall(), relative.entropy(), rmse(), rmsle(), roc.curve(), rrmse(), rrse(), rsq(), shannon.entropy(), smape(), specificity(), zerooneloss()

Examples

## Generate actual
## and predicted values
actual_values    <- c(1.3, 0.4, 1.2, 1.4, 1.9, 1.0, 1.2)
predicted_values <- c(0.7, 0.5, 1.1, 1.2, 1.8, 1.1, 0.2)

## Evaluate performance
SLmetrics::huberloss(
   actual    = actual_values, 
   predicted = predicted_values
)
