A generic S3 function to compute the Tweedie deviance score for a regression model. This function dispatches to the S3 methods of deviance.tweedie() and performs no input validation: if you supply NA values or vectors of unequal length (e.g. length(x) != length(y)), the underlying C++ code may trigger undefined behavior and crash your R session.
Because deviance.tweedie() operates on raw pointers, pointer-level faults (e.g. from NA or mismatched length) occur before any R-level error handling. Wrapping calls in try() or tryCatch() will not prevent R-session crashes.
To guard against this, wrap deviance.tweedie() in a "safe" validator that checks for NA values and matching lengths, for example:
safe_deviance.tweedie <- function(x, y, ...) {
  ## validate inputs before they reach the C++ backend
  stopifnot(
    !anyNA(x), !anyNA(y),
    length(x) == length(y)
  )
  deviance.tweedie(x, y, ...)
}
Apply the same pattern to any custom metric functions to ensure input sanity before calling the underlying C++ code.
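For instance, with the wrapper above, invalid input now fails with an ordinary R error before the C++ code is ever reached (hypothetical values for illustration):

## x contains an NA value, so the validator aborts the call
x <- c(1.3, NA, 1.2)
y <- c(0.7, 0.5, 1.1)

try(safe_deviance.tweedie(x, y))
#> Error in safe_deviance.tweedie(x, y) : !anyNA(x) is not TRUE

Unlike the pointer-level fault it prevents, this error can be handled by try() or tryCatch().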
# S3 method for tweedie.numeric
deviance(actual, predicted, power = 2, ...)

Arguments:

actual: A <double> vector of actual (observed) values.

predicted: A <double> vector of predicted values.
power: A <double> value, default = 2. Tweedie power parameter. Either power <= 0 or power >= 1.
The higher the power, the less weight is given to extreme deviations between actual and predicted values. The supported cases are listed below; a formula sketch follows the argument list.
power < 0: Extreme stable distribution. Requires: predicted > 0.
power = 0: Normal distribution, output corresponds to mse(), actual and predicted can be any real numbers.
power = 1: Poisson distribution (deviance.poisson()). Requires: actual >= 0 and predicted > 0.
1 < power < 2: Compound Poisson distribution. Requires: actual >= 0 and predicted > 0.
power = 2: Gamma distribution (deviance.gamma()). Requires: actual > 0 and predicted > 0.
power = 3: Inverse Gaussian distribution. Requires: actual > 0 and predicted > 0.
otherwise: Positive stable distribution. Requires: actual > 0 and predicted > 0.
...: Arguments passed into other methods.
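For reference, the standard Tweedie unit deviance underlying these cases (as used, e.g., by scikit-learn's mean_tweedie_deviance; the package's exact C++ implementation may differ in details) is

$$
d_p(y, \hat{y}) =
\begin{cases}
(y - \hat{y})^2 & p = 0 \\
2\left(y \log\frac{y}{\hat{y}} - y + \hat{y}\right) & p = 1 \\
2\left(\log\frac{\hat{y}}{y} + \frac{y}{\hat{y}} - 1\right) & p = 2 \\
2\left(\dfrac{y^{2-p}}{(1-p)(2-p)} - \dfrac{y\,\hat{y}^{1-p}}{1-p} + \dfrac{\hat{y}^{2-p}}{2-p}\right) & \text{otherwise,}
\end{cases}
$$

with the reported score being the mean \( \frac{1}{n} \sum_{i=1}^{n} d_p(y_i, \hat{y}_i) \) over all observations. A minimal pure-R sketch of the general case (power outside {0, 1, 2}), illustrative only and not the package's implementation:

tweedie_deviance_r <- function(actual, predicted, power) {
  ## unit deviance for power outside {0, 1, 2}
  unit <- 2 * (
    actual^(2 - power) / ((1 - power) * (2 - power)) -
      actual * predicted^(1 - power) / (1 - power) +
      predicted^(2 - power) / (2 - power)
  )
  ## the score is the mean unit deviance
  mean(unit)
}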
References:

James, Gareth, et al. An Introduction to Statistical Learning. Vol. 112. New York: Springer, 2013.

Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.

Virtanen, Pauli, et al. "SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python." Nature Methods 17.3 (2020): 261-272.

Pedregosa, Fabian, et al. "Scikit-learn: Machine Learning in Python." Journal of Machine Learning Research 12 (2011): 2825-2830.
Other Regression:
ccc(),
deviance.gamma(),
deviance.poisson(),
gmse(),
huberloss(),
maape(),
mae(),
mape(),
mpe(),
mse(),
pinball(),
rae(),
rmse(),
rmsle(),
rrmse(),
rrse(),
rsq(),
smape()
Other Supervised Learning:
accuracy(),
auc.pr.curve(),
auc.roc.curve(),
baccuracy(),
brier.score(),
ccc(),
ckappa(),
cmatrix(),
cross.entropy(),
deviance.gamma(),
deviance.poisson(),
dor(),
fbeta(),
fdr(),
fer(),
fmi(),
fpr(),
gmse(),
hammingloss(),
huberloss(),
jaccard(),
logloss(),
maape(),
mae(),
mape(),
mcc(),
mpe(),
mse(),
nlr(),
npv(),
pinball(),
plr(),
pr.curve(),
precision(),
rae(),
recall(),
relative.entropy(),
rmse(),
rmsle(),
roc.curve(),
rrmse(),
rrse(),
rsq(),
shannon.entropy(),
smape(),
specificity(),
zerooneloss()
Examples:

## Generate actual and predicted values
actual_values <- c(1.3, 0.4, 1.2, 1.4, 1.9, 1.0, 1.2)
predicted_values <- c(0.7, 0.5, 1.1, 1.2, 1.8, 1.1, 0.2)
## Evaluate performance
SLmetrics::deviance.tweedie(
  actual    = actual_values,
  predicted = predicted_values
)
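As a quick sanity check, power = 0 should reproduce the mean squared error (this assumes mse() from the same package takes the same actual/predicted arguments, as suggested by the family listing above):

## power = 0 corresponds to the normal distribution,
## so the score should match mse()
SLmetrics::deviance.tweedie(
  actual    = actual_values,
  predicted = predicted_values,
  power     = 0
)

SLmetrics::mse(
  actual    = actual_values,
  predicted = predicted_values
)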