Function to compute the cross-validation error.
.BT_cv_errors(BT_cv_fit, cv.folds, folds)

Vector containing the cross-validation error at each boosting iteration.
a BTCVFit object.
a numeric corresponding to the number of folds.
a numeric vector containing the fold id assigned to each observation. Note that if these were not supplied by the user, they are generated at random based on the cv.folds input.
Gireg Willame gireg.willame@gmail.com
This package is inspired by the gbm3 package. For more details, see https://github.com/gbm-developers/gbm3/.
This function computes the global cross-validation error as a function of the boosting iteration. In other words, for each boosting iteration the reported measure is the average of the out-of-fold errors.
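The averaging described above can be sketched as follows. This is purely an illustration of the computation (in Python rather than R, and with simulated error curves standing in for the per-fold results held in a BTCVFit object); it is not the package's implementation.

```python
import numpy as np

# Hypothetical out-of-fold error curves: one row per fold, one column per
# boosting iteration. Simulated here as a decreasing curve plus noise.
rng = np.random.default_rng(0)
n_folds, n_iter = 3, 100
fold_errors = 1.0 / (1.0 + np.arange(1, n_iter + 1)) \
    + 0.01 * rng.random((n_folds, n_iter))

# Global cross-validation error: iteration by iteration, the average of
# the out-of-fold errors across the folds.
cv_errors = fold_errors.mean(axis=0)

# The optimal number of boosting iterations minimizes this curve.
best_iter = int(np.argmin(cv_errors)) + 1
```

The resulting `cv_errors` vector plays the role of this function's return value: one averaged error per boosting iteration, from which the best iteration can be selected.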
M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries I: GLMs and Extensions, Springer Actuarial.
M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries II: Tree-Based Methods and Extensions, Springer Actuarial.
M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries III: Neural Networks and Extensions, Springer Actuarial.
M. Denuit, D. Hainaut and J. Trufin (2022). Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link. Accepted for publication in Scandinavian Actuarial Journal.
M. Denuit, J. Huyghe and J. Trufin (2022). Boosting cost-complexity pruned trees on Tweedie responses: The ABT machine for insurance ratemaking. Paper submitted for publication.
M. Denuit, J. Trufin and T. Verdebout (2022). Boosting on the responses with Tweedie loss functions. Paper submitted for publication.
BT.