
irboost (version 0.1-1.5)

irb.train: fit a robust predictive model with the iteratively reweighted boosting algorithm

Description

Fit a predictive model using iteratively reweighted convex optimization (IRCO), which minimizes robust loss functions in the CC-family (concave-convex). The convex optimization is carried out by the functional descent boosting algorithm in the R package xgboost. The iteratively reweighted boosting (IRBoost) algorithm reduces the weight of an observation that incurs a large loss; the weights also help identify outliers. Applications include robust generalized linear models and extensions, where the mean is related to the predictors by boosting, and robust accelerated failure time models. irb.train is an advanced interface for training an irboost model; the irboost function is a simpler wrapper for irb.train. See xgboost::xgb.train.
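To illustrate the idea (a schematic sketch only, not the irboost internals), the IRCO loop alternates between fitting a weighted boosting model and updating observation weights from the concave component. The helpers fit_weighted_boost and convex_loss below are hypothetical placeholders, and the weight rule exp(-z/s) is an assumed ccave-style concave derivative:

# Schematic IRCO loop (illustration only, not the irboost implementation)
irco_sketch <- function(data, label, s = 1, iter = 10) {
  w <- rep(1, length(label))              # start with unit weights
  for (i in seq_len(iter)) {
    fit <- fit_weighted_boost(data, label, weights = w)  # hypothetical helper
    z <- convex_loss(fit, data, label)    # per-observation convex losses (hypothetical)
    w <- exp(-z / s)                      # assumed ccave-style weight update
  }
  list(fit = fit, weight_update = w)
}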

Usage

irb.train(
  params = list(),
  data,
  z_init = NULL,
  cfun = "ccave",
  s = 1,
  delta = 0.1,
  iter = 10,
  nrounds = 100,
  del = 1e-10,
  trace = FALSE,
  ...
)

Value

An object with S3 class xgb.train with the following additional elements:

  • weight_update_log a matrix with nobs rows and iter columns containing the observation weights at each iteration of the IRCO algorithm

  • weight_update a vector of observation weights from the last IRCO iteration, which produces the final model fit

  • loss_log sum of the loss values of the composite function in each IRCO iteration. Note that some cfun choices require the objective to be non-negative, so care must be taken. For instance, with objective="reg:gamma", the loss value is defined by gamma-nloglik - (1+log(min(y))), where y=label. The second term is introduced so that the loss value is non-negative. In fact, gamma-nloglik = y/ypre + log(ypre) in xgboost::xgb.train, where ypre is the mean prediction value, and this can be negative. It can be derived that, for fixed y, the minimum value of gamma-nloglik is achieved at ypre=y, namely 1+log(y). Thus, among all label values, the minimum of gamma-nloglik is 1+log(min(y)).
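A quick numeric check of this derivation (illustration only; the label value 0.2 is arbitrary):

# gamma-nloglik = y/ypre + log(ypre) is minimized at ypre = y,
# where it equals 1 + log(y), which is negative whenever y < exp(-1)
gamma_nloglik <- function(ypre, y) y / ypre + log(ypre)
y <- 0.2
opt <- optimize(gamma_nloglik, interval = c(1e-6, 10), y = y)
opt$minimum    # approximately y = 0.2
opt$objective  # approximately 1 + log(0.2) = -0.609, i.e., negative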

Arguments

params

the list of parameters passed to xgboost::xgb.train, which takes the same argument. The list must include objective, a convex component in the CC-family (the second C, or convex down). It is the same as objective in xgboost::xgb.train. The following objective functions are currently implemented (a short example follows the list):

  • reg:squarederror Regression with squared loss.

  • binary:logitraw logistic regression for binary classification; predicts the linear predictor, not probabilities.

  • binary:hinge hinge loss for binary classification. This makes predictions of -1 or 1, rather than producing probabilities.

  • multi:softprob softmax loss function for multiclass problems. The result contains the predicted probability of each data point belonging to each class, say p_k, k=0, ..., nclass-1. Note, the label is coded in [0, ..., nclass-1]. The cross-entropy loss for the i-th observation is computed as -log(p_k) with k=label_i, i=1, ..., n.

  • count:poisson Poisson regression for count data; predicts the mean of the Poisson distribution.

  • reg:gamma gamma regression with log-link; predicts the mean of the gamma distribution. The implementation in xgboost::xgb.train takes a parameterization in the exponential family: see
    xgboost/src/metric/elementwise_metric.cu.
    In particular, there is a single dispersion parameter psi, which is set to 1. The implementation of the IRCO algorithm follows this parameterization. See Table 2.1 in McCullagh and Nelder, Generalized Linear Models, 2nd edition, Chapman & Hall, 1989.

  • reg:tweedie Tweedie regression with log-link. See also the parameter tweedie_variance_power in range (1, 2): a value close to 2 behaves like a gamma distribution, a value close to 1 like a Poisson distribution.

  • survival:aft Accelerated failure time model for censored survival time data. irb.train invokes irb.train_aft.
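For instance, a multiclass fit with multi:softprob might be set up as follows (a minimal sketch; the iris data and tuning values are illustrative, not recommendations):

# labels for multi:softprob must be coded 0, ..., nclass-1
x <- as.matrix(iris[, 1:4])
y <- as.integer(iris$Species) - 1
dtrain <- xgboost::xgb.DMatrix(data = x, label = y)
param <- list(objective = "multi:softprob", num_class = 3,
              max_depth = 2, eta = 0.3)
fit <- irb.train(params = param, data = dtrain, nrounds = 10,
                 cfun = "ccave", s = 1)
summary(fit$weight_update)  # robustness weights from the last IRCO iteration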

data

training dataset. irb.train accepts only an xgboost::xgb.DMatrix as input. irboost, in addition, accepts a matrix, a dgCMatrix, or the name of a local data file. See xgboost::xgb.train.

z_init

vector of length nobs with initial convex component values; must be non-negative. The default is the weights in data if provided, otherwise z_init is a vector of 1s.

cfun

concave component of the CC-family; can be "hcave", "acave", "bcave", "ccave", "dcave", "ecave", "gcave", "tcave". See Table 2 in https://arxiv.org/pdf/2010.02848.pdf.

s

tuning parameter of cfun. Requires s > 0, except that s can equal 0 for cfun="tcave". If s is too close to 0 for cfun="acave", "bcave", or "ccave", the computed weights can become 0 for all observations and crash the program, as illustrated below.
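Under the assumed ccave-style weight rule exp(-z/s) from the sketch in the Description (an assumption used for illustration only), a small s collapses all weights toward 0:

z <- c(0.5, 1, 2)   # example convex loss values
exp(-z / 1)         # s = 1: moderate downweighting
exp(-z / 0.01)      # s near 0: all weights numerically 0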

delta

a small positive number provided by the user, needed only if cfun="gcave" and 0 < s < 1

iter

number of iterations of the IRCO algorithm

nrounds

number of boosting iterations within each IRCO iteration

del

convergence criterion of the IRCO algorithm; unrelated to delta

trace

if TRUE, fitting progress is reported

...

other arguments passed to xgboost::xgb.train

Author

Zhu Wang
Maintainer: Zhu Wang zhuwang@gmail.com

References

Wang, Zhu (2021). Unified Robust Boosting. arXiv preprint, https://arxiv.org/abs/2101.07718

Examples

# \donttest{
# logistic boosting
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')

dtrain <- with(agaricus.train, xgboost::xgb.DMatrix(data, label = label))
dtest <- with(agaricus.test, xgboost::xgb.DMatrix(data, label = label))
watchlist <- list(train = dtrain, eval = dtest)

# A simple irb.train example:
param <- list(max_depth = 2, eta = 1, nthread = 2, 
objective = "binary:logitraw", eval_metric = "auc")
bst <- xgboost::xgb.train(params=param, data=dtrain, nrounds = 2, 
                          watchlist=watchlist, verbose=2)
bst <- irb.train(params=param, data=dtrain, nrounds = 2)
summary(bst$weight_update)
# a bug in xgboost::xgb.train
#bst <- irb.train(params=param, data=dtrain, nrounds = 2, 
#                 watchlist=watchlist, trace=TRUE, verbose=2) 

# time-to-event analysis
X <- matrix(1:5, ncol=1)
# Associate ranged labels with the data matrix.
# This example shows each kind of censored labels.
# uncensored  right  left  interval
y_lower = c(10,  15, -Inf, 30, 100)
y_upper = c(Inf, Inf,   20, 50, Inf)
dtrain <- xgboost::xgb.DMatrix(data=X, label_lower_bound=y_lower, 
                               label_upper_bound=y_upper)
param <- list(objective="survival:aft", aft_loss_distribution="normal", 
              aft_loss_distribution_scale=1, max_depth=3, min_child_weight=0)
watchlist <- list(train = dtrain)
bst <- xgboost::xgb.train(params=param, data=dtrain, nrounds=15, 
                          watchlist=watchlist)
predict(bst, dtrain)
bst_cc <- irb.train(params=param, data=dtrain, nrounds=15, cfun="hcave",
                    s=1.5, trace=TRUE, verbose=0)
bst_cc$weight_update
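
# Inspect the weight path across IRCO iterations (illustration only):
# rows index observations, columns index IRCO iterations; small weights
# in the last column flag potential outliers (0.5 is an arbitrary cutoff).
w_log <- bst_cc$weight_update_log
dim(w_log)                         # nobs x iter
which(w_log[, ncol(w_log)] < 0.5)  # observations downweighted the most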
# }
