Function for fitting gradient boosted logicDT models.
Usage

# S3 method for default
logicDT.boosting(
  X,
  y,
  Z = NULL,
  boosting.iter = 500,
  learning.rate = 0.01,
  subsample.frac = 1,
  replace = TRUE,
  line.search = "min",
  ...
)

# S3 method for formula
logicDT.boosting(formula, data, ...)
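A minimal usage sketch with simulated data (the data here are hypothetical; only the call signature above is taken from the documentation):

```r
library(logicDT)

# Simulated data: 10 binary predictors, binary response driven
# by the interaction of the first two predictors
set.seed(1)
X <- matrix(rbinom(100 * 10, 1, 0.5), nrow = 100)
y <- rbinom(100, 1, plogis(X[, 1] * X[, 2] - 0.5))

# Fit a gradient boosted logicDT model with subsampling
fit <- logicDT.boosting(X, y,
                        boosting.iter = 100,
                        learning.rate = 0.01,
                        subsample.frac = 0.5)
```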
Value

An object of class logic.boosted. This is a list containing

models: A list of fitted logicDT models.
rho: A vector of boosting coefficients corresponding to each model.
initialModel: Initial model, which is usually the observed mean.
...: Supplied parameters of the function call to logicDT.boosting.
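In the spirit of Friedman's gradient boosting, these components combine additively: the initial model plus the rho-weighted contributions of the individual models. A conceptual sketch (the per-model predict call is an assumption about the interface, not documented above):

```r
# Conceptual aggregation of a fitted logic.boosted object "fit"
# on new data "X". Assumes each element of fit$models supports
# predict(); the package's own predict method should be preferred.
pred <- fit$initialModel
for (m in seq_along(fit$models)) {
  pred <- pred + fit$rho[m] * predict(fit$models[[m]], X)
}
```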
Arguments

X: Matrix or data frame of binary predictors coded as 0 or 1.
y: Response vector. 0-1 coding for binary responses; otherwise, a regression task is assumed.
Z: Optional matrix or data frame of quantitative/continuous covariables. Multiple covariables are allowed for splitting the trees. If leaf regression models (such as four-parameter logistic models) shall be fitted, only the first given covariable is used.
boosting.iter: Number of boosting iterations.
learning.rate: Learning rate for boosted models. Values between 0.001 and 0.1 are recommended.
subsample.frac: Subsample fraction for each boosting iteration. E.g., 0.5 means that a random draw of 50% of the training data is used in each iteration.
replace: Should the random draws with subsample.frac in boosted models be performed with or without replacement? TRUE or FALSE.
line.search: Type of line search for gradient boosting. "min" performs a real minimization while "binary" performs a loose binary search for a boosting coefficient that just reduces the score.
...: Arguments passed to logicDT.
formula: An object of type formula describing the model to be fitted.
data: A data frame containing the data for the corresponding formula object. Must also contain quantitative covariables if they should be included as well.
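To illustrate the "binary" line search option, a loose binary (halving) search can be sketched as follows. This is an illustrative stand-in, not the package's internal code; the function name and interface are hypothetical:

```r
# Hypothetical halving line search: shrink rho until adding
# rho * update to the current prediction reduces the score.
# "score" is any loss function of the predictions.
binary_line_search <- function(score, current_pred, update, rho = 1) {
  base <- score(current_pred)
  while (score(current_pred + rho * update) >= base && rho > 1e-8) {
    rho <- rho / 2
  }
  rho
}
```

In contrast, line.search = "min" would minimize the score over the boosting coefficient instead of stopping at the first value that merely improves it.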
Details

Details on single logicDT models can be found in logicDT.
References

Lau, M., Schikowski, T. & Schwender, H. (2024). logicDT: A procedure for identifying response-associated interactions between binary predictors. Machine Learning, 113(2), 933–992. doi:10.1007/s10994-023-06488-6

Friedman, J. H. (2001). Greedy Function Approximation: A Gradient Boosting Machine. The Annals of Statistics, 29(5), 1189–1232. doi:10.1214/aos/1013203451