train_lightgbm is a wrapper for lightgbm tree-based models
where all of the model arguments are in the main function.
Usage:

train_lightgbm(
  x,
  y,
  weights = NULL,
  max_depth = -1,
  num_iterations = 100,
  learning_rate = 0.1,
  feature_fraction_bynode = 1,
  min_data_in_leaf = 20,
  min_gain_to_split = 0,
  bagging_fraction = 1,
  early_stopping_round = NULL,
  validation = 0,
  counts = TRUE,
  quiet = FALSE,
  ...
)

Value:

A fitted lightgbm.Model object.
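Though train_lightgbm() is not meant to be called directly (see Details below), a short sketch can show how it is typically reached: through parsnip's "lightgbm" engine, which bonsai registers. The dataset (mtcars) and the specific argument values below are illustrative assumptions, not part of this page.

library(bonsai)   # registers the "lightgbm" engine for parsnip
library(parsnip)

# parsnip's boost_tree() arguments map onto train_lightgbm()'s:
# trees -> num_iterations, learn_rate -> learning_rate,
# min_n -> min_data_in_leaf, tree_depth -> max_depth
spec <- boost_tree(trees = 100, learn_rate = 0.1, min_n = 20) |>
  set_engine("lightgbm") |>
  set_mode("regression")

# fit() translates the specification and calls train_lightgbm() internally
fit(spec, mpg ~ ., data = mtcars)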
Arguments:

x: A data frame or matrix of predictors.

y: A vector (factor or numeric) or matrix (numeric) of outcome data.

weights: A numeric vector of sample weights.

max_depth: An integer for the maximum depth of the tree.

num_iterations: An integer for the number of boosting iterations.

learning_rate: A numeric value between zero and one to control the learning rate.

feature_fraction_bynode: The fraction of predictors that will be randomly sampled at each split.

min_data_in_leaf: A numeric value for the minimum number of instances needed in a leaf to continue splitting.

min_gain_to_split: A number for the minimum loss reduction required to make a further partition on a leaf node of the tree.

bagging_fraction: The subsampling proportion of rows. Setting this argument to a non-default value will also set bagging_freq = 1. See the Bagging section in ?details_boost_tree_lightgbm for more details.

early_stopping_round: The number of iterations without an improvement in the objective function that can occur before training is halted.

validation: The proportion of the training data that is used for performance assessment and potential early stopping.

counts: A logical; should feature_fraction_bynode be interpreted as the number of predictors that will be randomly sampled at each split? TRUE indicates that mtry will be interpreted in its sense as a count; FALSE indicates that the argument will be interpreted in its sense as a proportion. See the sketch after this list.

quiet: A logical; should logging by lightgbm::lgb.train() be muted?

...: Other options to pass to lightgbm::lgb.train(). Arguments will be correctly routed to the params argument, or as a main argument, depending on their name; the sketch after this list passes num_leaves this way.
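A sketch of how the counts and ... arguments surface through parsnip's set_engine(); the values are illustrative. With counts = FALSE, parsnip's mtry is read as a proportion of predictors (here 70%), and any engine argument parsnip does not own (num_leaves below) is passed through ... and routed to lightgbm::lgb.train().

library(bonsai)

boost_tree(mtry = 0.7, trees = 100) |>
  set_engine(
    "lightgbm",
    counts = FALSE,   # interpret mtry = 0.7 as a proportion, not a count
    num_leaves = 15   # unrecognized by parsnip, so routed via ... to lgb.train()
  ) |>
  set_mode("regression") |>
  fit(mpg ~ ., data = mtcars)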
Details:

This is an internal function, not meant to be directly called by the user.
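That said, the function is exported (parsnip needs to call it), so a direct call can still illustrate the signature. This is a sketch only, assuming a numeric outcome is treated as a regression problem; the argument values are illustrative.

library(bonsai)

# Hold out 20% of rows for assessment; stop if the objective fails to
# improve on that holdout for 10 consecutive iterations.
booster <- bonsai::train_lightgbm(
  x = as.matrix(mtcars[, -1]),
  y = mtcars$mpg,
  num_iterations = 200,
  learning_rate = 0.05,
  validation = 0.2,
  early_stopping_round = 10,
  quiet = TRUE
)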