Trains specified machine learning algorithms on the preprocessed training data.
train_models(
train_data,
label,
task,
algorithms,
resampling_method,
folds,
repeats,
resamples = NULL,
tune_params,
engine_params = list(),
metric,
summaryFunction = NULL,
seed = 123,
recipe,
use_default_tuning = FALSE,
tuning_strategy = "grid",
tuning_iterations = 10,
early_stopping = FALSE,
adaptive = FALSE,
algorithm_engines = NULL
)

Returns a list of trained model objects.
Arguments:

train_data: Preprocessed training data frame.

label: Name of the target variable.

task: Type of task: "classification", "regression", or "survival".

algorithms: Vector of algorithm names to train.

resampling_method: Resampling method for cross-validation (e.g., "cv", "repeatedcv", "boot", "none").

folds: Number of folds for cross-validation.

repeats: Number of times to repeat cross-validation (only applicable for methods such as "repeatedcv").

resamples: Optional rsample object. If provided, these custom resampling splits are used instead of splits created internally.
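A custom split set for resamples can be built with the rsample package; the following is a sketch using a built-in dataset for illustration:

```r
library(rsample)

# 5-fold cross-validation splits, stratified on the outcome.
# Any data frame with the target column works here; iris is illustrative.
custom_splits <- vfold_cv(iris, v = 5, strata = Species)
```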
tune_params: A named list of tuning ranges. For each algorithm, supply a list of engine-specific parameter values, e.g. list(rand_forest = list(ranger = list(mtry = c(1, 3)))).
engine_params: A named list of fixed engine-level arguments passed directly to the model fitting call for each algorithm/engine combination. Use this to set options such as ties = "breslow" for Cox models or importance = "impurity" for ranger. Unlike tune_params, these values are not tuned over a grid.
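To illustrate the distinction between the two lists, they might be constructed as follows (a sketch; the nesting follows the documented algorithm/engine structure, while the specific values are arbitrary examples):

```r
# tune_params: ranges that are searched over during tuning.
tune_params <- list(
  rand_forest = list(
    ranger = list(
      mtry = c(1, 3)          # tuned across this range
    )
  )
)

# engine_params: fixed arguments passed straight to the engine's
# fitting call; these are never tuned.
engine_params <- list(
  rand_forest = list(
    ranger = list(importance = "impurity")
  )
)
```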
metric: The performance metric to optimize.

summaryFunction: A custom summary function for model evaluation. Default is NULL.

seed: An integer specifying the random seed for reproducibility.

recipe: A recipe object for preprocessing.

use_default_tuning: Logical; if TRUE and tune_params is NULL, tuning is performed over default grids. Tuning also occurs whenever custom tune_params are supplied. If FALSE and no custom parameters are given, the model is fitted once with default settings.
tuning_strategy: A string specifying the tuning strategy; one of "grid", "bayes", or "none". Adaptive methods may be used with "grid". If "none" is selected, the workflow is fitted directly without tuning. If custom tune_params are supplied with tuning_strategy = "none", they are ignored with a warning.
tuning_iterations: Number of iterations for Bayesian tuning. Ignored when tuning_strategy is not "bayes"; the value is validated only for the Bayesian strategy.

early_stopping: Logical; whether to stop Bayesian tuning early.

adaptive: Logical; whether to use adaptive/racing methods.

algorithm_engines: A named list specifying the engine to use for each algorithm.
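A minimal call might look like the following sketch. It assumes the package exporting train_models is loaded; the dataset, recipe, algorithm names, and the list shape of algorithm_engines are illustrative, not prescribed by the function:

```r
library(recipes)

# Illustrative classification setup; iris's columns are real,
# but the call itself is only a sketch of the documented interface.
rec <- recipe(Species ~ ., data = iris) |>
  step_normalize(all_numeric_predictors())

models <- train_models(
  train_data         = iris,
  label              = "Species",
  task               = "classification",
  algorithms         = c("rand_forest", "logistic_reg"),
  resampling_method  = "cv",
  folds              = 5,
  repeats            = 1,
  tune_params        = NULL,
  metric             = "accuracy",
  seed               = 123,
  recipe             = rec,
  use_default_tuning = TRUE,   # tune over default grids
  tuning_strategy    = "grid",
  algorithm_engines  = list(rand_forest = "ranger")  # assumed format
)
```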