
AutoXGBoostMultiClass is an automated XGBoost modeling framework with grid tuning and model evaluation that runs a series of steps. First, stratified sampling (by the target variable) is used to create train and validation sets. Then, the function runs a random grid tune over N models and identifies the best one (a default model is always included in that set). Once the best model is identified and built, several other outputs are generated: validation data with predictions, evaluation metrics, variable importance, and the column names used in model fitting.
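For intuition, here is a minimal sketch of a stratified train/validation split by the target variable, in the spirit of the first step above. This is an illustration only, not the package's internal code; the 80/20 split fraction and the seed are assumptions:

set.seed(1)
dt <- data.table::data.table(Target = sample(c("A", "B", "C"), 1000, replace = TRUE),
                             x = rnorm(1000))
# Take 80% of the rows within each target level for training
train_idx <- dt[, .I[sample(.N, floor(0.8 * .N))], by = Target]$V1
train <- dt[train_idx]
valid <- dt[-train_idx]
# Class proportions are preserved (approximately) in both splits
prop.table(table(train$Target))
prop.table(table(valid$Target))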
Usage
AutoXGBoostMultiClass(data, ValidationData = NULL, TestData = NULL,
  TargetColumnName = NULL, FeatureColNames = NULL, IDcols = NULL,
  eval_metric = "merror", Trees = 50, GridTune = FALSE,
  grid_eval_metric = "merror", TreeMethod = "hist",
  Objective = "multi:softmax", MaxModelsInGrid = 10, NThreads = 8,
  model_path = NULL, ModelID = "FirstModel", Verbose = 0,
  ReturnModelObjects = TRUE, SaveModelObjects = FALSE,
  PassInGrid = NULL)
Arguments
data: This is your data set for training and testing your model.
ValidationData: This is your holdout data set, used during modeling to refine your hyperparameters.
TestData: This is your holdout data set. XGBoost uses both the training and validation data in the training process, so you should evaluate out-of-sample performance with this data set.
TargetColumnName: Either supply the target column name OR the column number where the target is located (but not mixed types). The target should be in factor or character form.
FeatureColNames: Either supply the feature column names OR the column numbers where the features are located (but not mixed types).
IDcols: A vector of column names or column numbers to keep in your data but not include in the modeling.
eval_metric: The evaluation metric used to assess models during training. Choose from "merror" or "mlogloss".
Trees: The maximum number of trees you want in your models.
GridTune: Set to TRUE to run a grid-tuning procedure. Set MaxModelsInGrid to tell the procedure how many models you want to test.
grid_eval_metric: Set to "accuracy" (the only option currently).
TreeMethod: Choose from "hist" or "gpu_hist".
Objective: Choose from "multi:softmax" or "multi:softprob"; see the sketch after this list for the difference.
MaxModelsInGrid: The number of models to test from the grid options (243 possible combinations in total).
NThreads: The maximum number of threads you'd like to dedicate to the model run, e.g. 8.
model_path: A character string of the file path where you want your output saved.
ModelID: A character string to name your model and output.
Verbose: Set to 0 to suppress model evaluation updates during training.
ReturnModelObjects: Set to TRUE to return all modeling objects (e.g. the model, evaluation metrics, and variable importance) to your environment.
SaveModelObjects: Set to TRUE to save all modeling objects to file in model_path.
PassInGrid: Default is NULL. Provide a data.table of grid options from a previous run.
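To clarify the two Objective options, here is a minimal standalone xgboost sketch (plain xgboost calls on the iris data; this is not AutoXGBoostMultiClass internals). "multi:softmax" predicts one class index per row, while "multi:softprob" predicts one probability per class per row:

library(xgboost)
X <- as.matrix(iris[, 1:4])
y <- as.integer(iris$Species) - 1L  # xgboost expects 0-based integer class labels
dtrain <- xgb.DMatrix(X, label = y)
base_params <- list(num_class = 3)
m_softmax <- xgb.train(c(base_params, objective = "multi:softmax"),
                       data = dtrain, nrounds = 10)
m_softprob <- xgb.train(c(base_params, objective = "multi:softprob"),
                        data = dtrain, nrounds = 10)
head(predict(m_softmax, dtrain))   # one class index per row: 0, 1, or 2
head(predict(m_softprob, dtrain))  # flat vector of probabilities, num_class values per row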
Value
Saved to file and/or returned in a list: VariableImportance.csv, Model, ValidationData.csv, EvaluationMetrics.csv, GridCollect, GridList, and TargetLevels.
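As a usage sketch, assuming the returned list elements carry the names above without the .csv suffix (an assumption; only the saved file names are documented), the objects from the Examples run below could be accessed as:

TestModel$Model               # the trained model object
TestModel$ValidationData      # validation data with predictions
TestModel$EvaluationMetrics   # evaluation metrics
TestModel$VariableImportance  # variable importance table
TestModel$GridList            # grid options; reusable via PassInGrid in a later run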
See Also
Other Automated MultiClass Classification: AutoCatBoostMultiClass, AutoH2oDRFMultiClass, AutoH2oGBMMultiClass
Examples
# NOT RUN {
# Create simulated data: ten numeric predictors correlated with the target
Correl <- 0.85
N <- 10000
data <- data.table::data.table(Target = runif(N))
data[, x1 := qnorm(Target)]
data[, x2 := runif(N)]
data[, Independent_Variable1 := log(pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))]
data[, Independent_Variable2 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))]
data[, Independent_Variable3 := exp(pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))]
data[, Independent_Variable4 := exp(exp(pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2))))]
data[, Independent_Variable5 := sqrt(pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))]
data[, Independent_Variable6 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))^0.10]
data[, Independent_Variable7 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))^0.25]
data[, Independent_Variable8 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))^0.75]
data[, Independent_Variable9 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))^2]
data[, Independent_Variable10 := (pnorm(Correl * x1 +
  sqrt(1 - Correl^2) * qnorm(x2)))^4]

# Bin the target into a five-level factor
data[, Target := as.factor(
  ifelse(Independent_Variable2 < 0.20, "A",
    ifelse(Independent_Variable2 < 0.40, "B",
      ifelse(Independent_Variable2 < 0.60, "C",
        ifelse(Independent_Variable2 < 0.80, "D", "E")))))]

# Add a categorical predictor
data[, Independent_Variable11 := as.factor(
  ifelse(Independent_Variable2 < 0.25, "A",
    ifelse(Independent_Variable2 < 0.35, "B",
      ifelse(Independent_Variable2 < 0.65, "C",
        ifelse(Independent_Variable2 < 0.75, "D", "E")))))]

# Drop the helper columns used to build the simulation
data[, ':=' (x1 = NULL, x2 = NULL)]

# Run the automated modeling function with grid tuning
TestModel <- AutoXGBoostMultiClass(data,
  ValidationData = NULL,
  TestData = NULL,
  TargetColumnName = 1,
  FeatureColNames = 2:12,
  IDcols = NULL,
  eval_metric = "merror",
  Trees = 50,
  GridTune = TRUE,
  grid_eval_metric = "accuracy",
  MaxModelsInGrid = 10,
  NThreads = 8,
  TreeMethod = "hist",
  Objective = "multi:softmax",
  model_path = getwd(),
  ModelID = "FirstModel",
  ReturnModelObjects = TRUE,
  SaveModelObjects = FALSE,
  PassInGrid = NULL)
# }