AutoCatBoostRegression is an automated modeling function that runs a variety of steps. First, the function runs a random grid tune over N models and identifies the best one (a default model is always included in that set). Once the best model is identified and built, several other outputs are generated: validation data with predictions, an evaluation plot, an evaluation boxplot, evaluation metrics, variable importance, partial dependence calibration plots, partial dependence calibration box plots, and the column names used in model fitting. You can install the catboost package using devtools; a runnable form of the command is shown below.
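Installing the catboost R package from GitHub (the devtools command referenced in the description; devtools itself must already be installed):

# Install the catboost R package from GitHub
devtools::install_github('catboost/catboost', subdir = 'catboost/R-package')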
AutoCatBoostRegression(data, ValidationData = NULL, TestData = NULL,
TargetColumnName = NULL, FeatureColNames = NULL,
PrimaryDateColumn = NULL, IDcols = NULL,
TransformNumericColumns = NULL, task_type = "GPU",
eval_metric = "RMSE", Alpha = NULL, Trees = 50, GridTune = FALSE,
grid_eval_metric = "mae", MaxModelsInGrid = 10, model_path = NULL,
ModelID = "FirstModel", NumOfParDepPlots = 3,
ReturnModelObjects = TRUE, SaveModelObjects = FALSE,
PassInGrid = NULL)
This is your data set for training and testing your model
This is your holdout data set used in modeling to refine your hyperparameters. CatBoost uses both the training and validation data during the training process, so out-of-sample performance should be evaluated with TestData rather than this set.
This is your holdout data set. CatBoost uses both the training and validation data during the training process, so you should evaluate out-of-sample performance with this data set.
Either supply the target column name OR the column number where the target is located (but not mixed types).
Either supply the feature column names OR the column numbers where the features are located (but not mixed types).
Supply a date or datetime column for catboost to utilize time as its basis for handling categorical features, instead of random shuffling
A vector of column names or column numbers to keep in your data but not include in the modeling.
Set to NULL to do nothing; otherwise supply the column names of numeric variables you want transformed
Set to "GPU" to utilize your GPU for training. Default is "CPU".
This is the metric used inside catboost to measure performance on validation data during a grid-tune. "RMSE" is the default, but other options include: "MAE", "MAPE", "Poisson", "Quantile", "LogLinQuantile", "Lq", "NumErrors", "SMAPE", "R2", "MSLE", "MedianAbsoluteError".
This is the quantile value you want to use for quantile regression. Must be a decimal between 0 and 1.
The maximum number of trees you want in your models
Set to TRUE to run a grid tuning procedure. Set a number in MaxModelsInGrid to tell the procedure how many models you want to test.
This is the metric used to evaluate the models in the grid tune and select the best one. Options: 'poisson', 'mae', 'mape', 'mse', 'msle', 'kl', 'cs', 'r2'.
Number of models to test from grid options (1080 total possible options)
A character string of the file path where you want your output saved.
A character string to name your model and output
Tell the function the number of partial dependence calibration plots you want to create. Calibration boxplots will only be created for numerical features (not dummy variables)
Set to TRUE to return all modeling objects (e.g., plots and evaluation metrics) to your environment.
Set to TRUE to save all modeling objects to the location specified in model_path.
Defaults to NULL. Pass in a single row of grid from a previous output as a data.table (they are collected as data.tables); see the sketch following this argument list for re-using a grid row.
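A minimal sketch of the grid-tune and PassInGrid workflow, using the simulated data set from the example at the bottom of this page. Assumptions: the collected grid is exposed in the returned list under the name catboostgrid (listed among the outputs below), and refitting with its first row is purely illustrative.

# Grid tune: evaluate MaxModelsInGrid candidate models and collect the grid
GridRun <- AutoCatBoostRegression(data,
                                  TargetColumnName = "Target",
                                  FeatureColNames = c(2:12),
                                  task_type = "CPU",      # switch to "GPU" if one is available
                                  GridTune = TRUE,
                                  MaxModelsInGrid = 10,
                                  grid_eval_metric = "mae",
                                  ReturnModelObjects = TRUE)

# Assumption: the collected grid is returned as the data.table 'catboostgrid';
# pass a single row back in to refit that configuration without re-tuning
Refit <- AutoCatBoostRegression(data,
                                TargetColumnName = "Target",
                                FeatureColNames = c(2:12),
                                task_type = "CPU",
                                GridTune = FALSE,
                                PassInGrid = GridRun$catboostgrid[1])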
Saved to file (when SaveModelObjects = TRUE) and returned in a list: VariableImportance.csv, Model, ValidationData.csv, EvaluationPlot.png, EvaluationBoxPlot.png, EvaluationMetrics.csv, ParDepPlots.R (a named list of features with partial dependence calibration plots), ParDepBoxPlots.R, GridCollect, catboostgrid, and a transformation details file.
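A minimal sketch of inspecting the returned list when ReturnModelObjects = TRUE, using the TestModel object created in the example below. The element names are assumptions inferred from the items listed above, not confirmed accessor names.

# Assumed element names, inferred from the output items listed above
TestModel$Model                 # trained catboost model object
TestModel$ValidationData        # validation data with predictions appended
TestModel$EvaluationMetrics     # evaluation metrics
TestModel$VariableImportance    # variable importance table
TestModel$ParDepPlots           # named list of partial dependence calibration plots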
Other Automated Regression: AutoCatBoostHurdleModel, AutoH2oDRFHurdleModel, AutoH2oDRFRegression, AutoH2oGBMHurdleModel, AutoH2oGBMRegression, AutoNLS, AutoXGBoostHurdleModel, AutoXGBoostRegression
# NOT RUN {
# Simulate a data set with features correlated with the target
Correl <- 0.85
N <- 1000
data <- data.table::data.table(Target = runif(N))
data[, x1 := qnorm(Target)]
data[, x2 := runif(N)]
data[, Independent_Variable1 := log(pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))]
data[, Independent_Variable2 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))]
data[, Independent_Variable3 := exp(pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))]
data[, Independent_Variable4 := exp(exp(pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2))))]
data[, Independent_Variable5 := sqrt(pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))]
data[, Independent_Variable6 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))^0.10]
data[, Independent_Variable7 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))^0.25]
data[, Independent_Variable8 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))^0.75]
data[, Independent_Variable9 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))^2]
data[, Independent_Variable10 := (pnorm(Correl * x1 +
sqrt(1-Correl^2) * qnorm(x2)))^4]
# Bucket Independent_Variable2 into a categorical feature
data[, Independent_Variable11 := as.factor(
ifelse(Independent_Variable2 < 0.20, "A",
ifelse(Independent_Variable2 < 0.40, "B",
ifelse(Independent_Variable2 < 0.6, "C",
ifelse(Independent_Variable2 < 0.8, "D", "E")))))]
# Drop the helper variables used to generate the simulated features
data[, ':=' (x1 = NULL, x2 = NULL)]
# Fit the model (task_type = "GPU" here; set to "CPU" if no GPU is available)
TestModel <- AutoCatBoostRegression(data,
ValidationData = NULL,
TestData = NULL,
TargetColumnName = "Target",
FeatureColNames = c(2:12),
PrimaryDateColumn = NULL,
IDcols = NULL,
TransformNumericColumns = NULL,
MaxModelsInGrid = 1,
task_type = "GPU",
eval_metric = "RMSE",
Alpha = NULL,
grid_eval_metric = "r2",
Trees = 50,
GridTune = FALSE,
model_path = NULL,
ModelID = "ModelTest",
NumOfParDepPlots = 3,
ReturnModelObjects = TRUE,
SaveModelObjects = FALSE,
PassInGrid = NULL)
# }