
DriveML (version 0.1.0)

autoMLmodel: Automated machine learning training of models

Description

Automated training, tuning and validation of machine learning models. Models are tuned and validated by resampling on an experiment set, trained on the full training set, and then validated and tested on external sets. Classification models automatically tune the probability threshold and return the results. Each model's output contains performance information, the trained model object and a set of plots.

Usage

autoMLmodel(train, test = NULL, score = NULL, target = NULL,
  testSplit = 0.2, tuneIters = 200, tuneType = "random",
  models = "all", perMetric = "auc", varImp = 20, liftGroup = 50,
  maxObs = 10000, uid = NULL, pdp = FALSE, positive = 1,
  htmlreport = FALSE, seed = 1991, verbose = FALSE)

Arguments

train

[data.frame | Required] Training set

test

[data.frame | Optional] Optional testing set to validate models on. If none is provided, one will be created internally. Default of NULL

score

[data.frame | Optional] Optional scoring set. The best trained model (selected by AUC) is used to score this data. If none is provided, the score list will be NULL. Default of NULL

target

[character | Required] Name of the target variable. If a target is provided, classification or regression models will be trained; if left as NULL, unsupervised models will be trained. Default of NULL

testSplit

[numeric | Optional] Percentage of data to allocate to the test set. Stratified sampling is used. Default of 0.2

tuneIters

[integer | Optional] Number of tuning iterations used to search for optimal hyperparameters. Default of 200

tuneType

[character | Optional] Tuning method applied. Default of "random". Options are (see the sketch after this list):

  • "random" - random search hyperparameter tuning

  • "frace - frace uses iterated f-racing algorithm for the best solution from irace package

models

[character | Optional] Which models to train. Default of "all". List of strings denoting which algorithms to use for the process (see the sketch after this list):

  • "randomForest" - random forests using the randomForest package

  • "ranger" - random forests using the ranger package

  • "xgboost" - gradient boosting using the xgboost package

  • "rpart" - decision tree classification using rpart

  • "glmnet" - regularised regression from glmnet

  • "logreg" - logistic regression from stats
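
A short sketch of restricting training to a subset of these algorithms, again assuming the heart data and 'target_var' column from the Examples section:

# Train only tree ensembles and regularised regression
subset_models <- autoMLmodel(train = heart, target = 'target_var',
                             models = c("randomForest", "xgboost", "glmnet"),
                             tuneIters = 10, seed = 1991)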

perMetric

[character | Optional] Model validation metric. Default of "auc". Options are (see the sketch after this list):

  • auc - Area under the curve; mlr::auc

  • accuracy - Accuracy; mlr::acc

  • balancedAccuracy - Balanced accuracy; mlr::bac

  • brier - Brier score; mlr::brier

  • f1 - F1 measure; mlr::f1

  • meanPrecRecall - Geometric mean of precision and recall; mlr::gpr

  • logloss - Logarithmic loss; mlr::logloss
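
A short sketch of validating models on logarithmic loss instead of the default AUC (same assumptions as the sketches above):

# Compare models on log loss rather than AUC
logloss_model <- autoMLmodel(train = heart, target = 'target_var',
                             models = "logreg", perMetric = "logloss",
                             tuneIters = 10, seed = 1991)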

varImp

[integer | Optional] Number of important features to plot. Default of 20

liftGroup

[integer | Optional] Number of lift groups used to validate model performance on the test set. Default of 50

maxObs

[numeric | Optional] Number of observations in the experiment training set on which models are trained, tuned and resampled. Default of 10000. If the training set has fewer observations than maxObs, all of them are used

uid

[character | Optional] Name of a unique identifier variable to keep in the test output data. Default of NULL

pdp

[logical | Optional] Whether to produce partial dependence plots for the top important variables. Default of FALSE

positive

[character | Optional] Positive class of the target variable. Default of 1

htmlreport

[logical | Optional] Whether to generate an HTML report of the model outcome. Default of FALSE

seed

[integer | Optional] Random number seed for reproducible results. Default of 1991

verbose

[logical | Optional] Whether to display execution steps on the console. Default of FALSE

Value

A list containing the trained models and their results

Details

All models are trained using the mlr train function, so all of the functionality in the mlr package can be applied to the autoMLmodel outcome.

autoMLmodel provides the following information for the trained machine learning classification models:

  • trainedModels - Model-level list containing the trained model object, hyperparameters, tuned data, test data, performance measures and model plots

  • results - Summary of all trained model results, such as AUC, precision, recall and F1 score

  • modelexp - Model gain chart

  • predicted_score - Predicted score

  • datasummary - Summary of the input data
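
A sketch of inspecting these components on a fitted object, using the mymodel object created in the Examples section; the component names follow the list above:

# Performance summary of all trained models (AUC, precision, recall, F1)
mymodel$results

# Gain chart comparing the trained models
mymodel$modelexp

# Predicted scores and a summary of the input data
head(mymodel$predicted_score)
mymodel$datasummary

# Per-model output: trained model object, hyperparameters, test data,
# performance measures and plots
names(mymodel$trainedModels)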

See Also

mlr::train, caret::train, mlr::makeLearner, mlr::tuneParams

Examples

# Run only Logistic regression model
mymodel <- autoMLmodel(train = heart, test = NULL, target = 'target_var',
                       testSplit = 0.2, tuneIters = 10, tuneType = "random",
                       models = "logreg", varImp = 10, liftGroup = 50,
                       maxObs = 4000, uid = NULL, seed = 1991)
