h2o4gpu (version 0.2.0)

h2o4gpu.gradient_boosting_classifier: Gradient Boosting Classifier

Description

Gradient Boosting Classifier

Usage

h2o4gpu.gradient_boosting_classifier(loss = "deviance", learning_rate = 0.1,
  n_estimators = 100L, subsample = 1, criterion = "friedman_mse",
  min_samples_split = 2L, min_samples_leaf = 1L,
  min_weight_fraction_leaf = 0, max_depth = 3L, min_impurity_decrease = 0,
  min_impurity_split = NULL, init = NULL, random_state = NULL,
  max_features = "auto", verbose = 0L, max_leaf_nodes = NULL,
  warm_start = FALSE, presort = "auto", colsample_bytree = 1,
  num_parallel_tree = 1L, tree_method = "gpu_hist", n_gpus = -1L,
  predictor = "gpu_predictor", backend = "h2o4gpu")

Arguments

loss

The loss function to be optimized. 'deviance' refers to deviance (= logistic regression) for classification with probabilistic outputs. For loss 'exponential', gradient boosting recovers the AdaBoost algorithm.

learning_rate

learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.

n_estimators

The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.

subsample

The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias.

criterion

The function to measure the quality of a split. Supported criteria are "friedman_mse" for the mean squared error with improvement score by Friedman, "mse" for mean squared error, and "mae" for the mean absolute error. The default value of "friedman_mse" is generally the best as it can provide a better approximation in some cases.

min_samples_split

The minimum number of samples required to split an internal node. If an integer, it is taken as the minimum count; if a fraction, ceiling(min_samples_split * n_samples) samples are required for each split.

min_samples_leaf

The minimum number of samples required to be at a leaf node. If an integer, it is taken as the minimum count; if a fraction, ceiling(min_samples_leaf * n_samples) samples are required at each node.

min_weight_fraction_leaf

The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

max_depth

maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables.

min_impurity_decrease

A node will be split if this split induces a decrease of the impurity greater than or equal to this value.

min_impurity_split

Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.

init

An estimator object that is used to compute the initial predictions. init has to provide fit and predict. If NULL it uses loss.init_estimator.

random_state

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If NULL, the random number generator is the RandomState instance used by np.random.

max_features

The number of features to consider when looking for the best split. If an integer, max_features features are considered at each split; if a fraction, int(max_features * n_features) features are considered. "auto" and "sqrt" use sqrt(n_features), "log2" uses log2(n_features), and NULL uses all n_features.

verbose

Enable verbose output. If 1 then it prints progress and performance once in a while (the more trees the lower the frequency). If greater than 1 then it prints progress and performance for every tree.

max_leaf_nodes

Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If NULL then unlimited number of leaf nodes.

warm_start

When set to TRUE, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just erase the previous solution.

presort

Whether to presort the data to speed up the finding of best splits during fitting. The default "auto" uses presorting on dense data and normal sorting on sparse data. Setting presort to TRUE on sparse data will raise an error.

colsample_bytree

Subsample ratio of columns when constructing each tree.

num_parallel_tree

Number of trees to grow per round.

tree_method

The tree construction algorithm used in XGBoost. The distributed and external-memory versions only support the approximate algorithm. Choices: 'auto', 'exact', 'approx', 'hist', 'gpu_exact', 'gpu_hist'. 'auto': use a heuristic to choose the faster method; exact greedy is used for small to medium datasets and the approximate algorithm is chosen for very large datasets (a message is printed when this happens, since the old behavior was to always use exact greedy on a single machine). 'exact': exact greedy algorithm. 'approx': approximate greedy algorithm using sketching and histograms. 'hist': fast histogram-optimized approximate greedy algorithm with performance improvements such as bin caching. 'gpu_exact': GPU implementation of the exact algorithm. 'gpu_hist': GPU implementation of the hist algorithm. A CPU-only configuration sketch is given after this argument list.

n_gpus

Number of GPUs to use in the GradientBoostingClassifier solver. Default is -1.

predictor

The type of predictor algorithm to use. Provides the same results but allows the use of GPU or CPU. - 'cpu_predictor': Multicore CPU prediction algorithm. - 'gpu_predictor': Prediction using GPU. Default for 'gpu_exact' and 'gpu_hist' tree method.

backend

Which backend to use. Options are 'auto', 'sklearn', 'h2o4gpu'. The backend actually used is stored as an attribute of the model.
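
The defaults above target GPU execution (tree_method = "gpu_hist", predictor = "gpu_predictor"). The sketch below, under the same assumptions as the earlier example, shows how the arguments documented here might be combined: a CPU-only configuration, a scikit-learn fallback via backend, and a stochastic-boosting setup that trades a lower learning_rate for more estimators together with row and column subsampling. The values are illustrative, not tuned recommendations.

library(h2o4gpu)

# CPU-only configuration: histogram algorithm and CPU predictor
cpu_model <- h2o4gpu.gradient_boosting_classifier(
  tree_method = "hist",
  predictor   = "cpu_predictor"
)

# Force the scikit-learn implementation instead of the GPU one
sk_model <- h2o4gpu.gradient_boosting_classifier(backend = "sklearn")

# Stochastic gradient boosting: smaller learning_rate offset by more trees,
# plus row (subsample) and column (colsample_bytree) subsampling
sgb_model <- h2o4gpu.gradient_boosting_classifier(
  learning_rate    = 0.05,
  n_estimators     = 300L,
  subsample        = 0.8,
  colsample_bytree = 0.8,
  random_state     = 42L
)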