Usage
h2o.deeplearning(x, y, training_frame, model_id = "",
  overwrite_with_best_model, n_folds = 0, validation_frame, checkpoint,
  autoencoder = FALSE, use_all_factor_levels = TRUE,
  activation = c("Rectifier", "Tanh", "TanhWithDropout",
    "RectifierWithDropout", "Maxout", "MaxoutWithDropout"),
  hidden = c(200, 200), epochs = 10, train_samples_per_iteration = -2,
  seed, adaptive_rate = TRUE, rho = 0.99, epsilon = 1e-08, rate = 0.005,
  rate_annealing = 1e-06, rate_decay = 1, momentum_start = 0,
  momentum_ramp = 1e+06, momentum_stable = 0,
  nesterov_accelerated_gradient = TRUE, input_dropout_ratio = 0,
  hidden_dropout_ratios, l1 = 0, l2 = 0, max_w2 = Inf,
  initial_weight_distribution = c("UniformAdaptive", "Uniform", "Normal"),
  initial_weight_scale = 1,
  loss = c("Automatic", "CrossEntropy", "MeanSquare", "Absolute", "Huber"),
  score_interval = 5, score_training_samples, score_validation_samples,
  score_duty_cycle, classification_stop, regression_stop, quiet_mode,
  max_confusion_matrix_size, max_hit_ratio_k, balance_classes = FALSE,
  class_sampling_factors, max_after_balance_size,
  score_validation_sampling, diagnostics, variable_importances,
  fast_mode, ignore_const_cols, force_load_balance,
  replicate_training_data, single_node_mode, shuffle_training_data,
  sparse, col_major, average_activation, sparsity_beta,
  max_categorical_features, reproducible = FALSE,
  export_weights_and_biases = FALSE, ...)
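A minimal sketch of a call, assuming a local H2O cluster can be started and using base R's iris dataset (the iris_hex name and the column names are illustrative; some H2O versions rename uploaded columns):

library(h2o)
h2o.init()                      # start or connect to a local H2O cluster
iris_hex <- as.h2o(iris)        # upload iris as an H2OFrame
model <- h2o.deeplearning(
  x = c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
  y = "Species",
  training_frame = iris_hex,
  hidden = c(10, 10),           # two hidden layers of 10 units each
  epochs = 5)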
Arguments
x
A character vector containing the names of the predictors in the model.
y
The name of the response variable in the model.
training_frame
An H2OFrame object containing the variables in the model.
model_id
(Optional) The unique id assigned to the resulting model. If
none is given, an id will automatically be generated.
overwrite_with_best_model
Logical. If TRUE, overwrite the final model with the best model found during training. Defaults to TRUE.
n_folds
(Optional) Number of folds for cross-validation. If n_folds >= 2, then validation_frame must remain empty (see the cross-validation sketch under Examples below).
validation_frame
(Optional) An H2OFrame object indicating the validation dataset used to construct the confusion matrix. If left blank, this defaults to the training data when n_folds = 0.
checkpoint
"Model checkpoint (either key or H2ODeepLearningModel) to resume training with."
autoencoder
Enable auto-encoder for model building (see the autoencoder sketch under Examples below).
use_all_factor_levels
Logical. Use all factor levels of categorical variables. Otherwise, the first factor level is omitted (without loss of accuracy). Useful for variable importances, and auto-enabled for autoencoder.
activation
A string indicating the activation function to use. Must be one of: "Tanh", "TanhWithDropout", "Rectifier", "RectifierWithDropout", "Maxout", or "MaxoutWithDropout".
hidden
Hidden layer sizes (e.g. c(100,100))
epochs
How many times the dataset should be iterated (streamed); can be fractional.
train_samples_per_iteration
Number of training samples (globally) per MapReduce iteration. Special values are 0 (one epoch), -1 (all available data, e.g., replicated training data), and -2 (auto-tuning; the default).
seed
Seed for random numbers (affects sampling). Note: only reproducible when running single-threaded.
adaptive_rate
Logical. Adaptive learning rate (ADADELTA).
rho
Adaptive learning rate time decay factor (similarity to prior updates)
epsilon
Adaptive learning rate parameter, similar to learn rate annealing during the initial training phase. Typical values are between 1.0e-10 and 1.0e-4.
rate
Learning rate (higher => less stable, lower => slower convergence)
rate_annealing
Learning rate annealing: $(rate)/(1 + rate_annealing*samples)$
rate_decay
Learning rate decay factor between layers (N-th layer: $rate * rate_decay^(N - 1)$)
momentum_start
Initial momentum at the beginning of training (try 0.5)
momentum_ramp
Number of training samples for which momentum increases
momentum_stable
Final momentum after the ramp is over (try 0.99)
nesterov_accelerated_gradient
Logical. Use Nesterov accelerated gradient (recommended).
input_dropout_ratio
A fraction of the features for each training row to be omitted from
training in order to improve generalization (dimension sampling).
hidden_dropout_ratios
Hidden layer dropout ratios (can improve generalization). Specify one value per hidden layer; defaults to 0.5.
l1
L1 regularization (can add stability and improve generalization; causes many weights to become 0)
l2
L2 regularization (can add stability and improve generalization; causes many weights to be small)
max_w2
Constraint for squared sum of incoming weights per unit (e.g. Rectifier)
initial_weight_distribution
Can be "Uniform", "UniformAdaptive", or "Normal"
initial_weight_scale
Uniform: -value ... value; Normal: stddev
loss
Loss function: Automatic, CrossEntropy (for classification only), MeanSquare, Absolute
(experimental) or Huber (experimental)
score_interval
Shortest time interval (in secs) between model scoring
score_training_samples
Number of training set samples for scoring (0 for all)
score_validation_samples
Number of validation set samples for scoring (0 for all)
score_duty_cycle
Maximum duty cycle fraction for scoring (lower: more training, higher:
more scoring)
classification_stop
Stopping criterion for classification error fraction on training data
(-1 to disable)
regression_stop
Stopping criterion for regression error (MSE) on training data (-1 to
disable)
quiet_mode
Enable quiet mode for less output to standard output
max_confusion_matrix_size
Max. size (number of classes) for confusion matrices to be shown
max_hit_ratio_k
Max number (top K) of predictions to use for hit ratio computation (for multi-class only; 0 to disable)
balance_classes
Balance training data class counts via over/under-sampling (for imbalanced
data)
class_sampling_factors
Desired over/under-sampling ratios per class (in lexicographic
order). If not specified, sampling factors will be automatically computed to obtain class
balance during training. Requires balance_classes.
max_after_balance_size
Maximum relative size of the training data after balancing class
counts (can be less than 1.0)
score_validation_sampling
Method used to sample the validation dataset for scoring
diagnostics
Enable diagnostics for hidden layers
variable_importances
Compute variable importances for input features (Gedeon method); can be slow for large networks
fast_mode
Enable fast mode (minor approximations in back-propagation)
ignore_const_cols
Ignore constant columns (no information can be gained anyway)
force_load_balance
Force extra load balancing to increase training speed for small
datasets (to keep all cores busy)
replicate_training_data
Replicate the entire training dataset onto every node for faster
training
single_node_mode
Run on a single node for fine-tuning of model parameters
shuffle_training_data
Enable shuffling of training data (recommended if training data is replicated and train_samples_per_iteration is close to $numRows*numNodes$)
sparse
Sparse data handling (Experimental)
col_major
Use a column-major weight matrix for the input layer. Can speed up forward propagation, but might slow down backpropagation (Experimental)
average_activation
Average activation for sparse auto-encoder (Experimental)
sparsity_beta
Sparsity regularization (Experimental)
max_categorical_features
Max. number of categorical features, enforced via hashing (Experimental)
reproducible
Force reproducibility on small data (will be slow - only uses 1 thread)
export_weights_and_biases
Whether to export neural network weights and biases to H2OFrames
...
Extra parameters to pass on to functions (not implemented).
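Examples

Hedged sketches of two argument combinations referenced above, reusing the iris_hex frame from the earlier sketch (assumptions: a running H2O cluster, and argument names as given in this signature):

# Cross-validation: set n_folds >= 2 and leave validation_frame unset
cv_model <- h2o.deeplearning(
  x = c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
  y = "Species",
  training_frame = iris_hex,
  n_folds = 5,
  hidden = c(20, 20),
  epochs = 10)

# Unsupervised autoencoder: no response column; the network learns to
# reconstruct its inputs through a small bottleneck layer
ae_model <- h2o.deeplearning(
  x = c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
  training_frame = iris_hex,
  autoencoder = TRUE,
  activation = "Tanh",
  hidden = c(5),              # bottleneck layer of 5 units
  epochs = 10)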