Also known as wide-n-deep estimators, these are estimators for TensorFlow Linear and DNN joined models for regression or classification.
dnn_linear_combined_regressor(model_dir = NULL,
  linear_feature_columns = NULL, linear_optimizer = "Ftrl",
  dnn_feature_columns = NULL, dnn_optimizer = "Adagrad",
  dnn_hidden_units = NULL, dnn_activation_fn = "relu",
  dnn_dropout = NULL, label_dimension = 1L, weight_column = NULL,
  input_layer_partitioner = NULL, config = NULL)

dnn_linear_combined_classifier(model_dir = NULL,
  linear_feature_columns = NULL, linear_optimizer = "Ftrl",
  dnn_feature_columns = NULL, dnn_optimizer = "Adagrad",
  dnn_hidden_units = NULL, dnn_activation_fn = "relu",
  dnn_dropout = NULL, n_classes = 2L, weight_column = NULL,
  label_vocabulary = NULL, input_layer_partitioner = NULL,
  config = NULL)
Directory in which to save the model parameters, graph, and so on. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
The feature columns used by the linear (wide) part of the model.
Either the name of the optimizer to be used when training the model, or a TensorFlow optimizer instance. Defaults to the FTRL optimizer.
The feature columns used by the neural network (deep) part of the model.
Either the name of the optimizer to be used when training the model, or a TensorFlow optimizer instance. Defaults to the Adagrad optimizer.
An integer vector indicating the number of hidden units in each layer. All layers are fully connected. For example, c(64, 32) means the first layer has 64 nodes and the second layer has 32 nodes.
The activation function to apply to each layer. This can either be an actual activation function (e.g. tf$nn$relu) or the name of an activation function (e.g. "relu"). Defaults to the "relu" activation function. See https://www.tensorflow.org/api_guides/python/nn#Activation_Functions for documentation on the set of activation functions available in TensorFlow.
When not NULL, the probability that a given coordinate will be dropped out.
Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
A string, or a numeric column created by column_numeric(), defining the feature column representing weights. It is used to down-weight or boost examples during training, and is multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the features argument. If it is a numeric column, the raw tensor is fetched by the key weight_column$key, then weight_column$normalizer_fn is applied to it to obtain the weight tensor.
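As a sketch, the two forms that weight_column accepts might look like this (the column name "weight" and the normalizer function are illustrative, not part of the documented API):

```r
library(tfestimators)

# String form: during training, the weight tensor is fetched from
# features[["weight"]] supplied by the input function.
w_str <- "weight"

# Numeric-column form: the raw tensor is fetched by weight_column$key
# ("weight"), then weight_column$normalizer_fn is applied to it to
# produce the final weight tensor.
w_col <- column_numeric("weight", normalizer_fn = function(x) x / 100)
```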
An optional partitioner for the input layer. Defaults to min_max_variable_partitioner with min_slice_size 64 << 20.
A run configuration created by run_config(), used to configure the runtime settings.
The number of label classes.
A list of strings representing possible label values. If given, labels must be of string type and take values in label_vocabulary. If not given, labels are assumed to be already encoded, as an integer or float within [0, 1] for n_classes == 2, or as integer values in {0, 1, ..., n_classes - 1} for n_classes > 2. An error is raised if the vocabulary is not provided and the labels are strings.
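A minimal end-to-end sketch of the regressor (the data frame, feature names, and hyperparameter values here are illustrative, and a working TensorFlow installation is assumed):

```r
library(tfestimators)

# Toy data: one feature for the wide (linear) part and one for the deep part.
df <- data.frame(
  x_wide = runif(100),
  x_deep = runif(100),
  y = rnorm(100)
)

# Wide-n-deep regressor: linear model over x_wide joined with a
# two-layer DNN over x_deep.
model <- dnn_linear_combined_regressor(
  linear_feature_columns = feature_columns(column_numeric("x_wide")),
  dnn_feature_columns = feature_columns(column_numeric("x_deep")),
  dnn_hidden_units = c(16, 8),
  dnn_dropout = 0.2
)

# input_fn() maps data frame columns to features and the response.
model %>% train(
  input_fn(df, features = c(x_wide, x_deep), response = y),
  steps = 10
)
```

The classifier is constructed the same way, substituting n_classes (and optionally label_vocabulary) for label_dimension.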
Other canned estimators: boosted_trees_estimators, dnn_estimators, linear_estimators