eval_spec

EvalSpec combines details of evaluation of the trained model, as well as its
export, for use with train_and_evaluate(). Evaluation consists of computing
metrics to judge the performance of the trained model. Export writes out the
trained model onto external storage.
eval_spec(
input_fn,
steps = 100,
name = NULL,
hooks = NULL,
exporters = NULL,
start_delay_secs = 120,
throttle_secs = 600
)
input_fn: Evaluation input function returning a tuple of: features - Tensor or
dictionary of string feature name to Tensor; labels - Tensor or dictionary of
Tensor with labels.
steps: Positive number of steps for which to evaluate the model. If NULL,
evaluates until input_fn raises an end-of-input exception.
name: Name of the evaluation if the user needs to run multiple evaluations on
different data sets. Metrics for different evaluations are saved in separate
folders and appear separately in TensorBoard.
hooks: List of session run hooks to run during evaluation.
exporters: List of Exporters, or a single one, or NULL. exporters will be
invoked after each evaluation.
start_delay_secs: Start evaluating after waiting for this many seconds.
throttle_secs: Do not re-evaluate unless the last evaluation was started at
least this many seconds ago. Evaluation does not occur if no new checkpoints
are available; hence, this is a minimum interval, not a guaranteed one.
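A minimal sketch of how these arguments fit into a train_and_evaluate() call.
The estimator `model` and the input functions `train_input` and `eval_input`
are hypothetical placeholders assumed to have been created elsewhere (e.g.
with an estimator constructor and the package's input_fn() helper):

```r
library(tfestimators)

# Sketch only: `model`, `train_input`, and `eval_input` are assumed
# to exist; they are not defined in this documentation page.
train_and_evaluate(
  model,
  train_spec = train_spec(input_fn = train_input, max_steps = 1000),
  eval_spec  = eval_spec(
    input_fn = eval_input,
    steps = 100,             # evaluate for 100 steps per evaluation run
    start_delay_secs = 120,  # wait 2 minutes before the first evaluation
    throttle_secs = 600      # at most one evaluation every 10 minutes
  )
)
```

Because throttle_secs is a minimum interval, evaluations here run at most once
every 10 minutes, and only when a new checkpoint is available.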
Other training methods: train_and_evaluate.tf_estimator(), train_spec()