A basic Adam optimizer that includes "correct" L2 weight decay.
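A minimal plain-R sketch (no TensorFlow required) of what "correct" (decoupled) weight decay means here, following the formulation used in the original BERT optimizer this is based on: the decay term is added to the parameter update directly rather than folded into the gradient or the Adam moment estimates, and no bias correction is applied. All values below are illustrative assumptions, not defaults of this package.

  param <- c(0.5, -1.2)   # toy parameter vector
  grad  <- c(0.1,  0.3)   # toy gradient
  m <- c(0, 0); v <- c(0, 0)
  lr <- 0.01; beta_1 <- 0.9; beta_2 <- 0.999; epsilon <- 1e-6; wd <- 0.01

  m <- beta_1 * m + (1 - beta_1) * grad       # first moment estimate
  v <- beta_2 * v + (1 - beta_2) * grad^2     # second moment estimate
  update <- m / (sqrt(v) + epsilon) + wd * param  # decay applied to the update, not the gradient
  param <- param - lr * update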
AdamWeightDecayOptimizer(
  learning_rate,
  weight_decay_rate = 0,
  beta_1 = 0.9,
  beta_2 = 0.999,
  epsilon = 1e-06,
  exclude_from_weight_decay = NULL,
  name = "AdamWeightDecayOptimizer"
)
learning_rate: Numeric Tensor (single element) or numeric scalar; the learning rate.
weight_decay_rate: Numeric; the weight decay rate.
beta_1, beta_2: Numeric; the Adam moment-decay parameters.
epsilon: Numeric; a tiny number that caps the update size by avoiding division by even smaller numbers.
exclude_from_weight_decay: Character; vector of parameter names to exclude from weight decay (illustrated in the sketch after this list).
name: Character; the name of the constructed object.
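A hypothetical construction sketch showing the full argument set. The specific values and the parameter-name patterns passed to exclude_from_weight_decay are illustrative assumptions, not defaults of this package.

  optimizer <- AdamWeightDecayOptimizer(
    learning_rate = 2e-5,
    weight_decay_rate = 0.01,
    beta_1 = 0.9,
    beta_2 = 0.999,
    epsilon = 1e-6,
    exclude_from_weight_decay = c("LayerNorm", "layer_norm", "bias")
  )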
An object of class "AdamWeightDecayOptimizer", which is a (hacky) modification of the tf.train.Optimizer class.
Inherits from class tf.train.Optimizer. https://devdocs.io/tensorflow~python/tf/train/optimizer
# NOT RUN {
with(tensorflow::tf$variable_scope("examples",
  reuse = tensorflow::tf$AUTO_REUSE
), {
  optimizer <- AdamWeightDecayOptimizer(learning_rate = 0.01)
})
# }
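A further hypothetical sketch (not run), extending the example above. It assumes the returned object supports the standard tf.train.Optimizer training methods it inherits (minimize(), via compute_gradients() and apply_gradients()) in TensorFlow 1.x graph mode; the toy variable and loss are not part of this package.

# NOT RUN {
w <- tensorflow::tf$Variable(0, dtype = tensorflow::tf$float32, name = "w")
loss <- tensorflow::tf$square(w - 3)
optimizer <- AdamWeightDecayOptimizer(
  learning_rate = 0.01,
  weight_decay_rate = 0.01
)
train_op <- optimizer$minimize(loss)
# }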