Implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function passed as the
activation argument, kernel is a weights matrix created by the layer, and
bias is a bias vector created by the layer (only applicable if use_bias is
TRUE). Note: if the input to the layer has a rank greater than 2, then
it is flattened prior to the initial dot product with kernel.
layer_dense(
  object,
  units,
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
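As a sketch of typical usage (assuming the keras R package is installed and a backend is configured), a dense layer is usually composed into a model with the pipe; the first layer declares input_shape, while later layers infer their input dimensionality:

```r
library(keras)

# Minimal sequential model with two dense layers.
# The first layer needs input_shape because it is the first in the model;
# the second layer infers its input size from the previous layer's units.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(32)) %>%
  layer_dense(units = 10, activation = "softmax", name = "output")

summary(model)
```

Passing `activation = NULL` (the default) leaves the layer linear, equivalent to a(x) = x.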
object: Model or layer object

units: Positive integer, dimensionality of the output space.

activation: Name of activation function to use. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).

use_bias: Whether the layer uses a bias vector.

kernel_initializer: Initializer for the kernel weights matrix.

bias_initializer: Initializer for the bias vector.

kernel_regularizer: Regularizer function applied to the kernel weights matrix.

bias_regularizer: Regularizer function applied to the bias vector.

activity_regularizer: Regularizer function applied to the output of the layer (its "activation").

kernel_constraint: Constraint function applied to the kernel weights matrix.

bias_constraint: Constraint function applied to the bias vector.

input_shape: Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.

batch_input_shape: Shapes, including the batch size. For instance, batch_input_shape=c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. batch_input_shape=list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.

batch_size: Fixed batch size for layer.

dtype: The data type expected by the input, as a string (e.g. "float32", "float64", "int32").

name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.

trainable: Whether the layer weights will be updated during training.

weights: Initial weights for layer.
Input shape: nD tensor with shape:
(batch_size, ..., input_dim). The most
common situation would be a 2D input with shape
(batch_size, input_dim).

Output shape: nD tensor with shape:
(batch_size, ..., units). For
instance, for a 2D input with shape
(batch_size, input_dim), the output
would have shape (batch_size, units).
Other core layers: