
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization.
nn_layer_norm(normalized_shape, eps = 1e-05, elementwise_affine = TRUE)
normalized_shape: (int or list) input shape from an expected input of size
(* x normalized_shape[1] x normalized_shape[2] x ... x normalized_shape[length(normalized_shape)]).
If a single integer is used, it is treated as a singleton list, and this module
will normalize over the last dimension, which is expected to be of that size.
eps: a value added to the denominator for numerical stability. Default: 1e-5.
elementwise_affine: a boolean value that, when set to TRUE, gives this module
learnable per-element affine parameters initialized to ones (for weights)
and zeros (for biases). Default: TRUE.
Shape:
Input: (N, *)
Output: (N, *) (same shape as input)
The mean and standard deviation are calculated separately over the last
D dimensions, where D is the length of normalized_shape; these dimensions
must match the shape specified by normalized_shape. The transformation is

y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta

where gamma and beta are learnable affine transform parameters of shape
normalized_shape if elementwise_affine is TRUE.
The standard deviation is calculated via the biased estimator, equivalent to
torch_var(input, unbiased = FALSE).
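As a sketch (assuming the torch R package is installed), the biased-variance behaviour described above can be checked by comparing nn_layer_norm, with affine parameters disabled, against a manual computation:

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_randn(4, 10)

  # Layer norm over the last dimension, without learnable affine parameters
  m <- nn_layer_norm(10, elementwise_affine = FALSE)
  out <- m(x)

  # Manual computation using the biased variance estimator (unbiased = FALSE)
  mu <- x$mean(dim = -1, keepdim = TRUE)
  v <- torch_var(x, dim = -1, unbiased = FALSE, keepdim = TRUE)
  manual <- (x - mu) / torch_sqrt(v + 1e-5)

  # The two results should agree to numerical precision
  stopifnot(torch_allclose(out, manual, atol = 1e-5))
}
```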
if (torch_is_installed()) {
  input <- torch_randn(20, 5, 10, 10)

  # With learnable parameters (normalize over the last three dimensions)
  m <- nn_layer_norm(input$size()[-1])

  # Without learnable parameters
  m <- nn_layer_norm(input$size()[-1], elementwise_affine = FALSE)

  # Normalize over the last two dimensions
  m <- nn_layer_norm(c(10, 10))

  # Normalize over the last dimension of size 10
  m <- nn_layer_norm(10)

  # Activating the module
  output <- m(input)
}