relu(...): Applies the rectified linear unit activation function.
elu(...): Exponential Linear Unit.
selu(...): Scaled Exponential Linear Unit (SELU).
hard_sigmoid(...): Hard sigmoid activation function.
linear(...): Linear activation function (pass-through).
sigmoid(...): Sigmoid activation function, sigmoid(x) = 1 / (1 + exp(-x)).
softmax(...): Softmax converts a vector of values to a probability distribution.
softplus(...): Softplus activation function, softplus(x) = log(exp(x) + 1).
softsign(...): Softsign activation function, softsign(x) = x / (abs(x) + 1).
tanh(...): Hyperbolic tangent activation function.
exponential(...): Exponential activation function.
gelu(...): Applies the Gaussian error linear unit (GELU) activation function.
swish(...): Swish activation function, swish(x) = x * sigmoid(x).
activation_relu(x, alpha = 0, max_value = NULL, threshold = 0)
activation_elu(x, alpha = 1)
activation_selu(x)
activation_hard_sigmoid(x)
activation_linear(x)
activation_sigmoid(x)
activation_softmax(x, axis = -1)
activation_softplus(x)
activation_softsign(x)
activation_tanh(x)
activation_exponential(x)
activation_gelu(x, approximate = FALSE)
activation_swish(x)
Tensor with the same shape and dtype as x.
x: Tensor.
alpha: Alpha value (the slope below the threshold for activation_relu(); controls the negative saturation value for activation_elu()).
max_value: Maximum value (saturation threshold) returned by activation_relu().
threshold: Threshold value for thresholded activation.
axis: Integer, axis along which the softmax normalization is applied.
approximate: A bool, whether to enable the (tanh-based) approximation in activation_gelu().
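A minimal sketch, assuming the keras R package with a TensorFlow backend; k_constant() is used here only to build example tensors:

library(keras)

# Example values spanning negatives, zero, and large positives
x <- k_constant(c(-10, -1, 0, 2, 10))

# Plain ReLU: negative values are clipped to 0
activation_relu(x)

# alpha gives a slope below the threshold, max_value caps the output at 5
activation_relu(x, alpha = 0.5, max_value = 5, threshold = 1)

# Softmax along the last axis turns a row of logits into probabilities
logits <- k_constant(matrix(c(1, 2, 3), nrow = 1))
activation_softmax(logits, axis = -1)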
Activation functions can either be used through layer_activation(), or
through the activation argument supported by all forward layers.
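A minimal sketch of both usage modes, assuming the keras R package and its standard sequential-model API:

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 64, input_shape = c(784)) %>%
  # Standalone activation layer applied to the previous layer's output
  layer_activation("relu") %>%
  # Equivalent style: pass the activation by name via the activation argument
  layer_dense(units = 10, activation = "softmax")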
activation_selu() should be used together with the initialization "lecun_normal".
activation_selu() should be used together with the dropout variant "AlphaDropout".
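A minimal sketch of a self-normalizing stack, assuming layer_alpha_dropout() as the keras wrapper for "AlphaDropout" and "lecun_normal" as the matching kernel initializer:

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "selu",
              kernel_initializer = "lecun_normal", input_shape = c(20)) %>%
  layer_alpha_dropout(rate = 0.1) %>%
  layer_dense(units = 64, activation = "selu",
              kernel_initializer = "lecun_normal") %>%
  layer_alpha_dropout(rate = 0.1) %>%
  layer_dense(units = 1, activation = "sigmoid")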
activation_swish(): Searching for Activation Functions
activation_gelu(): Gaussian Error Linear Units (GELUs)
activation_selu(): Self-Normalizing Neural Networks
activation_elu(): Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)