Gated Recurrent Unit recurrent layer. Implemented as an unrolled computation graph (BPTT).
ggml_layer_gru(
  model,
  units,
  return_sequences = FALSE,
  activation = "tanh",
  recurrent_activation = "sigmoid",
  input_shape = NULL,
  name = NULL,
  trainable = TRUE
)
Updated model or a new ggml_tensor_node.
model: A ggml_sequential_model or ggml_tensor_node.
units: Integer, number of hidden units.
return_sequences: Logical; if TRUE, return the hidden state at every timestep, otherwise (the default) only the last hidden state.
activation: Activation for the candidate hidden state. Default "tanh".
recurrent_activation: Activation for the update (z) and reset (r) gates. Default "sigmoid".
input_shape: Input shape c(seq_len, input_size) -- required for the first layer only.
name: Optional layer name.
trainable: Logical; if TRUE, the layer's parameters are updated during training.
W_zh [input_size, 2*units] -- input kernel for the z and r gates.
U_zh [units, 2*units] -- recurrent kernel for z and r.
b_zh [2*units] -- bias for z and r.
W_n [input_size, units] -- input kernel for the candidate state.
U_n [units, units] -- recurrent kernel for the candidate state.
b_n [units] -- bias for the candidate state.
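The shapes above follow the standard GRU recurrence with fused z/r kernels. A minimal NumPy sketch (illustrative only, not this package's implementation; it assumes the common update rule h' = (1 - z) * n + z * h) that checks one unrolled step per timestep:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W_zh, U_zh, b_zh, W_n, U_n, b_n, units):
    # z (update) and r (reset) come from one fused matmul, matching the
    # fused [.., 2*units] kernels documented above.
    zr = sigmoid(x @ W_zh + h @ U_zh + b_zh)
    z, r = zr[:units], zr[units:]
    # Candidate hidden state uses the reset-gated previous state.
    n = np.tanh(x @ W_n + (r * h) @ U_n + b_n)
    # Interpolate between candidate and previous state.
    return (1.0 - z) * n + z * h

input_size, units, seq_len = 32, 64, 10
rng = np.random.default_rng(0)
W_zh = rng.standard_normal((input_size, 2 * units)) * 0.1
U_zh = rng.standard_normal((units, 2 * units)) * 0.1
b_zh = np.zeros(2 * units)
W_n = rng.standard_normal((input_size, units)) * 0.1
U_n = rng.standard_normal((units, units)) * 0.1
b_n = np.zeros(units)

h = np.zeros(units)
for x in rng.standard_normal((seq_len, input_size)):
    h = gru_step(x, h, W_zh, U_zh, b_zh, W_n, U_n, b_n, units)
print(h.shape)  # (64,) -- the last hidden state, as with return_sequences = FALSE
```

Because the candidate is a tanh output and h starts at zero, every state is a convex combination of values in (-1, 1), so the hidden state stays bounded.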
# \donttest{
model <- ggml_model_sequential() |>
  ggml_layer_gru(64L, input_shape = c(10L, 32L)) |>
  ggml_layer_dense(10L, activation = "softmax")
# }