
sits (version 1.1.0)

.torch_temporal_attention_encoder: Torch module for temporal attention encoder

Description

Defines a torch module for temporal attention encoding, inspired by the work of Vaswani et al. (2017). Since attention models contain no recurrence and no convolution, the model must inject information about the relative position of the tokens in the sequence; Vaswani et al. do this with sine and cosine functions of different frequencies.
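As an illustration, the sketch below computes such a sinusoidal positional encoding. The function name is illustrative and not part of the sits API; the base period of 1000 is assumed from Garnot's reference implementation (Vaswani et al. use 10000).

# Minimal sketch of sinusoidal positional encoding (not the sits
# implementation); 'positions' are offsets along the timeline, and
# the base period 1000 is assumed from Garnot's reference code.
positional_encoding <- function(positions, dim_encoder = 128) {
  # one row per position, one column per encoding dimension
  angles <- outer(
    positions,
    seq_len(dim_encoder),
    function(pos, j) pos / 1000^(2 * ((j - 1) %/% 2) / dim_encoder)
  )
  even <- seq(1, dim_encoder, by = 2)  # columns 1, 3, ... hold sines
  odd  <- seq(2, dim_encoder, by = 2)  # columns 2, 4, ... hold cosines
  angles[, even] <- sin(angles[, even])
  angles[, odd]  <- cos(angles[, odd])
  angles
}

For instance, positional_encoding(c(0, 30, 60)) returns one 128-dimensional encoding per date of a three-date timeline.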

This function is based on the paper by Vivien Garnot referenced below and on the code available on GitHub at https://github.com/VSainteuf/pytorch-psetae.

We also used the code made available by Maja Schneider in her work with Marco Körner, referenced below and available at https://github.com/maja601/RC2020-psetae.

If you use this method, please cite Garnot's and Schneider's work.

Usage

.torch_temporal_attention_encoder(
  timeline,
  dim_encoder = 128,
  n_heads = 4,
  input_out_enc_mlp = 512,
  hidden_nodes_out_enc_mlp = c(128, 128)
)
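
Since this is an internal (dot-prefixed) function, it is not exported from sits; the hedged sketch below shows how the module constructor might be invoked via the ::: operator, assuming the torch package is installed. The timeline values are purely illustrative.

# Illustrative call only: internal functions are not part of the
# public sits API and may change without notice.
library(sits)
library(torch)

# a hypothetical timeline of twelve monthly observations
timeline <- seq(as.Date("2020-01-01"), as.Date("2020-12-01"), by = "month")

tae <- sits:::.torch_temporal_attention_encoder(
  timeline    = timeline,
  dim_encoder = 128,
  n_heads     = 4
)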

Value

A torch module (linear tensor block) implementing the temporal attention encoder.

Arguments

timeline

Timeline of input time series.

dim_encoder

Dimension of the positional encoder.

n_heads

Number of attention heads.

input_out_enc_mlp

Input dimension of the multi-layer perceptron used to encode the output (MLP3 in Garnot's paper).

hidden_nodes_out_enc_mlp

Hidden nodes of the MLP used for output encoding (MLP3 in Garnot's paper); see the sketch below.
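
To make the roles of input_out_enc_mlp and hidden_nodes_out_enc_mlp concrete, the sketch below builds an output-encoding MLP with the default values (512 input features, two hidden layers of 128 nodes each). The linear/batch-norm/ReLU layering is assumed from Garnot's reference PyTorch code, and the object name is illustrative.

# Sketch of the output-encoding MLP (MLP3 in Garnot's paper) with the
# default dimensions; the layer composition is assumed from the
# reference implementation, not copied from sits.
library(torch)

out_enc_mlp <- nn_sequential(
  nn_linear(512, 128),   # input_out_enc_mlp -> first hidden layer
  nn_batch_norm1d(128),
  nn_relu(),
  nn_linear(128, 128),   # second hidden layer
  nn_batch_norm1d(128),
  nn_relu()
)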

Author

Charlotte Pelletier, charlotte.pelletier@univ-ubs.fr

Gilberto Camara, gilberto.camara@inpe.br

Rolf Simoes, rolf.simoes@inpe.br

Felipe Souza, lipecaso@gmail.com

References

Vivien Garnot, Loic Landrieu, Sebastien Giordano, and Nesrine Chehata, "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12322-12331. DOI: 10.1109/CVPR42600.2020.01234

Maja Schneider and Marco Körner, "[Re] Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention", ReScience C 7 (2), 2021. DOI: 10.5281/zenodo.4835356