sits (version 1.1.0)

.torch_positional_encoder: Torch module for positional encoder

Description

Defines a torch module for positional encoding, based on the concepts of Vaswani et al. (2017) and Garnot et al. (2020).

This function is part of the implementation of the paper by Vivien Garnot referenced below. We used the code made available by Maja Schneider in her work with Marco Körner, referenced below and available at https://github.com/maja601/RC2020-psetae.
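
For reference (not necessarily the exact formulation used in sits), the standard sinusoidal encoding of Vaswani et al. (2017) is

PE(p, 2i)     = sin(p / 10000^(2i / d))
PE(p, 2i + 1) = cos(p / 10000^(2i / d))

where p is the position (for a satellite image time series, typically the number of days elapsed since the start of the timeline) and d is dim_encoder. Garnot and Landrieu adapt this scheme to irregularly sampled time series; see the package source for the exact constants used.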

Usage

.torch_positional_encoding(timeline, dim_encoder = 128)

Value

A tensor block.

Arguments

timeline

Timeline of input time series.

dim_encoder

Dimension of the positional encoder.
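
The sketch below shows how a module with this interface could be written with the torch package for R. It is a minimal, hypothetical example of a sinusoidal positional encoder indexed by the timeline dates; it is not the sits implementation, and the name positional_encoder is used only for illustration.

library(torch)

# Minimal sketch (not the sits implementation): sinusoidal positional
# encoding indexed by days elapsed since the first date of the timeline.
positional_encoder <- nn_module(
  classname = "positional_encoder",
  initialize = function(timeline, dim_encoder = 128) {
    # positions: days elapsed since the first acquisition date
    positions <- as.numeric(as.Date(timeline) - as.Date(timeline[1]))
    n_times <- length(positions)
    # frequencies for each (sin, cos) pair of encoder dimensions
    div_term <- torch_exp(
      torch_arange(start = 0, end = dim_encoder - 1, step = 2) *
        (-log(10000) / dim_encoder)
    )
    pos <- torch_tensor(positions, dtype = torch_float())$unsqueeze(2)
    pe <- torch_zeros(n_times, dim_encoder)
    pe[, seq(1, dim_encoder, by = 2)] <- torch_sin(pos * div_term)
    pe[, seq(2, dim_encoder, by = 2)] <- torch_cos(pos * div_term)
    # register the encoding as a non-trainable buffer
    self$pe <- nn_buffer(pe)
  },
  forward = function(x) {
    # x: (batch, n_times, dim_encoder); add the encoding to every sample
    x + self$pe$unsqueeze(1)
  }
)

# Usage (hypothetical): ten dates, 16 days apart, encoder dimension 128
dates <- seq(as.Date("2020-01-01"), by = "16 days", length.out = 10)
enc <- positional_encoder(timeline = dates, dim_encoder = 128)
out <- enc(torch_randn(4, 10, 128))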

Author

Charlotte Pelletier, charlotte.pelletier@univ-ubs.fr

Gilberto Camara, gilberto.camara@inpe.br

Rolf Simoes, rolf.simoes@inpe.br

Felipe Souza, lipecaso@gmail.com

References

Vivien Sainte Fare Garnot and Loic Landrieu, "Lightweight Temporal Self-Attention for Classifying Satellite Image Time Series", 2020. https://arxiv.org/abs/2007.00586

Schneider, Maja; Körner, Marco, "[Re] Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention", ReScience C 7(2), 2021.
