Implements a time-aware UNet convolutional neural network for spatial downscaling of gridded data. The time-aware UNet features an encoder-decoder architecture with skip connections, plus an optional temporal module for spatio-temporal applications.

Usage
unet(
coarse_data,
fine_data,
time_points = NULL,
val_coarse_data = NULL,
val_fine_data = NULL,
val_time_points = NULL,
cyclical_period = NULL,
cycle_onehot = FALSE,
cos_sin_transform = FALSE,
temporal_basis = c(9, 17, 37),
temporal_layers = c(32, 64, 128),
temporal_cnn_filters = c(8, 16),
temporal_cnn_kernel_sizes = list(c(3, 3), c(3, 3)),
initial_filters = c(16),
initial_kernel_sizes = list(c(3, 3)),
filters = c(32, 64, 128),
kernel_sizes = list(c(3, 3), c(3, 3), c(3, 3)),
use_batch_norm = FALSE,
dropout_rate = 0.2,
activation = "relu",
final_activation = "linear",
optimizer = "adam",
learning_rate = 0.001,
loss = "mse",
metrics = c(),
batch_size = 32,
epochs = 10,
start_from_model = NULL,
validation_split = 0,
normalize = TRUE,
callbacks = NULL,
seed = NULL,
verbose = 1
)

Value

List containing the trained model and associated components:
- Trained Keras model
- Mask for input data based on the missing values
- Mask for target data based on the missing values
- Minimum time point in the training data
- Maximum time point in the training data
- Cyclical period for time encoding
- Maximum season for time encoding
- Names of the axes in the input data
- Training history
Arguments

coarse_data: 3D array. The coarse-resolution input data in format (x, y, time).

fine_data: 3D array. The fine-resolution target data in format (x, y, time).

time_points: Numeric vector. Optional time points corresponding to each time step in the data.

val_coarse_data: An optional 3D array of coarse-resolution input data in format (x, y, time).

val_fine_data: An optional 3D array of fine-resolution target data in format (x, y, time).

val_time_points: An optional numeric vector of length n representing the time points of the validation samples.

cyclical_period: Numeric. Optional period for cyclical time encoding (e.g., 365 for yearly seasonality).

cycle_onehot: Logical. If TRUE, a one-hot encoded vector of temporal cycles is added as input to the temporal module. Default: FALSE.

cos_sin_transform: Logical. Whether to use a cosine-sine transformation for time features. Default: FALSE.

temporal_basis: A numeric vector specifying the temporal basis functions to use for time encoding. Default: c(9, 17, 37).

temporal_layers: A numeric vector specifying the number of units in each dense layer for time encoding. Default: c(32, 64, 128).

temporal_cnn_filters: A numeric vector specifying the number of filters in each convolutional layer for temporal feature processing. Default: c(8, 16).

temporal_cnn_kernel_sizes: A list of integer vectors specifying the kernel sizes for each convolutional layer in the temporal feature processing. Default: list(c(3, 3), c(3, 3)).

initial_filters: Integer vector. Number of filters in the initial convolutional layers. Default: c(16).

initial_kernel_sizes: List of integer vectors. Kernel sizes for the initial convolutional layers. Default: list(c(3, 3)).

filters: Integer vector. Number of filters in each convolutional layer. Default: c(32, 64, 128).

kernel_sizes: List of integer vectors. Kernel sizes for each convolutional layer. Default: list(c(3, 3), c(3, 3), c(3, 3)).

use_batch_norm: Logical. Whether to use batch normalization after convolutional layers. Default: FALSE.

dropout_rate: Numeric. Dropout rate for regularization. Default: 0.2.

activation: Character. Activation function for hidden layers. The options are listed at https://keras.io/api/layers/activations. Default: "relu".

final_activation: Character. Activation function for the output layer. The options are listed at https://keras.io/api/layers/activations. Default: "linear".

optimizer: Character or optimizer object passed to keras3::compile (see e.g. optimizer_adam). The options are listed at https://keras.io/api/optimizers. Default: "adam".

learning_rate: Numeric. Learning rate for the optimizer. Default: 0.001.

loss: Character or loss function passed to keras3::compile (see Loss). The options are listed at https://keras.io/api/losses. Default: "mse".

metrics: Optional character vector passed to keras3::compile. Metrics to track during training. The options are listed at https://keras.io/api/metrics. Default is an empty vector.

batch_size: Integer. Batch size for training. Default: 32.

epochs: Integer. Number of training epochs. Default: 10.

start_from_model: An optional pre-trained Keras model to continue training from. Default: NULL.

validation_split: Numeric. Fraction of data to use for validation. Default: 0.

normalize: Logical. Whether to normalize data before training. Default: TRUE.

callbacks: List. Keras callbacks for training (see Callback). Default: NULL.

seed: Integer. Random seed for reproducibility. Default: NULL.

verbose: Integer. Verbosity mode (0, 1, or 2). Default: 1.
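To build intuition for the cyclical encoding arguments above, the sketch below constructs the kind of one-hot cycle indicator that cycle_onehot adds as input to the temporal module. This is an illustration only, not the package's internal code; the period value 4 (e.g. quarterly data) is an assumed example.

```r
# Illustration (not the package's internal implementation): with
# cyclical_period = 4, each time point is mapped to its position
# within the cycle and then one-hot encoded.
time_points <- 1:8
period <- 4
pos <- ((time_points - 1) %% period) + 1  # cycle position: 1 2 3 4 1 2 3 4
onehot <- diag(period)[pos, ]             # 8 x 4 indicator matrix
onehot
```

Each row of the resulting matrix has exactly one 1, marking which phase of the cycle that time step falls in.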
Details

The UNet architecture (Ronneberger et al., 2015) is widely used in image processing tasks and has recently been adopted for spatial downscaling applications (Sha et al., 2020). The method implemented here consists of:
Initial Upscaling – Coarse-resolution inputs are first upsampled using bilinear interpolation to match the spatial dimensions of the fine-resolution target.
Initial Feature Extraction – Multiple convolutional layers extract low-level features before entering the encoder path.
Encoder Path – A sequence of convolutional blocks with max-pooling reduces spatial dimensions while increasing feature depth.
Decoder Path – Spatial resolution is recovered via bilinear upsampling and convolutional layers. Skip connections from the encoder help preserve fine-scale information.
Skip Connections – These link encoder and decoder layers at matching resolutions, improving gradient flow and retaining fine spatial structure.
Temporal Module (optional) – Time information can be incorporated through cosine–sine encoding, one-hot seasonal encoding, or radial-basis temporal features. These are passed through dense layers and reshaped to merge with the UNet bottleneck.
The function supports missing data via masking, optional normalization, validation data, and configurable UNet depth and width.
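For intuition on the temporal module, the cosine-sine encoding maps each time point onto the unit circle, so the end of a cycle lands next to its start. A minimal sketch (illustration only, not the package's internal code), assuming daily data with a yearly period:

```r
# Cosine-sine time encoding: with period = 365, day 365 receives
# features nearly identical to day 1, reflecting yearly seasonality.
time_points <- c(1, 91, 182, 274, 365)
period <- 365
cos_feat <- cos(2 * pi * time_points / period)
sin_feat <- sin(2 * pi * time_points / period)
# cos_feat[1] is close to cos_feat[5], and likewise for sin_feat,
# so the model sees adjacent seasons as adjacent inputs.
```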
Examples

# \donttest{
# Create tiny dummy data:
# Coarse grid: 8x8 → Fine grid: 16x16
nx_c <- 8
ny_c <- 8
nx_f <- 16
ny_f <- 16
n_t <- 5 # number of time steps (avoid `T`, which shadows TRUE)
# Coarse data:
coarse_data <- array(runif(nx_c * ny_c * n_t),
dim = c(nx_c, ny_c, n_t))
# Fine data:
fine_data <- array(runif(nx_f * ny_f * n_t),
dim = c(nx_f, ny_f, n_t))
# Optional time points
time_points <- 1:n_t
# Fit a tiny UNet (very small filters to keep the example fast)
model_obj <- unet(
coarse_data,
fine_data,
time_points = time_points,
filters = c(8, 16),
initial_filters = c(4),
epochs = 1,
batch_size = 4,
verbose = 0
)
# }