
echos (version 1.0.3)

tune_esn: Tune hyperparameters of an Echo State Network

Description

Tune hyperparameters of an Echo State Network (ESN) via time series cross-validation (i.e., rolling forecasting origin). The input series is split into n_split expanding-window train/test sets, each with a test set of size n_ahead. For each split and each hyperparameter combination (alpha, rho, tau), an ESN is trained via train_esn() and forecasts are generated via forecast_esn().
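The expanding-window scheme can be sketched in base R. This is an illustration only (make_splits is a hypothetical helper, not part of echos, and the package's exact indexing may differ): each split tests on the n_ahead observations following its training window, with the forecast origin shifted by n_ahead per split.

```r
# Sketch of expanding-window train/test splits (illustrative only).
make_splits <- function(n_obs, n_ahead, n_split) {
  lapply(seq_len(n_split), function(i) {
    test_end   <- n_obs - (n_split - i) * n_ahead
    test_start <- test_end - n_ahead + 1
    list(train = seq_len(test_start - 1),      # expanding training window
         test  = seq(test_start, test_end))    # next n_ahead observations
  })
}

splits <- make_splits(n_obs = 144, n_ahead = 12, n_split = 5)
sapply(splits, function(s) length(s$train))  # training windows grow by n_ahead
max(splits[[5]]$test)                        # last split tests through the final obs
```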

Usage

tune_esn(
  y,
  n_ahead = 12,
  n_split = 5,
  alpha = seq(0.1, 1, by = 0.1),
  rho = seq(0.1, 1, by = 0.1),
  tau = c(0.1, 0.2, 0.4),
  min_train = NULL,
  ...
)

Value

An object of class "tune_esn" (a list) with:

  • pars: A tibble with one row per hyperparameter combination and split. Columns include alpha, rho, tau, split, train_start, train_end, test_start, test_end, mse, mae, and id.

  • fcst: A numeric matrix of point forecasts with nrow(fcst) == nrow(pars) and ncol(fcst) == n_ahead.

  • actual: The original input series y (numeric vector), returned for convenience.
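A common next step is to average an error metric across splits and pick the best combination from pars. A minimal base-R sketch, using a mocked data frame in place of the returned tibble (column names follow the Value section above):

```r
# Mocked pars table: two (alpha, rho, tau) combinations, two splits each.
pars <- data.frame(
  alpha = rep(c(0.5, 1.0), each = 2),
  rho   = 1.0,
  tau   = 0.4,
  split = rep(1:2, times = 2),
  mse   = c(1.2, 1.5, 0.9, 1.1)
)

# Average the error metric across splits for each combination,
# then take the combination with the lowest mean MSE.
avg  <- aggregate(mse ~ alpha + rho + tau, data = pars, FUN = mean)
best <- avg[which.min(avg$mse), ]
best$alpha
```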

Arguments

y

Numeric vector containing the response variable (no missing values).

n_ahead

Integer value. The number of periods for forecasting (i.e., the forecast horizon).

n_split

Integer value. The number of rolling train/test splits.

alpha

Numeric vector. The candidate leakage rates (smoothing parameters).

rho

Numeric vector. The candidate spectral radii.

tau

Numeric vector. The candidate reservoir scaling values.

min_train

Integer value. Minimum training sample size for the first split.

...

Further arguments passed to train_esn() (except alpha, rho, and tau, which are set by the tuning grid).
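Note that the defaults imply a fairly large grid, since every (alpha, rho, tau) combination is evaluated on every split. A quick check of the number of fitted models under the default arguments (base R, no package needed):

```r
# Grid size implied by the default arguments of tune_esn().
alpha   <- seq(0.1, 1, by = 0.1)  # 10 candidate leakage rates
rho     <- seq(0.1, 1, by = 0.1)  # 10 candidate spectral radii
tau     <- c(0.1, 0.2, 0.4)       #  3 candidate scaling values
n_split <- 5

n_models <- length(alpha) * length(rho) * length(tau) * n_split
n_models  # one ESN fit per combination and split
```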

References

  • Häußer, A. (2026). Echo State Networks for Time Series Forecasting: Hyperparameter Sweep and Benchmarking. arXiv preprint arXiv:2602.03912, 2026. https://arxiv.org/abs/2602.03912

  • Jaeger, H. (2001). The “echo state” approach to analysing and training recurrent neural networks with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13.

  • Jaeger, H. (2002). Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach.

  • Lukoševičius, M. (2012). A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade: Second Edition, pages 659–686. Springer.

  • Lukoševičius, M. and Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149.

See Also

Other base functions: forecast_esn(), is.esn(), is.forecast_esn(), is.tune_esn(), plot.esn(), plot.forecast_esn(), plot.tune_esn(), print.esn(), summary.esn(), summary.tune_esn(), train_esn()

Examples

xdata <- as.numeric(AirPassengers)
fit <- tune_esn(
  y = xdata,
  n_ahead = 12,
  n_split = 5,
  alpha = c(0.5, 1),
  rho   = c(1.0),
  tau   = c(0.4),
  inf_crit = "bic"  # passed through ... to train_esn()
)
summary(fit)
plot(fit)
