Tune the hyperparameters of an Echo State Network (ESN) via
time series cross-validation (i.e., rolling forecast evaluation). The input
series is split into n_split expanding-window train/test sets, each with test
size n_ahead. For each split and each hyperparameter combination
(alpha, rho, tau), an ESN is trained via train_esn() and
forecasts are generated via forecast_esn().
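To make the expanding-window scheme concrete, the sketch below computes the train/test index boundaries for each split. It is shown in Python for illustration only; the exact split convention (test windows as consecutive n_ahead-blocks ending at the series end, training always starting at the first observation) is an assumption here, not necessarily tune_esn()'s internal rule.

```python
def expanding_splits(n, n_ahead=12, n_split=5):
    # Assumed convention: the k-th test window is the block of n_ahead
    # points ending (n_split - k) * n_ahead points before the series end;
    # training always starts at the first observation (expanding window).
    splits = []
    for k in range(1, n_split + 1):
        test_end = n - (n_split - k) * n_ahead  # exclusive end index
        test_start = test_end - n_ahead
        splits.append((0, test_start, test_start, test_end))
    return splits

# With 144 monthly observations (the length of AirPassengers):
for train_start, train_end, test_start, test_end in expanding_splits(144):
    print(train_start, train_end, test_start, test_end)
```

Each successive split extends the training window by n_ahead observations, so later splits train on more data while every test window has the same length.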
tune_esn(
  y,
  n_ahead = 12,
  n_split = 5,
  alpha = seq(0.1, 1, by = 0.1),
  rho = seq(0.1, 1, by = 0.1),
  tau = c(0.1, 0.2, 0.4),
  min_train = NULL,
  ...
)

An object of class "tune_esn" (a list) with:
pars: A tibble with one row per hyperparameter combination and split. Columns include
alpha, rho, tau, split, train_start, train_end, test_start,
test_end, mse, mae, and id.
fcst: A numeric matrix of point forecasts with nrow(fcst) == nrow(pars) and
ncol(fcst) == n_ahead.
actual: The original input series y (numeric vector), returned for convenience.
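Since pars holds one row per hyperparameter combination and split, its size (and the row count of fcst) follows directly from the grid. A quick sanity check with the default grids from the usage above, in Python for illustration:

```python
# Default candidate grids from the usage signature above
alpha = [round(0.1 * i, 1) for i in range(1, 11)]  # seq(0.1, 1, by = 0.1)
rho = [round(0.1 * i, 1) for i in range(1, 11)]
tau = [0.1, 0.2, 0.4]
n_split, n_ahead = 5, 12

# One row of `pars` per (alpha, rho, tau) combination and per split
n_rows = len(alpha) * len(rho) * len(tau) * n_split
print(n_rows)  # 10 * 10 * 3 * 5 = 1500
# `fcst` is then a matrix with n_rows rows and n_ahead (= 12) columns.
```

With the defaults, tune_esn() therefore fits 1500 models, which is worth keeping in mind before enlarging the grid.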
y: Numeric vector containing the response variable (no missing values).
n_ahead: Integer value. The number of periods to forecast (i.e., the forecast horizon).
n_split: Integer value. The number of rolling train/test splits.
alpha: Numeric vector. The candidate leakage rates (smoothing parameters).
rho: Numeric vector. The candidate spectral radii.
tau: Numeric vector. The candidate reservoir scaling values.
min_train: Integer value. Minimum training sample size for the first split.
...: Further arguments passed to train_esn() (except alpha, rho, and tau, which are set by the tuning grid).
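Because pars records mse and mae per combination and split, ranking combinations after tuning is a simple aggregation, e.g. averaging mse across splits. A minimal sketch of that selection step, in Python with made-up error values (this is not the package's actual selection code):

```python
from collections import defaultdict

# Hypothetical rows of `pars`: (alpha, rho, tau, split, mse)
rows = [
    (0.5, 1.0, 0.4, 1, 0.9), (0.5, 1.0, 0.4, 2, 1.1),
    (1.0, 1.0, 0.4, 1, 0.7), (1.0, 1.0, 0.4, 2, 0.8),
]

# Collect the error measure across splits per (alpha, rho, tau)
mse_by_combo = defaultdict(list)
for a, r, t, _split, mse in rows:
    mse_by_combo[(a, r, t)].append(mse)

# Pick the combination with the lowest mean MSE over all splits
best = min(mse_by_combo, key=lambda k: sum(mse_by_combo[k]) / len(mse_by_combo[k]))
print(best)  # (1.0, 1.0, 0.4)
```

Averaging over splits rewards combinations that forecast well across the whole series rather than on a single lucky window.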
Häußer, A. (2026). Echo State Networks for Time Series Forecasting: Hyperparameter Sweep and Benchmarking. arXiv preprint arXiv:2602.03912, 2026. https://arxiv.org/abs/2602.03912
Jaeger, H. (2001). The “echo state” approach to analysing and training recurrent neural networks with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13.
Jaeger, H. (2002). Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach.
Lukoševičius, M. (2012). A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade: Second Edition, pages 659–686. Springer.
Lukoševičius, M. and Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149.
Other base functions:
forecast_esn(),
is.esn(),
is.forecast_esn(),
is.tune_esn(),
plot.esn(),
plot.forecast_esn(),
plot.tune_esn(),
print.esn(),
summary.esn(),
summary.tune_esn(),
train_esn()
# Monthly totals of international airline passengers, 1949-1960
xdata <- as.numeric(AirPassengers)

# Tune over a small grid of leakage rates; rho and tau are held fixed
fit <- tune_esn(
  y = xdata,
  n_ahead = 12,
  n_split = 5,
  alpha = c(0.5, 1),
  rho = c(1.0),
  tau = c(0.4),
  inf_crit = "bic" # passed on to train_esn()
)

summary(fit)
plot(fit)