prophet (version 0.4)

performance_metrics: Compute performance metrics from cross-validation results.

Description

Computes a suite of performance metrics on the output of cross-validation. By default the following metrics are included:

'mse': mean squared error
'rmse': root mean squared error
'mae': mean absolute error
'mape': mean absolute percent error
'coverage': coverage of the upper and lower intervals

Usage

performance_metrics(df, metrics = NULL, rolling_window = 0.1)

Arguments

df

The dataframe returned by cross_validation.

metrics

A vector of performance metric names to compute. If not provided, defaults to c('mse', 'rmse', 'mae', 'mape', 'coverage').

rolling_window

Proportion of data to use in each rolling window for computing the metrics. Should be in [0, 1].

Value

A dataframe with a column for each metric and a column 'horizon'.

Details

A subset of these can be specified by passing a vector of names as the `metrics` argument.
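
For example, to compute only MAE and coverage (a sketch; 'df.cv' stands in for a dataframe returned by cross_validation):

# Only these two metrics will appear as columns in the result.
performance_metrics(df.cv, metrics = c('mae', 'coverage'))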

Metrics are calculated over a rolling window of cross-validation predictions, after sorting by horizon. The size of that window (number of simulated forecast points) is determined by the rolling_window argument, which specifies the proportion of simulated forecast points to include in each window. rolling_window=0 computes the metric separately for each simulated forecast point (i.e., 'mse' will actually be squared error with no mean). The default of rolling_window=0.1 will use 10% of the rows in df in each window. rolling_window=1 computes the metric across all simulated forecast points. The results are set to the right edge of the window.
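
The two extremes can be illustrated as follows (again assuming a cross-validation dataframe 'df.cv'):

# rolling_window = 0: one row per simulated forecast point, so 'mse'
# is simply the squared error of each point.
per_point <- performance_metrics(df.cv, metrics = c('mse'), rolling_window = 0)

# rolling_window = 1: a single aggregate value computed across all
# simulated forecast points.
overall <- performance_metrics(df.cv, metrics = c('mse'), rolling_window = 1)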

The output is a dataframe containing the column 'horizon' along with a column for each of the metrics computed.
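
A minimal end-to-end sketch (the history dataframe 'df' with columns 'ds' and 'y', and the cutoff settings below, are illustrative assumptions):

library(prophet)

m <- prophet(df)

# Simulate historical forecasts: 365 days of initial history,
# a cutoff every 90 days, and a 30-day forecast horizon.
df.cv <- cross_validation(m, horizon = 30, units = 'days',
                          initial = 365, period = 90)

# Default metrics over a 10% rolling window, keyed by horizon.
df.p <- performance_metrics(df.cv)
head(df.p)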