
These functions allow calculating different types of errors (a sketch of the textbook definitions for a few of them follows the list):
MAE - Mean Absolute Error,
MSE - Mean Squared Error,
MRE - Mean Root Error,
MPE - Mean Percentage Error,
MAPE - Mean Absolute Percentage Error,
SMAPE - Symmetric Mean Absolute Percentage Error,
MASE - Mean Absolute Scaled Error,
RelMAE - Relative Mean Absolute Error,
RelMSE - Relative Mean Squared Error,
RelAME - Relative Absolute Mean Error,
sMSE - Scaled Mean Squared Error,
sPIS - Scaled Periods-In-Stock,
sCE - Scaled Cumulative Error.
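As a quick reference, here is a minimal sketch of the textbook definitions behind some of these measures; the helper names are hypothetical, and the package's exact implementation (percentage scaling, handling of zero actuals) may differ.
maeManual <- function(actual, forecast) mean(abs(actual - forecast))                    # MAE
mseManual <- function(actual, forecast) mean((actual - forecast)^2)                     # MSE
mapeManual <- function(actual, forecast) mean(abs((actual - forecast) / actual))        # MAPE
smapeManual <- function(actual, forecast) mean(2 * abs(actual - forecast) /
                                                 (abs(actual) + abs(forecast)))         # SMAPE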
MAE(actual, forecast, digits = 3)
MSE(actual, forecast, digits = 3)
MRE(actual, forecast, digits = 3)
MPE(actual, forecast, digits = 3)
MAPE(actual, forecast, digits = 3)
SMAPE(actual, forecast, digits = 3)
MASE(actual, forecast, scale, digits = 3)
RelMAE(actual, forecast, benchmark, digits = 3)
RelMSE(actual, forecast, benchmark, digits = 3)
RelAME(actual, forecast, benchmark, digits = 3)
sMSE(actual, forecast, scale, digits = 3)
sPIS(actual, forecast, scale, digits = 3)
sCE(actual, forecast, scale, digits = 3)
actual: The vector or matrix of actual values.
forecast: The vector or matrix of forecast values.
digits: Number of digits of the output.
scale: The value that should be used in the denominator of MASE. It can be anything, but the advised values are the mean absolute deviation of the in-sample one-step-ahead Naive error or the mean absolute value of the in-sample actuals (see the sketch after this list).
benchmark: The vector or matrix of forecasts from the benchmark model.
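A minimal sketch of how the scale and benchmark arguments are commonly constructed; the variable names below are illustrative, not part of the package.
y <- rnorm(100, 10, 2)                      # some series with a 10-step holdout
insample <- y[1:90]
scaleNaive <- mean(abs(diff(insample)))     # mean absolute in-sample one-step-ahead Naive error
scaleActuals <- mean(abs(insample))         # mean absolute value of the in-sample actuals
benchmarkNaive <- rep(insample[90], 10)     # a simple Naive benchmark forecast for the holdout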
All the functions return a scalar value.
In the case of sMSE, scale needs to be a squared value. A typical choice is the squared mean value of the in-sample actuals.
SMAPE is biased and favours overforecasting, so be careful when using it.
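A quick arithmetic illustration of that asymmetry, using the textbook single-point SMAPE (an assumed form, not necessarily the package's exact scaling): over- and under-forecasting the same actual by the same amount do not give the same value.
smapePoint <- function(actual, forecast) 2 * abs(actual - forecast) / (abs(actual) + abs(forecast))
smapePoint(100, 110)   # overforecast by 10: about 0.095
smapePoint(100, 90)    # underforecast by 10: about 0.105, penalised more heavily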
Svetunkov, I. (2017). Naughty APEs and the quest for the holy grail. https://forecasting.svetunkov.ru/en/2017/07/29/naughty-apes-and-the-quest-for-the-holy-grail/
Fildes, R. (1992). The evaluation of extrapolative forecasting methods. International Journal of Forecasting, 8, pp. 81-98.
Hyndman, R.J., Koehler, A.B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22, pp. 679-688.
Petropoulos, F., Kourentzes, N. (2015). Forecast combinations for intermittent demand. Journal of the Operational Research Society, 66, pp. 914-924.
Wallstrom, P., Segerstedt, A. (2010). Evaluation of forecasting error measurements and techniques for intermittent demand. International Journal of Production Economics, 128, pp. 625-636.
Davydenko, A., Fildes, R. (2013). Measuring forecasting accuracy: The case of judgmental adjustments to SKU-level demand forecasts. International Journal of Forecasting, 29(3), pp. 510-522. https://doi.org/10.1016/j.ijforecast.2012.09.002
# NOT RUN {
# The example assumes the smooth package, which provides the es() function
library(smooth)

y <- rnorm(100, 10, 2)
# Fit an ETS(A,N,N) model to the first 90 observations and forecast 10 steps ahead
esmodel <- es(y[1:90], model = "ANN", h = 10)

# Absolute and squared error measures
MAE(y[91:100], esmodel$forecast, digits = 5)
MSE(y[91:100], esmodel$forecast, digits = 5)

# Percentage-based measures
MPE(y[91:100], esmodel$forecast, digits = 5)
MAPE(y[91:100], esmodel$forecast, digits = 5)

# MASE with the two advised scales
MASE(y[91:100], esmodel$forecast, mean(abs(y[1:90])), digits = 5)
MASE(y[91:100], esmodel$forecast, mean(abs(diff(y[1:90]))), digits = 5)

# Relative measure against a benchmark model
esmodel2 <- es(y[1:90], model = "AAN", h = 10)
RelMAE(y[91:100], esmodel2$forecast, esmodel$forecast, digits = 5)
# MASE with a squared scale (compare with sMSE below)
MASE(y[91:100], esmodel$forecast, mean(abs(y[1:90]))^2, digits = 5)

# Scaled measures; note that sMSE needs a squared scale (see the note above)
sMSE(y[91:100], esmodel$forecast, mean(abs(y[1:90]))^2, digits = 5)
sPIS(y[91:100], esmodel$forecast, mean(abs(y[1:90])), digits = 5)
sCE(y[91:100], esmodel$forecast, mean(abs(y[1:90])), digits = 5)
# }