
AutoETS is a multi-armed bandit model testing framework for AR and SAR NNets, with randomized probability matching as the underlying bandit algorithm. Model evaluation blends the training error with the validation error from testing the model on out-of-sample data. The bandit algorithm compares the performance of the current build against previous builds, starting with the classic nnetar model from the forecast package. Depending on how many lags, seasonal lags, and Fourier pairs you test, the number of feature combinations approaches 10,000 distinct settings. The function tests transformations, differencing, and variations of the lags, seasonal lags, and Fourier pairs. The parameter space is broken up into buckets of increasing sophistication. The bandit algorithm samples from those buckets and, over many rounds of testing, learns which buckets to sample from more frequently based on the performance of the models each bucket produces. Performance data is collected on every model, and a final rebuild is initiated once a winner is found. The rebuild retrains the model with the settings that produced the best performance; if that model fails to build for whatever reason, the next best buildable model is rebuilt instead.
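For intuition, here is a minimal sketch of randomized probability matching (Thompson sampling) over parameter buckets. This is illustrative only, not the package's internal code: the bucket names and win/loss bookkeeping are assumptions. Each bucket keeps a Beta posterior over its chance of producing a new best model; a draw is taken from each posterior and the bucket with the largest draw is sampled next, so better-performing buckets get tried more often.

# Illustrative sketch of randomized probability matching over parameter
# buckets (assumed names; not RemixAutoML internals)
buckets <- c("simple", "moderate", "complex")
wins   <- setNames(rep(0, length(buckets)), buckets)  # builds that set a new best
losses <- setNames(rep(0, length(buckets)), buckets)  # builds that did not

pick_bucket <- function() {
  # Draw from each bucket's Beta(1 + wins, 1 + losses) posterior and
  # choose the bucket with the largest draw
  draws <- rbeta(length(buckets), 1 + wins, 1 + losses)
  buckets[which.max(draws)]
}

update_bucket <- function(bucket, new_best) {
  if (new_best) wins[bucket] <<- wins[bucket] + 1
  else losses[bucket] <<- losses[bucket] + 1
}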
AutoETS(
  data,
  FilePath = NULL,
  TargetVariableName,
  DateColumnName,
  TimeAggLevel = "week",
  EvaluationMetric = "MAE",
  NumHoldOutPeriods = 5L,
  NumFCPeriods = 5L,
  TrainWeighting = 0.5,
  MaxConsecutiveFails = 12L,
  MaxNumberModels = 100L,
  MaxRunTimeMinutes = 10L,
  NumberCores = max(1L, min(4L, parallel::detectCores() - 2L))
)
data: Source data.table
FilePath: NULL to return nothing. Provide a file path to save the model and xregs if available
TargetVariableName: Name of your time series target variable
DateColumnName: Name of your date column
TimeAggLevel: Choose from "year", "quarter", "month", "week", "day", "hour"
EvaluationMetric: Choose from "MAE", "MSE", and "MAPE"
NumHoldOutPeriods: Number of time periods to use in out-of-sample testing
NumFCPeriods: Number of periods to forecast
TrainWeighting: Model ranking is based on a weighted average of training metrics and out-of-sample metrics. Supply the weight of the training metrics, such as 0.50 for 50 percent (see the sketch following this argument list)
MaxConsecutiveFails: Number of model attempts without a new winner before terminating the procedure. When a new best model is found, MaxConsecutiveFails resets to zero
MaxNumberModels: Maximum number of models to test
MaxRunTimeMinutes: Maximum number of minutes to wait for a result
NumberCores: Number of CPU cores to use. Defaults to max(1L, min(4L, parallel::detectCores() - 2L))
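To make the ranking and stopping rules concrete, here is a short sketch of how TrainWeighting, EvaluationMetric, and MaxConsecutiveFails interact. The function names and bookkeeping are assumptions for illustration, not the package's code.

# Sketch of the blended ranking and early-stopping logic
# (assumed names; not RemixAutoML internals)
mae  <- function(actual, pred) mean(abs(actual - pred))
mse  <- function(actual, pred) mean((actual - pred)^2)
mape <- function(actual, pred) mean(abs((actual - pred) / actual))

# TrainWeighting = 0.5 weights training and holdout error equally
blended_score <- function(train_err, holdout_err, TrainWeighting = 0.5) {
  TrainWeighting * train_err + (1 - TrainWeighting) * holdout_err
}

best_score <- Inf
fails <- 0L

# Returns TRUE when MaxConsecutiveFails candidates in a row fail to
# improve on the best blended score, signaling the search to terminate
evaluate_candidate <- function(train_err, holdout_err, MaxConsecutiveFails = 12L) {
  score <- blended_score(train_err, holdout_err)
  if (score < best_score) {
    best_score <<- score
    fails <<- 0L  # a new winner resets the fail counter
  } else {
    fails <<- fails + 1L
  }
  fails >= MaxConsecutiveFails
}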
Other Automated Time Series: AutoArfima(), AutoBanditNNet(), AutoBanditSarima(), AutoTBATS(), AutoTS()
# NOT RUN {
# Create fake data
data <- RemixAutoML::FakeDataGenerator(TimeSeries = TRUE, TimeSeriesTimeAgg = "days")

# Build model
Output <- RemixAutoML::AutoETS(
  data,
  FilePath = NULL,
  TargetVariableName = "Weekly_Sales",
  DateColumnName = "Date",
  TimeAggLevel = "week",
  EvaluationMetric = "MAE",
  NumHoldOutPeriods = 5L,
  NumFCPeriods = 5L,
  TrainWeighting = 0.50,
  MaxConsecutiveFails = 12L,
  MaxNumberModels = 100L,
  MaxRunTimeMinutes = 10L,
  NumberCores = max(1L, min(4L, parallel::detectCores() - 2L)))

# Output
Output$ForecastPlot
Output$Forecast
Output$PerformanceGrid
# }