This function allows automatic scoring of forecasts using a
range of metrics. For most users it will be the workhorse for
scoring forecasts, as it wraps the lower-level scoring functions of the
package. However, those functions are also available if you wish to use them
independently.
A range of forecast formats is supported, including quantile-based,
sample-based, and binary forecasts. Prior to scoring, users may wish to run
check_forecasts() to ensure that the input data is in a supported format;
this check is also run internally by score(). Examples for each format are
also provided (see the documentation for data below or in check_forecasts()).
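As a minimal sketch of a typical workflow, assuming this documentation
belongs to the scoringutils package and that its bundled example dataset
example_quantile is available:

  library(scoringutils)  # assumed package name

  # validate the input data before scoring; score() also runs this internally
  check_forecasts(example_quantile)

  # score the forecasts and inspect the result
  scores <- score(example_quantile)
  head(scores)

The same call should work for the other formats, as score() detects the
forecast format from the columns present in the data.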
Each format has a set of required columns (see below). Additional columns may
be present to indicate a grouping of forecasts. For example, we could have
forecasts made by different models in various locations at different time
points, each for several weeks into the future. It is important that only
columns relevant for grouping forecasts are present: together, these columns
should uniquely define the unit of a single forecast, meaning that a single
forecast is identified by the combination of values in them. Adding unrelated
columns may alter results, as the sketch below illustrates.
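For illustration, here is a hedged sketch of a quantile-based input. The
grouping columns model, location, target_end_date, and horizon are
hypothetical, and the column names quantile, prediction, and true_value are
assumed to match the required columns documented below (they may differ
between package versions):

  # hypothetical quantile-based forecasts: every column other than
  # quantile, prediction, and true_value helps define the forecast unit
  forecasts <- data.frame(
    model           = "model_A",
    location        = "DE",
    target_end_date = as.Date("2021-01-02") + c(0, 0, 7, 7),
    horizon         = c(1, 1, 2, 2),
    quantile        = c(0.25, 0.75, 0.25, 0.75),
    prediction      = c(10, 20, 12, 25),
    true_value      = c(15, 15, 18, 18)
  )

  # here a single forecast is uniquely identified by
  # model + location + target_end_date + horizon
  score(forecasts)

Conversely, adding an unrelated column whose values vary within a single
forecast (for example a row id) would split that forecast into multiple
units and alter the results.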
To obtain a quick overview of the currently supported evaluation metrics,
have a look at the metrics data included in the package. The column
metrics$Name lists all available metric names that can be computed. If you
are interested in a metric that is not yet supported, please open a feature
request or consider contributing a pull request.
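For example, a quick way to list the supported metric names (again assuming
the package is attached and exports the metrics data described above):

  # overview of all available metrics shipped with the package
  metrics$Name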
For additional help and examples, check out the Getting Started Vignette.