incidence()
calculates the incidence of events across specified
time periods and groupings.
incidence(
x,
date_index,
groups = NULL,
counts = NULL,
count_names_to = "count_variable",
count_values_to = "count",
date_names_to = "date_index",
rm_na_dates = TRUE,
interval = NULL,
offset = NULL,
...
)
An object of class <incidence2, data.frame>.
A data frame object representing a linelist or pre-aggregated dataset.
[character]
The time index(es) of the given data.
This should be the name(s) corresponding to the desired date column(s) in x.
A named vector can be used for convenient relabelling of the resultant output.
Multiple indices only make sense when x
is a linelist.
[character]
An optional vector giving the names of the variables by which the incidence counts should be grouped.
[character]
The count variables of the given data. If NULL (default) the data is taken to be a linelist of individual observations.
[character]
The name of the column to create which will store the names of the
counts
columns, provided that
counts
is not NULL.
[character]
The name of the column to store the resultant count values in.
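As a sketch of the pre-aggregated pathway described above (assuming the incidence2 package is installed; the data below are made up for illustration):

```r
library(incidence2)

# Pre-aggregated data: two count columns rather than one row per case.
dat <- data.frame(
  dates  = as.Date("2024-01-01") + 0:2,
  cases  = c(3L, 5L, 2L),
  deaths = c(0L, 1L, 0L)
)

# The count columns are pivoted to long form: their names go in the
# `count_variable` column (per `count_names_to`) and their values in
# the `count` column (per `count_values_to`).
incidence(dat, date_index = "dates", counts = c("cases", "deaths"))
```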
[character]
The name of the column to store the date variables in.
[logical]
Should NA
dates be removed prior to aggregation?
An optional scalar integer or string indicating the (fixed) size of the desired time interval to use for computing the incidence.
Defaults to NULL in which case the date_index columns are left unchanged.
Numeric values are coerced to integer and treated as a number of days to group.
Text strings can be one of:
* day or daily
* week(s) or weekly
* epiweek(s)
* isoweek(s)
* month(s) or monthly
* yearmonth(s)
* quarter(s) or quarterly
* yearquarter(s)
* year(s) or yearly
More details can be found in the "Interval specification" section.
Only applicable when interval
is not NULL.
An optional scalar integer or date indicating the value you wish to start counting periods from relative to the Unix Epoch:
Default value of NULL corresponds to 0L.
For other integer values this is stored scaled by n
(offset <- as.integer(offset) %% n
).
For date values this is first converted to an integer offset
(offset <- floor(as.numeric(offset))
) and then scaled via n
as above.
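The scaling described above can be sketched in base R (a hypothetical helper mirroring the documented arithmetic, not the package's internal code):

```r
# Hypothetical helper reproducing the documented offset arithmetic.
scale_offset <- function(offset, n) {
  if (inherits(offset, "Date")) {
    # Dates are first floored to whole days since the Unix epoch.
    offset <- floor(as.numeric(offset))
  }
  # Integer offsets are stored reduced modulo the period length n.
  as.integer(offset) %% as.integer(n)
}

scale_offset(10L, 7L)                    # 10 %% 7
scale_offset(as.Date("1970-01-09"), 7L)  # 8 days since epoch, then %% 7
```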
Not currently used.
Where interval
is specified, incidence()
predominantly uses the
grates
package to generate
appropriate date groupings. The grouping used depends on the value of
interval
. This can be specified as either an integer value or a string
corresponding to one of the classes:
* integer values: <grates_period> objects, grouped by the specified number of days.
* day, daily: <Date> objects.
* week(s), weekly, isoweek: <grates_isoweek> objects.
* epiweek(s): <grates_epiweek> objects.
* month(s), monthly, yearmonth: <grates_yearmonth> objects.
* quarter(s), quarterly, yearquarter: <grates_yearquarter> objects.
* year(s) and yearly: <grates_year> objects.
For "day" or "daily" interval, we provide a thin wrapper around as.Date()
that ensures the underlying data are whole numbers and that time zones are
respected. Note that additional arguments are not forwarded to as.Date(),
so for greater flexibility users are advised to modify their input prior to
calling incidence()
.
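For instance, a weekly grouping could be sketched as follows (assuming incidence2 is installed; the data are synthetic):

```r
library(incidence2)

# 14 consecutive daily onsets spanning two ISO weeks.
dat <- data.frame(onset = as.Date("2024-01-01") + 0:13)

weekly <- incidence(dat, date_index = "onset", interval = "isoweek")

# The date column now holds <grates_isoweek> values and the counts
# are summed within each week.
class(weekly$date_index)
```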
<incidence2>
objects are a subclass of data frame with some
additional invariants. That is, an <incidence2>
object must:
* have one column representing the date index (this does not need to be a
date
object but must have an inherent ordering over time);
* have one column representing the count variable (i.e. what is being counted) and one variable representing the associated count;
* have zero or more columns representing groups;
* not have duplicated rows with regard to the date and group variables.
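As a sketch of the uniqueness invariant (assuming incidence2 is installed; the data are synthetic, and the default output column name date_index comes from the date_names_to argument above):

```r
library(incidence2)

# Synthetic linelist: one (date, group) pair appears twice.
dat <- data.frame(
  onset = as.Date("2024-01-01") + c(0, 0, 1),
  sex   = c("f", "f", "m")
)

x <- incidence(dat, date_index = "onset", groups = "sex")

# Repeated (onset, sex) pairs are aggregated into a single row, so no
# duplicates remain with respect to the date and group variables.
anyDuplicated(x[c("date_index", "sex")]) == 0L
```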
See
browseVignettes("grates")
for more details on the grates object classes.
data.table::setDTthreads(2)
if (requireNamespace("outbreaks", quietly = TRUE)) {
withAutoprint({
data(ebola_sim_clean, package = "outbreaks")
dat <- ebola_sim_clean$linelist
incidence(dat, "date_of_onset")
incidence(dat, "date_of_onset", groups = c("gender", "hospital"))
})
}