grimmer_map() is the mapping function for GRIMMER testing. Call it to GRIMMER-test any number of combinations of mean, standard deviation, sample size, and number of items. For summary statistics, call audit() on the results. Visualize results using grim_plot(), just as with GRIM results.
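A minimal workflow sketch, assuming that these functions and the pigs5 example data come from the scrutiny package (suggested by the scr_ class prefixes below):

library(scrutiny)

# Map GRIMMER tests over all value sets in `pigs5`:
results <- grimmer_map(pigs5)

# Summarize the results:
audit(results)

# Visualize them, as with GRIM results:
grim_plot(results)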
grimmer_map(
data,
items = 1,
merge_items = TRUE,
x = NULL,
sd = NULL,
n = NULL,
show_reason = TRUE,
rounding = "up_or_down",
threshold = 5,
symmetric = FALSE,
tolerance = .Machine$double.eps^0.5
)
A tibble with these columns:
x, sd, n: the inputs.
consistency: GRIMMER consistency of x, sd, n, and items.
reason: if consistent, "Passed all". If inconsistent, it says which test failed (see below).
<extra>: any columns from data other than x, sd, n, and items.
The reason column refers to GRIM and the three GRIMMER tests (Allard 2018). Briefly, these are:
1. The reconstructed sum of squared observations must be a whole number (a simplified sketch of this test follows the list).
2. The reconstructed SD must match the reported one.
3. The parity of the reconstructed sum of squared observations must match the parity of the reconstructed sum of integers of which the reported means are fractions; i.e., either both are even or both are odd.
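For intuition, here is a deliberately simplified sketch of the first test's arithmetic. The actual test also works with the rounding bounds of the reported values and with the number of scale items, so treat this as an illustration rather than the package's implementation; the values are hypothetical.

# Hypothetical reported summary statistics:
x  <- 5.21   # mean
sd <- 2.36   # standard deviation
n  <- 40     # sample size

# Reconstruct the sum of squared observations from the sample variance formula:
# sd^2 = (SS - n * x^2) / (n - 1)  =>  SS = (n - 1) * sd^2 + n * x^2
ss <- (n - 1) * sd^2 + n * x^2

# Test 1: the reconstructed sum of squares must be a whole number
# (up to a small numeric tolerance):
abs(ss - round(ss)) < 1e-8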
The tibble has the scr_grimmer_map class, which is recognized by the audit() generic. It also has the scr_grim_map class, so it can be visualized by grim_plot().
data: Data frame with columns x, sd, n, and optionally items (see documentation for grim()). Any other columns in data will be returned alongside GRIMMER test results.
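A minimal sketch of such an input data frame. The mean and SD columns are given as strings here so that trailing zeros are preserved, following the convention described for grim(); all values are hypothetical.

# Hypothetical summary data; `x` (mean) and `sd` as strings, `n` as numbers:
df <- tibble::tibble(
  x  = c("7.22", "4.74", "5.23"),
  sd = c("5.30", "6.55", "2.55"),
  n  = c(32, 40, 28)
)

grimmer_map(df)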
items: Integer. If there is no items column in data, this specifies the number of items composing the x and sd values. Default is 1, the most common case.
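For instance, if every reported mean and SD in the hypothetical df above summarizes a 3-item scale:

# All value sets are based on 3-item scales:
grimmer_map(df, items = 3)

# Alternatively, a per-row `items` column in `df` could be used instead.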
merge_items: Logical. If TRUE (the default), there will be no items column in the output. Instead, values from an items column or argument will be multiplied with values in the n column. This only affects presentation, not test results.
x, sd, n: Optionally, specify these arguments as column names in data.
show_reason: Logical (length 1). Should there be a reason column that shows the reasons for inconsistencies and "Passed all" for consistent values? Default is TRUE. See below for reference.
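For example, to drop the reason column:

grimmer_map(pigs5, show_reason = FALSE)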
rounding, threshold, symmetric, tolerance: Further parameters of GRIMMER testing; see documentation for grimmer().
There is an S3 method for audit(), so you can call audit() following grimmer_map() to get a summary of grimmer_map()'s results. It is a tibble with a single row and these columns:
incons_cases: number of GRIMMER-inconsistent value sets.
all_cases: total number of value sets.
incons_rate: proportion of GRIMMER-inconsistent value sets.
fail_grim: number of value sets that fail the GRIM test.
fail_test1: number of value sets that fail the first GRIMMER test (see below).
fail_test2: number of value sets that fail the second GRIMMER test.
fail_test3: number of value sets that fail the third GRIMMER test.
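Because audit() returns a single-row tibble, individual summaries can be extracted directly; for example:

summary_row <- audit(grimmer_map(pigs5))

# Proportion of GRIMMER-inconsistent value sets:
summary_row$incons_rate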
The reason column refers to the three GRIMMER tests (see Allard 2018). These are:
1. The reconstructed sum of squared observations must be a whole number.
2. The reconstructed SD must match the reported one.
3. The parity of the reconstructed sum of squared observations must match the parity of the reconstructed sum of integers of which the reported means are fractions; i.e., either both are even or both are odd.
Allard, A. (2018). Analytic-GRIMMER: a new way of testing the possibility of standard deviations. https://aurelienallard.netlify.app/post/anaytic-grimmer-possibility-standard-deviations/
Anaya, J. (2016). The GRIMMER test: A method for testing the validity of reported measures of variability. PeerJ Preprints. https://peerj.com/preprints/2400v1/
# Use `grimmer_map()` on data like these:
pigs5
# The `consistency` column shows whether
# the values to its left are GRIMMER-consistent.
# If they aren't, the `reason` column says why:
pigs5 %>%
grimmer_map()
# Get summaries with `audit()`:
pigs5 %>%
grimmer_map() %>%
audit()