The default asymptotic test is performed when distribution = asymptotic().
When setting argument distribution = simulated(method = "exact"), the
exact randomization test is defined by:
For argument distribution = simulated(method = "approximate"), the
approximate randomization test is defined by:
In the power analysis setting, power(), data for groups 1 and 2 can be
simulated from their known distributions under the assumptions of the
null hypothesis. Unlike the nonparametric randomization tests above,
the tests performed in this setting are approximate parametric tests.
For example, power(wald_test_nb(distribution = simulated())) would result
in an approximate parametric Wald test defined by:
1. For each relevant design row in data:
    1. For simulated(nsims = integer()) iterations:
        1. Simulate new data for group 1 and group 2 under the null hypothesis.
        2. Calculate the Wald test statistic, \(\chi^2_{null}\).
    2. Collect all \(\chi^2_{null}\) into a vector.
    3. For each of the sim_nb(nsims = integer()) simulated datasets:
        1. Calculate the Wald test statistic, \(\chi^2_{obs}\).
        2. Calculate the p-value based on the empirical null distribution of test statistics \(\chi^2_{null}\) (the mean of the logical vector null_test_stats >= observed_test_stat).
    4. Collect all p-values into a vector.
    5. Calculate power as sum(p <= alpha) / nsims.
2. Return all results from power().
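
To make these steps concrete, the following is a minimal base-R sketch of the loop, not depower's implementation: the group sizes, negative binomial parameters, iteration counts, and the use of MASS::glm.nb to obtain the Wald statistic are all illustrative assumptions standing in for sim_nb() and wald_test_nb().

```r
# Minimal sketch of the approximate parametric Wald test power loop.
# All inputs below (sizes, means, dispersion, helper wald_chisq()) are
# illustrative assumptions, not depower's internals.
library(MASS)

set.seed(1)

# Wald chi-square statistic for the group effect in a two-group NB model.
wald_chisq <- function(counts, group) {
  fit <- glm.nb(counts ~ group)
  z <- summary(fit)$coefficients["group", "z value"]
  z^2
}

n1 <- 30; n2 <- 30          # group sizes (one "design row")
mu1 <- 10; ratio <- 1.5     # alternative hypothesis: mean ratio of 1.5
theta <- 1                  # NB dispersion (size) parameter
alpha <- 0.05
nsims_null <- 200           # plays the role of simulated(nsims = ...)
nsims_data <- 200           # plays the role of sim_nb(nsims = ...)
group <- rep(0:1, c(n1, n2))

# 1) Empirical null distribution: simulate both groups from the same mean.
chisq_null <- replicate(nsims_null, {
  counts <- rnbinom(n1 + n2, size = theta, mu = mu1)
  wald_chisq(counts, group)
})

# 2) For each dataset simulated under the alternative, compute the observed
#    statistic and its p-value against the empirical null distribution.
p <- replicate(nsims_data, {
  counts <- c(rnbinom(n1, size = theta, mu = mu1),
              rnbinom(n2, size = theta, mu = mu1 * ratio))
  chisq_obs <- wald_chisq(counts, group)
  mean(chisq_null >= chisq_obs)
})

# 3) Power is the proportion of simulated datasets rejecting at level alpha.
power_est <- sum(p <= alpha) / nsims_data
power_est
```

The key point is that every simulated dataset is compared against the same empirical null distribution of test statistics, and power is the rejection rate across those datasets.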
Randomization tests use the positive-biased p-value estimate in the style of
Davison and Hinkley (1997)
(see also Phipson and Smyth (2010)):
$$
\hat{p} = \frac{1 + \sum_{i=1}^B \mathbb{I} \{\chi^2_i \geq \chi^2_{obs}\}}{B + 1}.
$$
The number of resamples defines the minimum observable p-value
(e.g., nsims = 1000L results in a minimum p-value of 1/1001).
It is recommended to set \(\text{nsims} \gg \frac{1}{\alpha}\).
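
As a quick check of the estimator and its lower bound, the snippet below uses made-up statistics; chisq_resampled and chisq_obs are placeholders, not depower output.

```r
# Positive-biased p-value estimate for a randomization test (illustrative).
set.seed(1)
B <- 1000L
chisq_resampled <- rchisq(B, df = 1)   # placeholder resampled statistics
chisq_obs <- 6.5                       # placeholder observed statistic

# p_hat = (1 + number of resampled statistics >= observed) / (B + 1)
p_hat <- (1 + sum(chisq_resampled >= chisq_obs)) / (B + 1)
p_hat

# The smallest attainable value is 1 / (B + 1), e.g. 1/1001 when B = 1000.
1 / (B + 1)
```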