depower (version 2026.1.30)

eval_power_pi: Evaluate Bayesian posterior predictive intervals for power estimates

Description

Calculates the Bayesian posterior predictive interval for a power estimate from a simulation study. The posterior predictive interval quantifies the expected range of power estimates from a future simulation study.

If the number of simulations used to calculate a test's power is too small, the power estimate will have high uncertainty (wide confidence/prediction intervals). If it is too large, computational time may be prohibitive. This function lets you determine the number of simulated datasets needed to reach your desired precision for power before spending computational time on simulations.

Usage

eval_power_pi(
  power,
  nsims,
  future_nsims = NULL,
  pi_level = 0.95,
  prior = c(1, 1)
)

Value

A list with elements:

mean: Predictive mean of the future power estimate.
lower: Lower bound of the posterior predictive interval.
upper: Upper bound of the posterior predictive interval.

Arguments

power

(numeric: (0, 1))
Hypothetical power value(s).

nsims

(integer: [2, Inf))
Number of simulations.

future_nsims

(integer or NULL: NULL; [2, Inf))
Number of simulations in the future study. If NULL (default), uses the same number as nsims.

pi_level

(Scalar numeric: 0.95; (0, 1))
The posterior predictive interval level.

prior

(numeric vector of length 2: c(1, 1); each (0, Inf))
Parameters \((\alpha, \beta)\) for the Beta prior on true power. Default c(1, 1) is the uniform prior. Use c(0.5, 0.5) for the Jeffreys prior.

Details

Power estimation via simulation is a binomial proportion problem. The posterior predictive interval answers: "If I run a new simulation study with \(m\) simulations, what range of power estimates might I observe?"

Let \(\pi\) denote the hypothetical true power value, \(\hat{\pi} = x/n\) denote the hypothetical observed power value, \(n\) denote the number of simulations, and \(x = \text{round}(\hat{\pi} \cdot n)\) denote the number of rejections. With a \(\text{Beta}(\alpha, \beta)\) prior on the true power \(\pi\), the posterior after observing \(x\) successes in \(n\) trials is:

$$ \pi \mid X = x \sim \text{Beta}(\alpha + x, \beta + n - x). $$

The posterior predictive distribution for \(Y\), the number of successes in a future study with \(m\) trials, is Beta-Binomial:

$$ Y \mid X = x \sim \text{BetaBinomial}(m, \alpha + x, \beta + n - x). $$

The posterior predictive interval is constructed from quantiles of this distribution, expressed as proportions \(Y/m\).

The posterior predictive mean and variance of \(\hat{\pi}_{\text{new}} = Y/m\) are: $$ \begin{aligned} E[\hat{\pi}_{\text{new}} \mid X = x] &= \frac{\alpha + x}{\alpha + \beta + n} \\ \text{Var}[\hat{\pi}_{\text{new}} \mid X = x] &= \frac {(\alpha + x)(\beta + n - x)(\alpha + \beta + n + m)} {m (\alpha + \beta + n)^{2} (\alpha + \beta + n + 1)}. \end{aligned} $$
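The construction above can be sketched directly in base R. This is an illustrative reimplementation under the same Beta-Binomial math, not the internals of eval_power_pi(); the predictive quantiles are found by inverting the cumulative Beta-Binomial pmf.

```r
# Sketch: posterior predictive interval for a future power estimate,
# computed from first principles (uniform Beta(1, 1) prior).
power <- 0.80; n <- 1000; m <- 1000
prior <- c(1, 1)
x <- round(power * n)            # observed number of rejections
a <- prior[1] + x                # posterior Beta parameters
b <- prior[2] + n - x

# Beta-Binomial pmf for y successes in m future trials (log scale
# for numerical stability), then its cdf
y <- 0:m
log_pmf <- lchoose(m, y) + lbeta(a + y, b + m - y) - lbeta(a, b)
cdf <- cumsum(exp(log_pmf))

# 95% equal-tailed predictive interval, expressed as proportions
lower <- y[which(cdf >= 0.025)[1]] / m
upper <- y[which(cdf >= 0.975)[1]] / m
c(mean = a / (a + b), lower = lower, upper = upper)
```

With these inputs the predictive mean is (1 + 800) / (1 + 1 + 1000), about 0.799, and the interval is roughly 0.80 plus or minus 0.035.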

Argument future_nsims

The argument future_nsims allows you to estimate prediction interval bounds for a hypothetical future study with a different number of simulations. Note that a small initial nsims results in substantial uncertainty about the true power. A correspondingly large future_nsims will estimate the true power more precisely, but the large uncertainty from the original study is still carried forward. Therefore you still need an adequate number of simulations nsims in the original study, not just more in the replication future_nsims, to ensure narrow prediction intervals.
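The predictive variance formula in the Details makes this carried-forward uncertainty concrete: it splits into posterior uncertainty about the true power (controlled by nsims) plus sampling noise in the future study (controlled by future_nsims). The helper pred_sd() below is a hypothetical name used for illustration, not part of depower.

```r
# Sketch: predictive standard deviation of a future power estimate,
# from the Beta-Binomial variance formula (uniform prior by default).
pred_sd <- function(power, n, m, prior = c(1, 1)) {
  x <- round(power * n)
  a <- prior[1] + x
  b <- prior[2] + n - x
  s <- a + b
  sqrt((a * b * (s + m)) / (m * s^2 * (s + 1)))
}

# Even a huge future study cannot overcome a small original study:
pred_sd(0.80, n = 100,  m = 1e6)   # about 0.040
pred_sd(0.80, n = 5000, m = 1e6)   # about 0.006
```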

Approximate parametric tests

When power is computed using approximate parametric tests (see simulated()), the power estimate and confidence/prediction intervals apply to the Monte Carlo test power \(\mu_K = P(\hat{p} \leq \alpha)\) rather than the exact test power \(\pi = P(p \leq \alpha)\). These quantities converge as the number of datasets simulated under the null hypothesis \(K\) increases. The minimum observable p-value is \(1/(K+1)\), so \(K > 1/\alpha - 1\) is required to observe any rejections. For practical accuracy, we recommend choosing \(K \geq \max(5000, 1/\alpha - 1)\) for most scenarios. For example, if \(\alpha = 0.05\), use simulated(nsims = 5000).
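The constraint on \(K\) can be sketched as a one-liner. The helper min_nsims() is a hypothetical name for illustration; it is not exported by depower.

```r
# Sketch: smallest K such that the minimum observable Monte Carlo
# p-value, 1 / (K + 1), can fall at or below alpha.
min_nsims <- function(alpha) ceiling(1 / alpha - 1)

min_nsims(0.05)    # 19
min_nsims(0.001)   # 999
```

In practice this lower bound is far too small for stable estimates, which is why the recommendation above also imposes the floor of 5000.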

References

Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC.

See Also

add_power_pi(), eval_power_ci()

Examples

#----------------------------------------------------------------------------
# eval_power_pi() examples
#----------------------------------------------------------------------------
library(depower)

# Expected PI for 80% power with 1000 simulations
eval_power_pi(power = 0.80, nsims = 1000)

# Compare precision across different simulation counts
eval_power_pi(power = 0.80, nsims = c(100, 500, 1000, 5000))

# Predict for a larger future study (narrower interval)
eval_power_pi(power = 0.80, nsims = 1000, future_nsims = 5000)

# Predict for a smaller future study (wider interval)
eval_power_pi(power = 0.80, nsims = 1000, future_nsims = 100)

# Vectorized over power values
eval_power_pi(power = c(0.70, 0.80, 0.90), nsims = 1000)

# Use Jeffreys prior instead of uniform
eval_power_pi(power = 0.80, nsims = 1000, prior = c(0.5, 0.5))

# 99% predictive interval
eval_power_pi(power = 0.80, nsims = 1000, pi_level = 0.99)
