
depower (version 2026.1.30)

add_power_ci: Add confidence intervals for power estimates

Description

Calculates and adds confidence intervals for power estimates to objects returned by power(). The confidence interval quantifies uncertainty about the true power parameter.

Usage

add_power_ci(x, ci_level = 0.95, method = c("wilson", "exact"))

Value

The input data frame with additional columns:

Name            Description
power_ci_lower  Lower bound of the confidence interval.
power_ci_upper  Upper bound of the confidence interval.

and an added attribute "ci_info" containing the method name, method description, and confidence level.

Arguments

x

(data.frame)
A data frame returned by power(), containing columns power and nsims.

ci_level

(Scalar numeric: 0.95; (0,1))
The confidence interval level.

method

(Scalar character: "wilson"; c("wilson", "exact"))
Method for computing confidence intervals. One of "wilson" (default) or "exact".

Details

Power estimation via simulation is a binomial proportion problem. The confidence interval answers: "What is the plausible range of true power values given my simulation results?"

Let \(\pi\) denote the true power value, \(\hat{\pi} = x/n\) denote the observed power value, \(n\) denote the number of simulations, and \(x = \text{round}(\hat{\pi} \cdot n)\) denote the number of rejections. Two methods are available.

Wilson Score Interval

The Wilson score interval is derived from inverting the score test. Starting with the inequality

$$ \left| \frac{\hat{\pi}-\pi}{\sqrt{\pi(1-\pi)/n}} \right| \le z_{1-\alpha/2}, $$

and solving the resulting quadratic for \(\pi\) yields

$$ \frac{\hat{\pi}+\frac{z^2}{2n} \pm z \sqrt{\frac{\hat{\pi}(1-\hat{\pi})}{n}+\frac{z^2}{4n^2}}}{1+\frac{z^2}{n}}, $$

with \(z = z_{1-\alpha/2}\) and \(\hat{\pi} = x/n\).
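
The closed form above can be evaluated directly in base R. A minimal sketch, assuming 160 rejections in 200 simulations (illustrative counts, not depower output):

```r
x <- 160   # number of rejections (illustrative)
n <- 200   # number of simulations (illustrative)
alpha <- 0.05

pi_hat <- x / n
z <- qnorm(1 - alpha / 2)

# Center and half-width of the Wilson score interval, as derived above
center <- (pi_hat + z^2 / (2 * n)) / (1 + z^2 / n)
half   <- (z / (1 + z^2 / n)) * sqrt(pi_hat * (1 - pi_hat) / n + z^2 / (4 * n^2))

c(lower = center - half, upper = center + half)

# Base R's score-test inversion (no continuity correction) agrees:
prop.test(x, n, correct = FALSE)$conf.int
```

Note that `prop.test(..., correct = FALSE)` inverts the same score test, so it reproduces the closed form exactly.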

Clopper-Pearson Interval

The Clopper-Pearson exact interval inverts the binomial test via Beta quantiles. The bounds \((\pi_L, \pi_U)\) satisfy:

$$P(X \geq x \mid \pi = \pi_L) = \alpha/2$$ $$P(X \leq x \mid \pi = \pi_U) = \alpha/2$$

With \(x\) successes in \(n\) trials,

$$\pi_L = B^{-1}\left(\frac{\alpha}{2}; x, n-x+1\right)$$ $$\pi_U = B^{-1}\left(1-\frac{\alpha}{2}; x+1, n-x\right)$$

where \(B^{-1}(q; a, b)\) is the \(q\)-th quantile of \(\text{Beta}(a, b)\).

This method guarantees at least nominal coverage but is conservative (intervals are wider than necessary).
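
The Beta-quantile form maps directly onto qbeta(). A minimal sketch with the same illustrative counts (160 rejections in 200 simulations):

```r
x <- 160   # number of rejections (illustrative)
n <- 200   # number of simulations (illustrative)
alpha <- 0.05

# Clopper-Pearson bounds via Beta quantiles (conventionally the lower
# bound is 0 when x = 0 and the upper bound is 1 when x = n)
lower <- qbeta(alpha / 2, x, n - x + 1)
upper <- qbeta(1 - alpha / 2, x + 1, n - x)

c(lower = lower, upper = upper)

# binom.test() computes the same exact interval:
binom.test(x, n)$conf.int
```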

Approximate parametric tests

When power is computed using approximate parametric tests (see simulated()), the power estimate and its confidence/prediction intervals apply to the Monte Carlo test power \(\mu_K = P(\hat{p} \leq \alpha)\) rather than to the exact test power \(\pi = P(p \leq \alpha)\). The two quantities converge as \(K\), the number of datasets simulated under the null hypothesis, increases. Because the minimum observable p-value is \(1/(K+1)\), \(K > 1/\alpha - 1\) is required to observe any rejections at all. For practical accuracy, we recommend \(K \geq 5000\), with \(K\) much larger than \(1/\alpha - 1\), for most scenarios. For example, if \(\alpha = 0.05\), use simulated(nsims = 5000).
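
The role of \(K\) can be checked numerically. A minimal sketch (min_p is a hypothetical helper for illustration, not part of depower):

```r
alpha <- 0.05

# Smallest attainable Monte Carlo p-value with K null-simulated datasets
min_p <- function(K) 1 / (K + 1)

# Bare minimum K needed for any rejection at level alpha
k_min <- 1 / alpha - 1   # 19 when alpha = 0.05

min_p(19)     # equals alpha: rejections barely possible, estimates unstable
min_p(5000)   # well below alpha, as recommended
```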

References

Newcombe, R. G. (1998). Two-sided confidence intervals for the single proportion: comparison of seven methods. Statistics in Medicine, 17(8), 857-872.

Wilson, E. B. (1927). Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22(158), 209-212.

Clopper, C. J., & Pearson, E. S. (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4), 404-413.

See Also

power(), eval_power_ci(), add_power_pi()

Examples

#----------------------------------------------------------------------------
# add_power_ci() examples
#----------------------------------------------------------------------------
library(depower)

set.seed(1234)
x <- sim_nb(
  n1 = 10,
  mean1 = 10,
  ratio = c(1.4, 1.6),
  dispersion1 = 2,
  nsims = 200
) |>
  power(wald_test_nb())

# Compare methods
add_power_ci(x, method = "wilson")
add_power_ci(x, method = "exact")

# 99% confidence interval
add_power_ci(x, ci_level = 0.99)

# Plot with shaded region for confidence interval of the power estimate.
add_power_ci(x) |>
  plot()
