depower (version 2026.1.30)

eval_power_ci: Evaluate confidence intervals for power estimates

Description

Calculates the confidence interval for a power estimate from a simulation study. The confidence interval quantifies uncertainty about the true power parameter.

When the number of simulations used to calculate a test's power is too small, the power estimate will have high uncertainty (wide confidence/prediction intervals). When the number of simulations used to calculate a test's power is too large, computational time may be prohibitive. This function allows you to determine the appropriate number of simulated datasets to reach your desired precision for power before spending computational time on simulations.

Usage

eval_power_ci(power, nsims, ci_level = 0.95, method = c("wilson", "exact"))

Value

A list with elements:

Name     Description
lower    Lower bound of the confidence interval.
upper    Upper bound of the confidence interval.

Arguments

power

(numeric: (0, 1))
Hypothetical observed power value(s).

nsims

(integer: [2, Inf))
Number of simulations.

ci_level

(Scalar numeric: 0.95; (0, 1))
The confidence level.

method

(Scalar character: "wilson"; c("wilson", "exact"))
Method for computing confidence intervals. One of "wilson" (default) or "exact". See 'Details' for more information.

Details

Power estimation via simulation is a binomial proportion problem. The confidence interval answers: "What is the plausible range of true power values given my simulation results?"

Let \(\pi\) denote the hypothetical true power value, \(\hat{\pi} = x/n\) denote the hypothetical observed power value, \(n\) denote the number of simulations, and \(x = \text{round}(\hat{\pi} \cdot n)\) denote the number of rejections. Two methods are available.

Wilson Score Interval

The Wilson score interval is derived from inverting the score test. Starting with the inequality

$$ \left| \frac{\hat{\pi}-\pi}{\sqrt{\pi(1-\pi)/n}} \right| \le z_{1-\alpha/2}, $$

and solving the resulting quadratic for \(\pi\) yields

$$ \frac{\hat{\pi}+\frac{z^2}{2n} \pm z \sqrt{\frac{\hat{\pi}(1-\hat{\pi})}{n}+\frac{z^2}{4n^2}}}{1+\frac{z^2}{n}}, $$

with \(z = z_{1-\alpha/2}\) and \(\hat{\pi} = x/n\).
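The quadratic solution above can be sketched directly in base R. (`wilson_ci` is a hypothetical helper shown for illustration, not part of depower; it assumes \(x = \text{round}(\hat{\pi} \cdot n)\) as in the definitions above.)

```r
# Minimal sketch of the Wilson score interval for a power estimate.
# Assumes x = round(power * n) rejections, per the definitions above.
wilson_ci <- function(power, n, ci_level = 0.95) {
  x <- round(power * n)
  pi_hat <- x / n
  z <- qnorm(1 - (1 - ci_level) / 2)
  denom <- 1 + z^2 / n
  center <- (pi_hat + z^2 / (2 * n)) / denom
  half <- z * sqrt(pi_hat * (1 - pi_hat) / n + z^2 / (4 * n^2)) / denom
  c(lower = center - half, upper = center + half)
}

wilson_ci(0.80, 1000)
# approximately (0.774, 0.824)
```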

Clopper-Pearson Interval

The Clopper-Pearson exact interval inverts the binomial test via Beta quantiles. The bounds \((\pi_L, \pi_U)\) satisfy:

$$P(X \geq x \mid \pi = \pi_L) = \alpha/2$$ $$P(X \leq x \mid \pi = \pi_U) = \alpha/2$$

With \(x\) successes in \(n\) trials,

$$\pi_L = B^{-1}\left(\frac{\alpha}{2}; x, n-x+1\right)$$ $$\pi_U = B^{-1}\left(1-\frac{\alpha}{2}; x+1, n-x\right)$$

where \(B^{-1}(q; a, b)\) is the \(q\)-th quantile of \(\text{Beta}(a, b)\).

This method guarantees at least nominal coverage but is conservative (intervals are wider than necessary).
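The Beta-quantile form maps directly onto qbeta(). (`clopper_pearson_ci` is a hypothetical helper shown for illustration, not part of depower; stats::binom.test() constructs its conf.int from the same quantiles.)

```r
# Minimal sketch of the Clopper-Pearson exact interval via Beta quantiles.
# The boundary cases x = 0 and x = n are pinned to 0 and 1 respectively.
clopper_pearson_ci <- function(power, n, ci_level = 0.95) {
  x <- round(power * n)
  alpha <- 1 - ci_level
  lower <- if (x == 0) 0 else qbeta(alpha / 2, x, n - x + 1)
  upper <- if (x == n) 1 else qbeta(1 - alpha / 2, x + 1, n - x)
  c(lower = lower, upper = upper)
}

clopper_pearson_ci(0.80, 1000)
```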

Approximate parametric tests

When power is computed using approximate parametric tests (see simulated()), the power estimate and confidence/prediction intervals apply to the Monte Carlo test power \(\mu_K = P(\hat{p} \leq \alpha)\) rather than the exact test power \(\pi = P(p \leq \alpha)\). These quantities converge as the number of datasets simulated under the null hypothesis, \(K\), increases. The minimum observable p-value is \(1/(K+1)\), so \(K > 1/\alpha - 1\) is required to observe any rejections. For practical accuracy, we recommend choosing \(K \geq 5000\) with \(K \gg 1/\alpha - 1\) for most scenarios. For example, if \(\alpha = 0.05\), use simulated(nsims = 5000).
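The \(K\) threshold above follows directly from the minimum observable p-value; a quick numerical check (helper names are illustrative only):

```r
# Smallest Monte Carlo p-value observable with K null-hypothesis datasets.
min_observable_p <- function(K) 1 / (K + 1)

# Smallest K permitting any rejection at significance level alpha.
k_required <- function(alpha) ceiling(1 / alpha - 1)

min_observable_p(19)  # 0.05: with K = 19 the smallest p-value equals alpha
k_required(0.05)      # 19
k_required(0.01)      # 99
```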

References

Newcombe, R. G. (1998). Two-sided confidence intervals for the single proportion: comparison of seven methods. Statistics in Medicine, 17(8), 857-872.

Wilson, E. B. (1927). Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22(158), 209-212.

Clopper, C. J., & Pearson, E. S. (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4), 404-413.

See Also

add_power_ci(), eval_power_pi()

Examples

#----------------------------------------------------------------------------
# eval_power_ci() examples
#----------------------------------------------------------------------------
library(depower)

# Expected CI for 80% power with 1000 simulations
eval_power_ci(power = 0.80, nsims = 1000)

# Compare precision across different simulation counts
eval_power_ci(power = 0.80, nsims = c(100, 500, 1000, 5000))

# Compare Wilson vs exact method
eval_power_ci(power = 0.80, nsims = 1000, method = "wilson")
eval_power_ci(power = 0.80, nsims = 1000, method = "exact")

# Vectorized over power values
eval_power_ci(power = c(0.70, 0.80, 0.90), nsims = 1000)

# 99% confidence interval
eval_power_ci(power = 0.80, nsims = 1000, ci_level = 0.99)
