The BayesPPD (Bayesian Power Prior Design) package provides two categories of functions: functions for Bayesian power/type I error calculation and functions for model fitting. Supported distributions include normal, binary (Bernoulli/binomial), Poisson and exponential. The power parameter \(a_0\) can be fixed or modeled as random using a normalized power prior.
Following Chen et al. (2011), for two-group models (i.e., treatment and control groups with no covariates), denote the parameter for the treatment group by \(\mu_t\)
and the parameter for the control group by \(\mu_c\). Suppose there are \(K\) historical datasets \(D_0 = (D_{01},\cdots, D_{0K})'\). We consider the following normalized power prior
for \(\mu_c\) given multiple historical datasets \(D_0\)
$$\pi(\mu_c|D_0,a_0) = \frac{1}{C(a_0)}\prod_{k=1}^K \left[L(\mu_c|D_{0k})^{a_{0k}}\right]\pi_0(\mu_c)$$
where \(a_0 = (a_{01},\cdots,a_{0K})'\), \(0\le a_{0k} \le 1\) for \(k=1,\cdots,K\), \(L(\mu_c|D_{0k})\) is the historical data likelihood,
\(\pi_0(\mu_c)\) is an initial prior, and \(C(a_0)=\int \prod_{k=1}^K [L(\mu_c|D_{0k})^{a_{0k}}]\pi_0(\mu_c)d\mu_c\). When \(a_0\) is fixed,
the normalized power prior is equivalent to the power prior
$$\pi(\mu_c|D_0,a_0) \propto \prod_{k=1}^K \left[L(\mu_c|D_{0k})^{a_{0k}}\right]\pi_0(\mu_c).$$
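As a concrete illustration, for binomial control data with a Beta initial prior, the power prior with fixed \(a_0\) stays in the Beta family: each historical dataset contributes its success and failure counts discounted by \(a_{0k}\). The following Python sketch (the function name and data are hypothetical, not part of BayesPPD's R interface) computes the resulting Beta parameters:

```python
def power_prior_beta_params(hist, a0, alpha0=1.0, beta0=1.0):
    """Beta parameters of the power prior for a binomial control rate.

    hist:  list of (successes, trials) pairs, one per historical dataset.
    a0:    list of fixed discounting weights a_{0k} in [0, 1].
    alpha0, beta0: parameters of the initial Beta prior pi_0.
    """
    a = alpha0 + sum(w * y for (y, n), w in zip(hist, a0))
    b = beta0 + sum(w * (n - y) for (y, n), w in zip(hist, a0))
    return a, b

# Two historical datasets, each discounted by half:
a, b = power_prior_beta_params([(30, 100), (24, 80)], [0.5, 0.5])
```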
By default, the power/type I error calculation algorithm assumes the null and alternative hypotheses are given by
$$H_0: \mu_t - \mu_c \ge \delta$$ and $$H_1: \mu_t - \mu_c < \delta,$$ where \(\delta\) is a prespecified constant. To test hypotheses of
the opposite direction, i.e., \(H_0: \mu_t - \mu_c \le \delta\) and \(H_1: \mu_t - \mu_c > \delta\), one can set the parameter nullspace.ineq to "<".
To determine Bayesian sample size, we estimate the quantity $$\beta_{sj}^{(n)}=E_s[I\{P(\mu_t-\mu_c<\delta|y^{(n)}, \pi^{(f)})\ge \gamma\}]$$
where \(\gamma > 0\) is a prespecified posterior probability threshold for rejecting the null hypothesis (e.g., \(0.975\)), the probability is computed with respect to the posterior distribution given the data
\(y^{(n)}\) and the fitting prior \(\pi^{(f)}\), and the expectation is taken with respect to the marginal distribution of \(y^{(n)}\)
defined based on the sampling prior \(\pi^{(s)}(\theta)\), where \(\theta=(\mu_t, \mu_c, \eta)\) and \(\eta\) denotes any nuisance parameter in the model.
Let \(\Theta_0\) and \(\Theta_1\) denote the parameter spaces corresponding to \(H_0\) and \(H_1\).
Let \(\pi_0^{(s)}(\theta)\) denote a sampling prior that puts mass in the null region, i.e., \(\theta \in \Theta_0\).
Let \(\pi_1^{(s)}(\theta)\) denote a sampling prior that puts mass in the alternative region, i.e., \(\theta \in \Theta_1\).
Then \(\beta_{s0}^{(n)}\) corresponding to \(\pi^{(s)}(\theta)=\pi_0^{(s)}(\theta)\) is a Bayesian type I error,
while \(\beta_{s1}^{(n)}\) corresponding to \(\pi^{(s)}(\theta)=\pi_1^{(s)}(\theta)\) is a Bayesian power.
We compute \(n_{\alpha_0} = \min\{n: \beta_{s0}^{(n)} \le \alpha_0\}\) and \(n_{\alpha_1} = \min\{n: \beta_{s1}^{(n)} \ge 1-\alpha_1\}\).
Then the Bayesian sample size is \(\max\{n_{\alpha_0}, n_{\alpha_1}\}\). Choosing \(\alpha_0=0.05\) and \(\alpha_1=0.2\)
guarantees that the Bayesian type I error rate is at most \(0.05\) and the Bayesian power is at least \(0.8\).
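The sample size determination itself reduces to scanning a grid of \(n\) values for the smallest \(n\) meeting each constraint. A minimal Python sketch, where beta_s0 and beta_s1 are hypothetical placeholders for Monte Carlo estimators of \(\beta_{s0}^{(n)}\) and \(\beta_{s1}^{(n)}\) (and both are assumed to be roughly monotone in \(n\)):

```python
def bayesian_sample_size(beta_s0, beta_s1, alpha0=0.05, alpha1=0.2,
                         n_grid=range(10, 501, 10)):
    """Smallest n meeting both the type I error and power constraints.

    beta_s0(n), beta_s1(n): callables estimating beta_{s0}^{(n)} and
    beta_{s1}^{(n)}, e.g. via the simulation algorithm described in the
    text. Raises StopIteration if no n on the grid qualifies.
    """
    n_a0 = next(n for n in n_grid if beta_s0(n) <= alpha0)       # type I error
    n_a1 = next(n for n in n_grid if beta_s1(n) >= 1 - alpha1)   # power
    return max(n_a0, n_a1)
```

In practice each call to the estimators is itself a Monte Carlo run, so a coarse grid with a refinement pass is cheaper than scanning every \(n\).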
To compute \(\beta_{sj}^{(n)}\), the following algorithm is used:

1. Generate \(\theta \sim \pi_j^{(s)}(\theta)\).
2. Generate \(y^{(n)} \sim f(y^{(n)}|\theta)\).
3. Compute \(P(\mu_t < \mu_c + \delta|y^{(n)}, \pi^{(f)})\).
4. Check whether \(P(\mu_t < \mu_c + \delta|y^{(n)}, \pi^{(f)}) \ge \gamma\).
5. Repeat Steps 1-4 \(N\) times.
6. Compute the proportion of the \(N\) simulated datasets for which \(P(\mu_t < \mu_c + \delta|y^{(n)}, \pi^{(f)}) \ge \gamma\) holds; this proportion estimates \(\beta_{sj}^{(n)}\).
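The steps above can be sketched in Python for a two-group binomial model with fixed \(a_0\), where conjugacy makes both posteriors Beta distributions. All names here are illustrative assumptions, not BayesPPD's API:

```python
import numpy as np

def estimate_beta(n, samp_prior, hist, a0, delta=0.0, gamma=0.95,
                  N=1000, M=2000, seed=1):
    """Monte Carlo estimate of beta_sj^{(n)} for two-group binomial data.

    samp_prior: array of shape (num_draws, 2) with rows (mu_t, mu_c)
                drawn from a sampling prior pi_j^{(s)}.
    hist:       list of historical (successes, trials) for the control group.
    a0:         fixed discounting weights, folded into a conjugate Beta
                posterior under Beta(1, 1) initial/fitting priors.
    """
    rng = np.random.default_rng(seed)
    # Power prior contribution from the discounted historical control data
    ah = sum(w * y for (y, nh), w in zip(hist, a0))
    bh = sum(w * (nh - y) for (y, nh), w in zip(hist, a0))
    hits = 0
    for _ in range(N):
        mu_t, mu_c = samp_prior[rng.integers(len(samp_prior))]  # Step 1
        yt = rng.binomial(n, mu_t)                              # Step 2
        yc = rng.binomial(n, mu_c)
        post_t = rng.beta(1 + yt, 1 + n - yt, M)                # Step 3
        post_c = rng.beta(1 + ah + yc, 1 + bh + n - yc, M)
        if np.mean(post_t < post_c + delta) >= gamma:           # Step 4
            hits += 1
    return hits / N                                             # Step 6
```

With a sampling prior concentrated in the alternative region, the returned proportion estimates the Bayesian power; concentrated in the null region, it estimates the Bayesian type I error.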
For positive continuous data assumed to follow an exponential distribution, the hypotheses are given by $$H_0: \mu_t/\mu_c \ge \delta$$ and $$H_1: \mu_t/\mu_c < \delta,$$ where \(\mu_t\) and \(\mu_c\) are the hazard rates for the treatment and the control group, respectively. The definition of \(\beta_{sj}^{(n)}\) and the algorithm change accordingly.
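For the exponential case, the posterior probability of the hazard-ratio hypothesis can likewise be approximated by Monte Carlo using conjugate Gamma posteriors. This is a hedged sketch; the Gamma(1, 0.01) priors below are an arbitrary illustrative choice, not BayesPPD defaults:

```python
import numpy as np

def post_prob_hazard_ratio(yt, yc, delta=1.0, M=5000, seed=1):
    """P(mu_t / mu_c < delta | data) for exponential outcomes, assuming
    independent Gamma(shape=1, rate=0.01) priors on each hazard.
    yt, yc: arrays of observed times for treatment and control."""
    rng = np.random.default_rng(seed)
    # Conjugate Gamma posteriors; numpy parameterizes by shape and scale
    lam_t = rng.gamma(1 + len(yt), 1.0 / (0.01 + yt.sum()), M)
    lam_c = rng.gamma(1 + len(yc), 1.0 / (0.01 + yc.sum()), M)
    return np.mean(lam_t / lam_c < delta)
```

Comparing this probability to \(\gamma\) replaces Steps 3-4 of the algorithm when outcomes are exponential.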
If there are covariates to adjust for, we assume the first column of the covariate matrix is the treatment indicator, and the corresponding parameter is \(\beta_1\), which, for example, corresponds to a difference in means for the linear regression model and a log hazard ratio for the exponential regression model. The hypotheses are given by $$H_0: \beta_1 \ge \delta$$ and $$H_1: \beta_1 < \delta.$$ The definition of \(\beta_{sj}^{(n)}\) and the algorithm change accordingly.
By default, the package assumes the historical data are
composed of control group subjects only. To use historical data to inform the treatment effect, one can set borrow.treat=TRUE
and include the treatment indicator in the historical covariate matrix.
This implementation of the method does not assume any particular distributional form for the sampling priors. The user can specify a vector or matrix of samples for \(\theta\) (a matrix if \(\theta\) has dimension \(>1\)) drawn from any distribution, and the algorithm samples with replacement from this vector or matrix at each iteration of data simulation. To accurately approximate a joint distribution for multiple parameters, the number of samples provided should be large (e.g., 10,000).
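For instance, a discrete approximation to a joint sampling prior might be supplied and resampled as follows (a Python sketch; the truncated-normal choice is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A user-supplied sampling prior: 10,000 draws of theta = (mu_t, mu_c),
# here generated from clipped normals purely for illustration.
theta = np.column_stack([
    rng.normal(0.3, 0.05, 10_000).clip(0, 1),   # mu_t
    rng.normal(0.5, 0.05, 10_000).clip(0, 1),   # mu_c
])

# At each data-simulation iteration, one row is drawn with replacement:
row = theta[rng.integers(len(theta))]
```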
Gibbs sampling is used for normally distributed data. Slice sampling is used for all other data distributions. For two group models with fixed \(a_0\), numerical integration using the RcppNumerical package is used.
Chen, Ming-Hui, et al. "Bayesian design of noninferiority trials for medical devices using historical data." Biometrics 67.3 (2011): 1163-1170.