This function calculates the ‘empiric’ power of 2-stage BE studies according to Potvin et al. ‘Method B/C’ via simulations. The Potvin methods are modified to include a futility criterion for the point estimate or for its 90% CI, and to allow the sample size estimation step to be done with the point estimate (PE) and MSE of stage 1.
power.tsd.fC(method = c("B", "C", "B0"), alpha0 = 0.05, alpha = c(0.0294, 0.0294),
n1, CV, GMR, targetpower = 0.8, pmethod = c("nct", "exact", "shifted"),
usePE = FALSE, powerstep = TRUE, min.n2=0, max.n=Inf,
fCrit=c("CI", "PE"), fClower, fCupper, theta0, theta1, theta2,
npct = c(0.05, 0.5, 0.95), nsims, setseed = TRUE, details = FALSE)
Decision schemes according to Potvin et al. (defaults to "B").
Montague’s ‘Method D’ can be obtained by choosing "C" but setting alpha=c(0.028, 0.028).
‘Method E’ of Xu et al. can be obtained by choosing "B" and setting the alphas, the futility criterion "CI", max.n, and n1 according to the reference.
‘Method F’ can be obtained by choosing "C" with the appropriate design settings according to the reference.
method="B0" uses the decision scheme of Zheng et al. MSDBE (modified sequential design for BE studies), which differs from "B", in case of different alphas, with respect to power monitoring and the BE decision if power >= target power.
Alpha value for the first step(s) in Potvin "C": the power inspection and the BE decision if power > targetpower. Defaults to 0.05.
Is only observed if method="C".
Vector (two elements) of the nominal alphas for the two stages. Defaults to Pocock’s setting alpha=c(0.0294, 0.0294).
Common values together with other arguments are:
rep(0.0294, 2): Potvin et al. ‘Method B’ (fCrit="CI", fCupper=Inf)
rep(0.0269, 2): Fuglsang ‘Method C/D’ (method="C", GMR=0.9, targetpower=0.9, fCrit="CI", fCupper=Inf)
rep(0.0274, 2): Fuglsang ‘Method C/D’ (method="C", targetpower=0.9, fCrit="CI", fCupper=Inf)
rep(0.0280, 2): Montague et al. ‘Method D’ (method="C", GMR=0.9, fCrit="CI", fCupper=Inf)
rep(0.0284, 2): Fuglsang ‘Method B’ (GMR=0.9, targetpower=0.9, fCrit="CI", fCupper=Inf)
rep(0.0304, 2): Kieser & Rauch (fCrit="CI", fCupper=Inf)
c(0.01, 0.04): Zheng et al. ‘MSDBE’ (method="B0", fCrit="CI", fCupper=Inf)
c(0.0249, 0.0357): Xu et al. ‘Method E’ for CV 10--30% (fCrit="CI", fClower=0.9374, max.n=42)
c(0.0254, 0.0363): Xu et al. ‘Method E’ for CV 30--55% (fCrit="CI", fClower=0.9305, max.n=42)
c(0.0248, 0.0364): Xu et al. ‘Method F’ for CV 10--30% (method="C", fCrit="CI", fClower=0.9492, max.n=180)
c(0.0259, 0.0349): Xu et al. ‘Method F’ for CV 30--55% (method="C", fCrit="CI", fClower=0.9305, max.n=180)
Sample size of stage 1. For Xu’s methods the recommended sample size should be at least 18 (if CV 10--30%) or 48 (if CV 30--55%).
Coefficient of variation of the intra-subject variability (use e.g., 0.3 for 30%).
Ratio T/R to be used in the decision scheme (power calculations in stage 1 and sample size estimation for stage 2).
Power threshold in the power monitoring steps and power to achieve in the sample size estimation step.
Power calculation method, also to be used in the sample size estimation for stage 2.
Implemented are "nct" (approximate calculation via the non-central t-distribution), "exact" (exact calculation via Owen’s Q), and "shifted" (approximate calculation via the shifted central t-distribution, as in the paper of Potvin et al.).
Defaults to "nct" as a reasonable compromise between speed and accuracy in the sample size estimation step.
If TRUE the sample size estimation step is done with the MSE and PE of stage 1.
Defaults to FALSE, i.e., the sample size is estimated with the anticipated (fixed) GMR given as argument and the MSE of stage 1 (analogous to Potvin et al.).
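A minimal sketch contrasting the two settings (illustrative CV and n1 values; assumes the package providing power.tsd.fC is loaded):

```r
# Sample size re-estimation with the fixed, anticipated GMR (the default) ...
power.tsd.fC(CV = 0.25, n1 = 24, usePE = FALSE)
# ... versus re-estimation with the PE and MSE observed in stage 1
power.tsd.fC(CV = 0.25, n1 = 24, usePE = TRUE)
```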
If TRUE (the default) the interim power monitoring step in the stage 1 evaluation of ‘Method B’ will be done as described in Potvin et al.
Setting this argument to FALSE will omit this step.
Has no effect if method="C" is chosen.
Minimum sample size of stage 2. Defaults to zero.
If the sample size estimation step gives N < n1 the sample size for stage 2 will be forced to min.n2, i.e., the total sample size to n1+min.n2.
If max.n is set to a finite value the re-estimated total sample size (N) is set to min(max.n, N).
Defaults to Inf, which is equivalent to not constraining the re-estimated sample size.
Attention! max.n here is not a futility criterion like Nmax in other functions of this package.
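For example, both sample size constraints could be combined as follows (a sketch with illustrative values; assumes the package providing power.tsd.fC is loaded):

```r
# Force at least 12 subjects in stage 2 and cap the re-estimated total
# sample size at 60; note that max.n is a constraint, not a futility rule
power.tsd.fC(CV = 0.25, n1 = 24, min.n2 = 12, max.n = 60)
```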
Futility criterion.
If set to "PE" the study stops after stage 1 if not BE and if the point estimate (PE) of the stage 1 evaluation is outside the range defined by the next two arguments, fClower and fCupper.
If set to "CI" the study stops after stage 1 if not BE and if the confidence interval of the stage 1 evaluation is outside the range defined by the next two arguments.
Defaults to "PE".
Futility criterion to use for the PE or CI.
Lower futility limit for the PE or CI of stage 1.
If the PE or CI is outside fClower … fCupper the study is stopped in the interim with the result FAIL (not BE).
May be missing; defaults then to 0.8 if fCrit="PE" or 0.925 if fCrit="CI".
Upper futility limit for the PE or CI of stage 1.
Will be set to 1/fClower if missing.
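A sketch of a non-default futility range (illustrative values; assumes the package providing power.tsd.fC is loaded):

```r
# Stop for futility if the stage 1 PE lies outside 0.85 ... 1/0.85
# (fCupper defaults to 1/fClower when missing)
power.tsd.fC(CV = 0.25, n1 = 24, fCrit = "PE", fClower = 0.85)
```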
Assumed ratio of geometric means (T/R) for simulations. If missing, defaults to GMR.
Lower bioequivalence limit. Defaults to 0.8.
Upper bioequivalence limit. Defaults to 1.25.
Percentiles to be used for the presentation of the distribution of n(total)=n1+n2.
Defaults to c(0.05, 0.5, 0.95) to obtain the 5% and 95% percentiles and the median.
Number of studies to simulate.
If missing, nsims is set to 1E+05 = 100,000, or to 1E+06 = 1 million if estimating the empiric type I error ('alpha'), i.e., with theta0 at the border of or outside the acceptance range theta1 … theta2.
Simulations depend on the starting point of the (pseudo) random number generator.
To avoid differences in power between runs, set.seed(1234567) is issued if setseed=TRUE, the default.
Set this argument to FALSE to view the variation in power between different runs.
If set to TRUE the function prints the results of time measurements of the simulation steps. Defaults to FALSE.
Returns an object of class "pwrtsd" with all the input arguments and results as components.
The class "pwrtsd" has an S3 print method.
The results are in the components:
Fraction of studies found BE.
Fraction of studies found BE in stage 1.
Percentage of studies continuing to stage 2.
Mean of n(total), aka average total sample size (ASN).
Range (min, max) of n(total).
Percentiles of the distribution of n(total).
Object of class "table" summarizing the discrete distribution of n(total) via its distinct values and counts of occurrences of these values.
This component is only given back if usePE==FALSE or usePE==TRUE & fClower>0 & is.finite(fCupper), i.e., if a futility range is used.
The calculations follow in principle the simulations as described in Potvin et al. The underlying subject data are assumed to be evaluated after log-transformation. However, instead of simulating subject data, the statistics pe1, mse1 and pe2, SS2 are simulated via their associated distributions (normal and χ² distributions).
Potvin D, DiLiberti CE, Hauck WW, Parr AF, Schuirmann DJ, Smith RA. Sequential design approaches for bioequivalence studies with crossover designs. Pharm Stat. 2008; 7(4):245--62. 10.1002/pst.294
Montague TH, Potvin D, DiLiberti CE, Hauck WW, Parr AF, Schuirmann DJ. Additional results for ‘Sequential design approaches for bioequivalence studies with crossover designs’. Pharm Stat. 2011; 11(1):8--13. 10.1002/pst.483
Fuglsang A. Sequential Bioequivalence Trial Designs with Increased Power and Controlled Type I Error Rates. AAPS J. 2013; 15(3):659--61. 10.1208/s12248-013-9475-5
Schütz H. Two-stage designs in bioequivalence trials. Eur J Clin Pharmacol. 2015; 71(3):271--81. 10.1007/s00228-015-1806-2
Kieser M, Rauch G. Two-stage designs for cross-over bioequivalence trials. Stat Med. 2015; 34(16):2403--16. 10.1002/sim.6487
Zheng Ch, Zhao L, Wang J. Modifications of sequential designs in bioequivalence trials. Pharm Stat. 2015; 14(3):180--8. 10.1002/pst.1672
Xu J, Audet C, DiLiberti CE, Hauck WW, Montague TH, Parr AF, Potvin D, Schuirmann DJ. Optimal adaptive sequential designs for crossover bioequivalence studies. Pharm Stat. 2016; 15(1):15--27. 10.1002/pst.1721
# using all the defaults
power.tsd.fC(CV = 0.25, n1 = 24)
# run-time ~1 sec

# as above, but storing the results
res <- power.tsd.fC(CV = 0.25, n1 = 24)
# representation of the discrete distribution of n(total)
# via the plot method of an object with class "table", which creates a
# 'needle' plot
plot(res$ntable/sum(res$ntable), ylab = "Density",
     xlab = expression("n"[total]), las = 1,
     main = expression("Distribution of n"[total]))