Derives the power of a two-arm clinical trial under a group sequential design. Allows an arbitrary number of interim analyses, arbitrary specification of the arm-0/arm-1 time-to-event distributions (via survival or hazard), arm-0/arm-1 censoring distributions, and provisions for two types of continuous-time non-compliance, occurring at an arm-0/arm-1 specific rate and followed by a switch to a new hazard rate. Allows analyses using (I) the weighted log-rank statistic, with weighting function (a) a member of the Fleming-Harrington G-Rho class, (b) a stopped version thereof, or (c) the ramp-plateau deterministic weights, or (II) the integrated survival distance (currently under method=="S" without futility only). Stopping boundaries are computed via the Lan-Demets method, the Haybittle method, converted from the stochastic curtailment procedure, or completely specified by the user. The Lan-Demets boundaries can be constructed using O'Brien-Fleming, Pocock, or Wang-Tsiatis power alpha-spending. The C kernel is readily extensible, and further options will become available in the near future.
PwrGSD(EfficacyBoundary = LanDemets(alpha = 0.05, spending = ObrienFleming),
FutilityBoundary = LanDemets(alpha = 0.1, spending = ObrienFleming),
NonBindingFutility = TRUE, sided = c("2>", "2<", "1>", "1<"), ...)
This specifies the method used to construct the efficacy boundary. The available choices are:
(i) LanDemets(alpha=<total type I error>, spending=<spending function>). The Lan-Demets method is based upon an error probability spending approach. The spending function can be set to ObrienFleming, Pocock, or Power(rho), where rho is the power argument for the power spending function: rho=3 is roughly equivalent to the O'Brien-Fleming spending function, and smaller powers result in a less conservative spending function.
(ii) Haybittle(alpha=<total type I error>, b.Haybittle=<user specified boundary point>). The Haybittle approach is conceptually the simplest of all methods for efficacy boundary construction. However, as it spends almost no alpha until the end, it is for all practical purposes equivalent to a single-analysis design and can be considered overly conservative. This method sets all the boundary points equal to b.Haybittle, a user-specified value (try 3), for all analyses except the last, which is calculated so as to result in the total type I error, set with the argument alpha.
(iii) SC(be.end=<efficacy boundary point at trial end>, prob=<threshold for conditional type I error for efficacy stopping>). The stochastic curtailment method is based upon the conditional probability of type I error given the current value of the statistic. Under this method, a sequence of boundary points on the standard normal scale (as are boundary points under all other methods) is calculated so that the total probability of type I error is maintained. This is done by considering the joint probabilities of continuing to the current analysis and then exceeding the threshold at the current analysis. A good value for the threshold on the conditional type I error, prob, is 0.90 or greater.
(iv) User-supplied boundary points in the form c(b1, b2, b3, ..., b_m), where m is the number of looks.
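The four choices above can be sketched as call fragments; the numeric values below are purely illustrative, and evaluating them requires the PwrGSD package:

```r
## Hypothetical EfficacyBoundary specifications (illustrative values only;
## requires the PwrGSD package, so shown here as comments):
# EfficacyBoundary = LanDemets(alpha = 0.05, spending = ObrienFleming)
# EfficacyBoundary = LanDemets(alpha = 0.05, spending = Power(3))
# EfficacyBoundary = Haybittle(alpha = 0.05, b.Haybittle = 3)
# EfficacyBoundary = c(3.09, 2.71, 2.41, 1.97)   # user supplied, m = 4 looks
```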
This specifies the method used to construct the futility boundary. The available choices are:
(i) LanDemets(alpha=<total type II error>, spending=<spending function>). The Lan-Demets method is based upon an error probability spending approach. The spending function can be set to ObrienFleming, Pocock, or Power(rho), where rho is the power argument for the power spending function: rho=3 is roughly equivalent to the O'Brien-Fleming spending function, and smaller powers result in a less conservative spending function.
NOTE: there is no implementation of the Haybittle method for futility boundary construction. Given that the futility boundary depends upon values of the drift function, this method doesn't apply.
(ii) SC(be.end=<efficacy boundary point at trial end>, prob=<threshold for conditional type II error for futility stopping>, drift.end=<projected drift at end of trial>). The stochastic curtailment method is based upon the conditional probability of type II error given the current value of the statistic. Under this method, a sequence of boundary points on the standard normal scale (as are boundary points under all other methods) is calculated so that the total probability of type II error is maintained. This is done by considering the joint probabilities of continuing to the current analysis and then exceeding the threshold at the current analysis. A good value for the threshold on the conditional type II error, prob, is 0.90 or greater.
(iii) User-supplied boundary points in the form c(b1, b2, b3, ..., b_m), where m is the number of looks.
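These choices map onto the FutilityBoundary argument in the same way; again the values are illustrative only:

```r
## Hypothetical FutilityBoundary specifications (illustrative values only;
## requires the PwrGSD package, so shown here as comments):
# FutilityBoundary = LanDemets(alpha = 0.1, spending = Power(2))
# FutilityBoundary = SC(be.end = 1.96, prob = 0.90, drift.end = 3.5)
# FutilityBoundary = c(0.1, 0.4, 0.9, 1.3)   # user supplied, m = 4 looks
```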
When using a futility boundary and this is set to 'TRUE', the efficacy boundary is constructed in the absence of the futility boundary, and the futility boundary is then constructed given the resulting efficacy boundary. This results in a more conservative efficacy boundary with true type I error less than the nominal level. This is recommended because futility crossings are viewed by DSMBs with much less gravity than an efficacy crossing; the consensus is therefore that efficacy bounds should not be discounted towards the null hypothesis because of paths which cross a futility boundary. Default value is 'TRUE'.
Set to “2>” (quoted) for two-sided tests of the null hypothesis when a positive drift corresponds to efficacy. Set to “2<” (quoted) for two-sided tests of the null hypothesis when a negative drift corresponds to efficacy. Set to “1>” or “1<” for one-sided tests of H0 when efficacy corresponds to a positive or negative drift, respectively. If method==“S” then this must be of the same length as StatType, because the interpretation of sided differs depending upon whether StatType==“WLR” (negative is benefit) or StatType==“ISD” (positive is benefit).
Determines how to calculate the power. Set to “A” (Asymptotic method, the default) or “S” (Simulation method).
The upper endpoint of the accrual period beginning with time 0.
The rate of accrual per unit of time.
The times of planned interim analyses.
Left hand endpoints for intervals upon which the arm-0 specific mortality is constant. The last given component is the left hand endpoint of the interval having right hand endpoint infinity.
A vector of the same length as tcut0 which specifies the piecewise constant arm-0 mortality rate.
Alternatively, the arm-0 mortality distribution can be supplied via this argument, in terms of the corresponding survival function values at the times given in the vector tcut0. If s0 is supplied, then h0 is derived internally, assuming the piecewise exponential distribution. If you specify s0, the first element must be 1, and correspondingly, the first component of tcut0 will be the lower support point of the distribution. You must supply either h0 or s0 but not both.
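The piecewise exponential relationship between h0 and s0 can be illustrated in base R; the cutpoints and rates below are hypothetical, not taken from any trial:

```r
## Sketch (base R; tcut0 and h0 values are hypothetical): the survival
## function implied by a piecewise constant hazard -- the same piecewise
## exponential relationship used internally when h0 is derived from s0.
tcut0 <- 0:4
h0    <- c(0.010, 0.010, 0.020, 0.020, 0.030)  # rate on [tcut0[i], tcut0[i+1])
s0    <- exp(-c(0, cumsum(h0[-length(h0)] * diff(tcut0))))
s0[1]  # the first element is always 1, as required above
```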
Left hand endpoints for intervals upon which the arm-1 specific mortality is constant. The last given component is the left hand endpoint of the interval having right hand endpoint infinity.
A vector of piecewise constant arm-1 versus arm-0 mortality rate ratios.
If tcut1 and tcut0 are not identical, then tcut1, h0, and rhaz are internally rederived at the union of the sequences tcut0 and tcut1. In all cases the arm-1 mortality rate is then derived at the time cutpoints tcut1 as rhaz times h0.
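For example (a base R sketch with hypothetical rates), when tcut1 coincides with tcut0 the arm-1 rate is just the elementwise product:

```r
## Sketch (hypothetical rates): arm-1 hazard derived from the arm-0 hazard
## and the rate ratios when tcut1 and tcut0 coincide.
h0   <- c(0.010, 0.010, 0.020, 0.020, 0.030)
rhaz <- c(1.00, 0.90, 0.80, 0.80, 0.70)
h1   <- rhaz * h0  # arm-1 piecewise constant mortality rate
```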
Alternatively, the arm-1 mortality distribution can be supplied via this argument by specifying the piecewise constant arm-1 mortality rate. See the comments above.
Alternatively, the arm-1 mortality distribution can be supplied via this argument, in terms of the corresponding survival function values at the times given in the vector tcut1. Comments regarding s0 above apply here as well. You must supply exactly one of the following: h1, rhaz, or s1.
Left hand endpoints for intervals upon which the arm-0 specific censoring distribution hazard function is constant. The last given component is the left hand endpoint of the interval having right hand endpoint infinity.
A vector of the same length as tcutc0 which specifies the arm-0 censoring distribution in terms of a piecewise constant hazard function.
Alternatively, the arm-0 censoring distribution can be supplied via this argument, in terms of the corresponding survival function values at the times given in the vector tcutc0. See comments above. You must supply either hc0 or sc0 but not both.
Left hand endpoints for intervals upon which the arm-1 specific censoring distribution hazard function is constant. The last given component is the left hand endpoint of the interval having right hand endpoint infinity.
A vector of the same length as tcutc1 which specifies the arm-1 censoring distribution in terms of a piecewise constant hazard function.
Alternatively, the arm-1 censoring distribution can be supplied via this argument, in terms of the corresponding survival function values at the times given in the vector tcutc1. See comments above. You must supply either hc1 or sc1 but not both.
(i) Setting noncompliance to “none” for no non-compliance will automatically set the non-compliance arguments, below, to values appropriate for no non-compliance. This requires no additional user specification of non-compliance parameters. (ii) Setting noncompliance to “crossover” will automatically set crossover values in the arm 0/1 specific post-cause-B-delay-mortality, i.e. a simple interchange of the arm 0 and arm 1 mortalities. The user is required to specify all parameters corresponding to the arm 0/1 specific cause-B-delay distributions. The cause-A-delay and post-cause-A-delay-mortality are automatically set so as not to influence the calculations. (iii) Setting noncompliance to “mixed” will set the arm 0/1 specific post-cause-B-delay-mortality distributions for crossover as defined above. The user specifies the arm 0/1 specific cause-B-delay distributions as above and, in addition, all parameters related to the arm 0/1 specific cause-A-delay distributions and the corresponding arm 0/1 specific post-cause-A-delay-mortality distributions. (iv) Setting noncompliance to “user” requires the user to specify all non-compliance distribution parameters.
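In summary, the four modes can be sketched as call fragments (illustrative only; the required delay/mortality arguments are documented below):

```r
## The four noncompliance modes described above, as call fragments
## (illustrative only; requires the PwrGSD package):
# noncompliance = none       # no non-compliance; delay arguments auto-set
# noncompliance = crossover  # specify cause-B delays; arm mortalities interchange
# noncompliance = mixed      # cause-B delays as above, plus all cause-A arguments
# noncompliance = user       # specify all non-compliance distributions
```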
Left hand endpoints for intervals upon which the arm-0 specific
cause-A delay distribution hazard function is constant. The last given
component is the left hand endpoint of the interval having right hand endpoint
infinity. Required only when noncompliance
is set to “mixed” or
“user”.
A vector of the same length as tcutd0A containing piecewise constant hazard rates for the arm-0 cause-A delay distribution. Required only when noncompliance is set to “mixed” or “user”.
When required, the arm-0 cause-A-delay distribution is alternately specified via a survival function. A vector of the same length as tcutd0A.
Left hand endpoints for intervals upon which the arm-0 specific
cause-B delay distribution hazard function is constant. The last given
component is the left hand endpoint of the interval having right hand endpoint
infinity. Always required when noncompliance
is set to any value other than
“none”.
A vector of the same length as tcutd0B containing piecewise constant hazard rates for the arm-0 cause-B delay distribution. Always required when noncompliance is set to any value other than “none”.
When required, the arm-0 cause-B-delay distribution is alternately specified via a survival function. A vector of the same length as tcutd0B.
Left hand endpoints for intervals upon which the arm-1 specific
cause-A delay distribution hazard function is constant. The last given
component is the left hand endpoint of the interval having right hand endpoint
infinity. Required only when noncompliance
is set to “mixed” or
“user”.
A vector of the same length as tcutd1A containing piecewise constant hazard rates for the arm-1 cause-A delay distribution. Required only when noncompliance is set to “mixed” or “user”.
When required, the arm-1 cause-A-delay distribution is alternately specified via a survival function. A vector of the same length as tcutd1A.
Left hand endpoints for intervals upon which the arm-1 specific
cause-B delay distribution hazard function is constant. The last given
component is the left hand endpoint of the interval having right hand endpoint
infinity. Always required when noncompliance
is set to any value other than
“none”.
A vector of the same length as tcutd1B containing piecewise constant hazard rates for the arm-1 cause-B delay distribution. Always required when noncompliance is set to any value other than “none”.
When required, the arm-1 cause-B-delay distribution is alternately specified via a survival function. A vector of the same length as tcutd1B.
Left hand endpoints for intervals upon which the arm-0 specific
post-cause-A-delay-mortality rate is constant. The last given component is the
left hand endpoint of the interval having right hand endpoint infinity. Required only
when noncompliance
is set to “mixed” or “user”.
A vector of the same length as tcutx0A
containing the arm-0
post-cause-A-delay mortality rates. Required only when noncompliance
is
set to “mixed” or “user”.
When required, the arm-0 post-cause-A-delay mortality distribution is alternately specified via a survival function. A vector of the same length as tcutx0A.
Left hand endpoints for intervals upon which the arm-0 specific
post-cause-B-delay-mortality rate is constant. The last given component is the
left hand endpoint of the interval having right hand endpoint infinity. Always
required when noncompliance
is set to any value other than “none”.
A vector of the same length as tcutx0B
containing the arm-0
post-cause-B-delay mortality rates. Always required when noncompliance
is set to any value other than “none”.
When required, the arm-0 post-cause-B-delay mortality distribution is alternately specified via a survival function. A vector of the same length as tcutx0B.
Left hand endpoints for intervals upon which the arm-1 specific
post-cause-A-delay-mortality rate is constant. The last given component is the
left hand endpoint of the interval having right hand endpoint infinity. Required only
when noncompliance
is set to “mixed” or “user”.
A vector of the same length as tcutx1A
containing the arm-1
post-cause-A-delay mortality rates. Required only when noncompliance
is
set to “mixed” or “user”.
When required, the arm-1 post-cause-A-delay mortality distribution is alternately specified via a survival function. A vector of the same length as tcutx1A.
Left hand endpoints for intervals upon which the arm-1 specific
post-cause-B-delay-mortality rate is constant. The last given component is the
left hand endpoint of the interval having right hand endpoint infinity. Always
required when noncompliance
is set to any value other than “none”.
A vector of the same length as tcutx1B
containing the arm-1
post-cause-B-delay mortality rates. Always required when noncompliance
is set to any value other than “none”.
When required, the arm-1 post-cause-B-delay mortality distribution is alternately specified via a survival function. A vector of the same length as tcutx1B.
Should the conversion to post-noncompliance mortality be gradual? Under the default behavior, gradual=FALSE, there is an immediate conversion to the post-noncompliance mortality rate function. If gradual is set to TRUE, then this conversion is done “gradually”. In truth, at the individual level, the new mortality rate function is a convex combination of the pre-noncompliance mortality and the post-noncompliance mortality, with the weighting in proportion to the time spent in compliance with the study arm protocol.
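The convex-combination idea can be sketched in base R; all rates and times below are hypothetical, chosen only to illustrate the weighting:

```r
## Sketch of the "gradual" conversion described above (hypothetical rates and
## times): the current rate is a convex combination of the pre- and
## post-noncompliance rates, weighted by the fraction of time on protocol.
h.pre    <- 0.010  # on-protocol mortality rate
h.post   <- 0.020  # post-noncompliance mortality rate
t.switch <- 3      # time of noncompliance
t.now    <- 5      # current time
w     <- t.switch / t.now              # proportion of time spent in compliance
h.now <- w * h.pre + (1 - w) * h.post  # lies between h.pre and h.post
```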
Specifies the name of a weighting function (of time) for assigning relative weights to events according to the times at which they occur. The default, “FH”, a two-parameter weight function, specifies the ‘Fleming-Harrington’ G-rho family of weighting functions, defined as the pooled arm survival function (Kaplan-Meier estimate) raised to the power g times its complement raised to the power rho. Note that g=rho=0 corresponds to the unweighted log-rank statistic. A second choice is the “SFH” function (for ‘Stopped Fleming-Harrington’), meaning that the “FH” weights are capped at their value at a user-specified time; this has a total of 3 parameters. A third choice is Ramp(tcut). Under this choice, weights are assigned in a linear manner from time 0 until a user-specified cut-off time, tcut, after which events are weighted equally. It is possible to conduct computations on nstat candidate statistics within a single run. In this case, WtFun should be a character vector of length nstat having components set from among the available choices.
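The “FH” weight itself is simple to compute; a base R sketch with hypothetical pooled Kaplan-Meier values:

```r
## Sketch (base R; pooled KM values are hypothetical): the Fleming-Harrington
## G-rho weight described above, w(t) = S(t)^g * (1 - S(t))^rho.
S  <- c(1.00, 0.95, 0.88, 0.80)  # pooled Kaplan-Meier estimate at event times
g  <- 0; rho <- 1
w  <- S^g * (1 - S)^rho          # rho > 0 emphasizes late events
w0 <- S^0 * (1 - S)^0            # g = rho = 0: all weights 1 (unweighted log-rank)
```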
A vector containing all the weight function parameters, in the order determined by that of “WtFun”. For example, if WtFun is set to c("FH","SFH","Ramp") then ppar should be a vector of length six, with the “FH” parameters in the first two elements, the “SFH” parameters in the next 3 elements, and the “Ramp” parameter in the last element.
The relative risk corresponding to the alternative hypothesis that is required in the construction of the futility boundary. Required if Boundary.Futility is set to a non-null value.
When the test statistic is something other than the unweighted log-rank statistic, the variance information, i.e. the ratio of the variance at interim analysis to the variance at the end of trial, is something other than the ratio of events at interim analysis to events at trial end. The problem is that in practice one doesn't necessarily have a good idea what the end-of-trial variance should be. In this case the user may wish to spend the type I and type II error probabilities according to a different time scale. Possible choices are “Variance” (default), which just uses the variance ratio scale; “Events”, which uses the events ratio scale; and “Hybrid(k)”, which makes a linear transition from the “Variance” scale to the “Events” scale beginning with analysis number k. The last choice, “Calendar”, uses the calendar time scale.
If a futility boundary is specified, what assumption should be made about the drift function (the mean value of the weighted log-rank statistic at analysis j normalized by the square root of the variance function at analysis k)? In practice we don't presume to know the shape of the drift function. Set to “one” or “Q”. The choice “one” results in a more conservative boundary.
If you specify method==“S”, then you must specify the number of simulations; 1000 should be sufficient.
If you specify method==“S” and want to see the full level of detail regarding arguments returned from the C-level code, specify detail=TRUE.
If you specify method==“S”, then the available choices are “WLR” (weighted log-rank) and “ISD” (integrated survival difference).
Works only when method==“S”. If a weighted log-rank statistic is specified without maximum information having been stipulated in the design, then certain functionals (the first and second moments of Q) must be projected. Setting this argument to TRUE includes this projection in the simulation runs.
Returns a value of class PwrGSD which has the components listed below. Note that the print method will display a summary table of estimated powers and type I errors as an nstat by 2 matrix. The summary method returns the same object invisibly, but after computing the summary table mentioned above; it is included in the returned value as a component TBL. See the examples below.
A length(tlook) by nstat matrix containing, in each column, the increment in power that resulted at that analysis time for the given statistic.
A length(tlook) by nstat matrix containing, in each column, the increment in type I error that resulted at that analysis time for the given statistic. Always sums to the total alpha specified in alphatot.
A list with components equal to the arguments of the C call, which correspond in a natural way to the arguments specified in the R call, along with the computed results in palpha0vec, palpha1vec, pinffrac, and mu. The first two are identical to dErrorI and dPower, explained above. The last two are length(tlook) by nstat matrices. For each statistic specified in par, the corresponding columns of pinffrac and mu contain the information fraction and drift at each of the analysis times.
the call
Gu, M.-G. and Lai, T.-L. (1999). Determination of power and sample size in the design of clinical trials with failure-time endpoints and interim analyses. Controlled Clinical Trials 20(5): 423-438.
Izmirlian, G. (2004). The PwrGSD package. NCI Div. of Cancer Prevention Technical Report.
Jennison, C. and Turnbull, B.W. (1999). Group Sequential Methods: Applications to Clinical Trials. Chapman & Hall/CRC, Boca Raton, FL.
Proschan, M.A., Lan, K.K.G. and Wittes, J.T. (2006), corr. 2nd printing (2008). Statistical Monitoring of Clinical Trials: A Unified Approach. Springer Verlag, New York.
Izmirlian, G. (2014). Estimation of the relative risk following group sequential procedure based upon the weighted log-rank statistic. Statistics and its Interface 7(1), 27-42.
## Not run:
library(PwrGSD)
test.example <-
PwrGSD(EfficacyBoundary = LanDemets(alpha = 0.05, spending = ObrienFleming),
FutilityBoundary = LanDemets(alpha = 0.1, spending = ObrienFleming),
RR.Futility = 0.82, sided="1<",method="A",accru =7.73, accrat =9818.65,
tlook =c(7.14, 8.14, 9.14, 10.14, 10.64, 11.15, 12.14, 13.14,
14.14, 15.14, 16.14, 17.14, 18.14, 19.14, 20.14),
tcut0 =0:19, h0 =c(rep(3.73e-04, 2), rep(7.45e-04, 3),
rep(1.49e-03, 15)),
tcut1 =0:19, rhaz =c(1, 0.9125, 0.8688, 0.7814, 0.6941,
0.6943, 0.6072, 0.5202, 0.4332, 0.6520,
0.6524, 0.6527, 0.6530, 0.6534, 0.6537,
0.6541, 0.6544, 0.6547, 0.6551, 0.6554),
tcutc0 =0:19, hc0 =c(rep(1.05e-02, 2), rep(2.09e-02, 3),
rep(4.19e-02, 15)),
tcutc1 =0:19, hc1 =c(rep(1.05e-02, 2), rep(2.09e-02, 3),
rep(4.19e-02, 15)),
tcutd0B =c(0, 13), hd0B =c(0.04777, 0),
tcutd1B =0:6, hd1B =c(0.1109, 0.1381, 0.1485, 0.1637, 0.2446,
0.2497, 0),
noncompliance =crossover, gradual =TRUE,
WtFun =c("FH", "SFH", "Ramp"),
ppar =c(0, 1, 0, 1, 10, 10))
## End(Not run)