Function for calculating the sample size needed to achieve a pre-specified power in the one-sided non-inferiority t-test for normally or log-normally distributed data.
sampleN.noninf(alpha = 0.025, targetpower = 0.8, logscale = TRUE,
               margin, theta0, CV, design = "2x2", robust = FALSE,
               details = FALSE, print = TRUE, imax = 100)
alpha: Type I error probability, significance level. Defaults here to 0.025.
targetpower: Power to achieve at least. Must be >0 and <1. Typical values are 0.8 or 0.9.
logscale: Should the data be analysed on the log-transformed or on the original scale? TRUE (default) or FALSE.
margin: Non-inferiority margin. In case of logscale=TRUE it must be given as a ratio, otherwise as a difference. Defaults to 0.8 if logscale=TRUE or to -0.2 if logscale=FALSE.
theta0: ‘True’ or assumed T/R ratio or difference (T - R). In case of logscale=TRUE it must be given as a ratio, otherwise as a difference. See examples. Defaults to 0.95 if logscale=TRUE or to 0.05 if logscale=FALSE.
CV: Coefficient of variation as a ratio. In case of cross-over studies this is the within-subject CV; in case of a parallel-group design it is the CV of the total variability.
design: Character string describing the study design. See known.designs() for the designs covered in this package.
robust: Defaults to FALSE, in which case the usual degrees of freedom are used. Set to TRUE to use the degrees of freedom according to the ‘robust’ evaluation (aka Senn's basic estimator). These df are calculated as n - seq. See known.designs()$df2 for the designs covered in this package. Has an effect only for higher-order crossover designs.
details: If TRUE, the design characteristics and the steps of the sample size search are shown. Defaults to FALSE.
print: If TRUE (default), the function prints its results. If FALSE, only the data.frame with the results is returned.
imax: Maximum number of steps in the sample size search. Defaults to 100. Adaptation is needed only in rare cases.
A data.frame with the input settings and results is returned.
Explore it with str(sampleN.noninf(...)).
The function does not vectorize properly. If you need sample sizes for varying CVs, use e.g. for-loops or the apply family of functions.
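Such a loop could be sketched as follows (a minimal example, assuming PowerTOST is installed and that the returned data.frame contains a "Sample size" column; the CV grid is hypothetical):

```r
library(PowerTOST)

# collect sample sizes for several within-subject CVs via sapply()
CVs <- c(0.20, 0.25, 0.30, 0.35)
n <- sapply(CVs, function(cv)
  sampleN.noninf(CV = cv, print = FALSE)[["Sample size"]])
data.frame(CV = CVs, n = n)
# the row with CV = 0.30 should show n = 48 (see examples below)
```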
The sample size is calculated via iterative evaluation of power.noninf().
The start value for the sample size search is taken from a large-sample approximation.
The sample size is constrained to a minimum of 4.
Notes on the underlying hypotheses
If the supplied margin is < 0 (logscale=FALSE) or < 1 (logscale=TRUE), it is assumed that higher response values are better. The hypotheses are
H0: theta0 <= margin vs. H1: theta0 > margin
where theta0 = mean(test) - mean(reference) if logscale=FALSE, or
H0: log(theta0) <= log(margin) vs. H1: log(theta0) > log(margin)
where theta0 = mean(test)/mean(reference) if logscale=TRUE.
If the supplied margin is > 0 (logscale=FALSE) or > 1 (logscale=TRUE), it is assumed that lower response values are better. The hypotheses are
H0: theta0 >= margin vs. H1: theta0 < margin
where theta0 = mean(test) - mean(reference) if logscale=FALSE, or
H0: log(theta0) >= log(margin) vs. H1: log(theta0) < log(margin)
where theta0 = mean(test)/mean(reference) if logscale=TRUE.
This latter case may also be considered as ‘non-superiority’.
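For the first (higher-is-better) case on the original scale, a call could look as follows — a sketch using the defaults for logscale=FALSE stated above (margin = -0.2, theta0 = 0.05), spelled out explicitly:

```r
library(PowerTOST)

# non-inferiority on the original (untransformed) scale:
# margin and theta0 are differences to the reference, not ratios
sampleN.noninf(CV = 0.3, logscale = FALSE,
               margin = -0.2, theta0 = 0.05)
```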
Julious SA. Sample sizes for clinical trials with Normal data. Stat Med. 2004;23(12):1921-86. doi:10.1002/sim.1783
# using all the defaults: margin = 0.8, theta0 = 0.95, alpha = 0.025,
# log-transformed data, design = "2x2"
sampleN.noninf(CV = 0.3)
# should give n = 48
#
# 'non-superiority' case, log-transformed data
# with assumed 'true' ratio somewhat above 1
sampleN.noninf(CV = 0.3, targetpower = 0.9,
               margin = 1.25, theta0 = 1.05)
# should give n = 62