
The nma
function fits network meta-analysis and (multilevel) network
meta-regression models in Stan.
nma(
  network,
  consistency = c("consistency", "ume", "nodesplit"),
  trt_effects = c("fixed", "random"),
  regression = NULL,
  class_interactions = c("common", "exchangeable", "independent"),
  likelihood = NULL,
  link = NULL,
  ...,
  nodesplit = get_nodesplits(network, include_consistency = TRUE),
  prior_intercept = .default(normal(scale = 100)),
  prior_trt = .default(normal(scale = 10)),
  prior_het = .default(half_normal(scale = 5)),
  prior_het_type = c("sd", "var", "prec"),
  prior_reg = .default(normal(scale = 10)),
  prior_aux = .default(),
  prior_aux_reg = .default(),
  aux_by = NULL,
  aux_regression = NULL,
  QR = FALSE,
  center = TRUE,
  adapt_delta = NULL,
  int_thin = 0,
  int_check = TRUE,
  mspline_degree = 3,
  n_knots = 7,
  knots = NULL,
  mspline_basis = NULL
)
nma() returns a stan_nma object, except when consistency = "nodesplit", in which case an nma_nodesplit or nma_nodesplit_df object is returned. nma.fit() returns a stanfit object.
network: An nma_data object, as created by the set_*() functions, combine_network(), or add_integration().

consistency: Character string specifying the type of (in)consistency model to fit, either "consistency", "ume", or "nodesplit".

trt_effects: Character string specifying either "fixed" or "random" effects.

regression: A one-sided model formula specifying the prognostic and effect-modifying terms for a regression model. Any references to treatment should use the .trt special variable, for example specifying effect modifier interactions as variable:.trt (see details).

class_interactions: Character string specifying whether effect modifier interactions are specified as "common", "exchangeable", or "independent".

likelihood: Character string specifying a likelihood; if unspecified, the likelihood is inferred from the data (see details).

link: Character string specifying a link function; if unspecified, the canonical link is used (see details).

...: Further arguments passed to sampling(), such as iter, chains, cores, etc.

nodesplit: For consistency = "nodesplit", the comparison(s) to split in the node-splitting model(s). Either a length-2 vector giving the treatments in a single comparison, or a 2-column data frame listing multiple treatment comparisons to split in turn. By default, all possible comparisons will be chosen (see get_nodesplits()).
prior_intercept: Specification of prior distribution for the intercept.

prior_trt: Specification of prior distribution for the treatment effects.

prior_het: Specification of prior distribution for the heterogeneity (if trt_effects = "random").

prior_het_type: Character string specifying whether the prior distribution prior_het is placed on the heterogeneity standard deviation ("sd", the default), variance ("var"), or precision ("prec").

prior_reg: Specification of prior distribution for the regression coefficients (if a regression formula is specified).

prior_aux: Specification of prior distribution for the auxiliary parameter, if applicable (see details). For likelihood = "gengamma" this should be a list of prior distributions with elements sigma and k.

prior_aux_reg: Specification of prior distribution for the auxiliary regression parameters, if aux_regression is specified (see details).

aux_by: Vector of variable names listing the variables to stratify the auxiliary parameters by. Currently only used for survival models, see details. Cannot be used with aux_regression.

aux_regression: A one-sided model formula giving a regression model for the auxiliary parameters. Currently only used for survival models, see details. Cannot be used with aux_by.
QR: Logical scalar (default FALSE), whether to apply a QR decomposition to the model design matrix.

center: Logical scalar (default TRUE), whether to center the (numeric) regression terms about the overall means.

adapt_delta: See adapt_delta for details.

int_thin: A single integer value, the thinning factor for returning cumulative estimates of integration error. Saving cumulative estimates is disabled by int_thin = 0, which is the default.

int_check: Logical, check sufficient accuracy of numerical integration by fitting half of the chains with n_int/2? When TRUE, Rhat and n_eff diagnostic warnings will be given if numerical integration has not sufficiently converged (suggesting increasing n_int in add_integration()). Default TRUE, but disabled (FALSE) when int_thin > 0.

mspline_degree: Non-negative integer giving the degree of the M-spline polynomial for likelihood = "mspline". Piecewise exponential hazards (likelihood = "pexp") are a special case with mspline_degree = 0.
n_knots: For mspline and pexp likelihoods, a non-negative integer giving the number of internal knots for partitioning the baseline hazard into intervals. The knot locations within each study will be determined by the corresponding quantiles of the observed event times, plus boundary knots at the earliest entry time (0 with no delayed entry) and the maximum event/censoring time. For example, with n_knots = 3, the internal knot locations will be at the 25%, 50%, and 75% quantiles of the observed event times. The default is n_knots = 7; overfitting is avoided by shrinking towards a constant hazard with a random walk prior (see details). If aux_regression is specified then a single set of knot locations will be calculated across all studies in the network. See make_knots() for more details on the knot positioning algorithms. Ignored when knots is specified.

knots: For mspline and pexp likelihoods, a named list of numeric vectors of knot locations (including boundary knots) for each of the studies in the network. Currently, each vector must have the same length (i.e. each study must use the same number of knots). Alternatively, a single numeric vector of knot locations can be provided, which will be shared across all studies in the network. If unspecified (the default), the knots will be chosen based on n_knots as described above. If aux_regression is specified then knots should be a single numeric vector of knot locations which will be shared across all studies in the network. make_knots() can be used to help specify knots directly, or to investigate knot placement prior to model fitting. (A brief illustrative sketch of specifying knots directly is given after this argument list.)

mspline_basis: Instead of specifying mspline_degree and n_knots or knots, a named list of M-spline bases (one for each study) can be provided with mspline_basis, which will be used directly. In this case, all other M-spline options will be ignored.
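As referenced above under knots, here is a brief illustrative sketch of specifying knot locations directly. The study names and knot values are hypothetical; in practice each vector must span the observed entry and event/censoring times in the corresponding study.

# Named list of knot locations (including boundary knots), one numeric vector
# per study, all of the same length. Study names and values are hypothetical.
my_knots <- list(`STUDY 1` = c(0, 0.5, 1.1, 2.4, 6.0),
                 `STUDY 2` = c(0, 0.4, 0.9, 2.0, 5.5))
# Alternatively, a single vector shared across all studies in the network:
shared_knots <- c(0, 0.5, 1, 2.5, 6)
# Either form would then be passed to nma() via the knots argument, e.g.
# nma(..., likelihood = "mspline", knots = my_knots)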
Currently, the following likelihoods and link functions are supported for each data type:

Data type  | Likelihood | Link function
-----------|------------|--------------
Binary     | bernoulli, bernoulli2 | logit, probit, cloglog
Count      | binomial, binomial2 | logit, probit, cloglog
Rate       | poisson | log
Continuous | normal | identity, log
Ordered    | ordered | logit, probit, cloglog
Survival   | exponential, weibull, gompertz, exponential-aft, weibull-aft, lognormal, loglogistic, gamma, gengamma, mspline, pexp | log
The bernoulli2 and binomial2 likelihoods correspond to a two-parameter Binomial likelihood for arm-based AgD, which more closely matches the underlying Poisson Binomial distribution for the summarised aggregate outcomes in an ML-NMR model than the typical (one-parameter) Binomial distribution (see Phillippo et al. 2020).
When a cloglog link is used, including an offset for log follow-up time (i.e. regression = ~offset(log(time))) results in a model on the log hazard (see Dias et al. 2011, NICE DSU TSD 2).
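As an illustrative sketch (not one of the package examples), assuming an nma_data network net whose aggregate data include a follow-up time variable time (both names hypothetical):

# Sketch only: net and time are hypothetical. The offset of log follow-up time
# with a cloglog link gives a model on the log hazard scale.
fit_cloglog <- nma(net,
                   likelihood = "binomial",
                   link = "cloglog",
                   regression = ~offset(log(time)),
                   prior_intercept = normal(scale = 100),
                   prior_trt = normal(scale = 10))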
For survival data, all accelerated failure time models (exponential-aft, weibull-aft, lognormal, loglogistic, gamma, gengamma) are parameterised so that the treatment effects and any regression parameters are log Survival Time Ratios (i.e. a coefficient of log(2) means that the treatment doubles the expected survival time).
Further details on each likelihood and link function are given by Dias et al. (2011, NICE DSU TSD 2).
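For instance, an AFT counterpart of the Weibull model fitted in the Examples below could be specified as follows. This is a sketch only, reusing the ndmm_net network constructed in the Examples; the priors simply mirror that example.

# Sketch: Weibull AFT model on the ndmm_net survival network from the Examples
# below. Treatment effects from this model are log survival time ratios.
ndmm_fit_aft <- nma(ndmm_net,
                    likelihood = "weibull-aft",
                    prior_intercept = normal(scale = 100),
                    prior_trt = normal(scale = 10),
                    prior_aux = half_normal(scale = 10))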
Auxiliary parameters are only present in the following models.
When a Normal likelihood is fitted to IPD, the auxiliary parameters are the arm-level standard deviations.
When fitting a model to ordered categorical data, the auxiliary parameters are the latent cutpoints between the response categories.
All survival likelihoods except the exponential and exponential-aft likelihoods have auxiliary parameters. These are typically study-specific shape parameters, except for the lognormal likelihood, where the auxiliary parameters are study-specific standard deviations on the log scale.
The gengamma likelihood has two sets of auxiliary parameters: study-specific scale parameters (sigma) and shape parameters (k), related to the parameterisation used in the flexsurv package.
For the mspline
and pexp
likelihoods, the auxiliary parameters are the
spline coefficients for each study. These form a unit simplex (i.e. lie
between 0 and 1, and sum to 1), and are given a random walk prior
distribution. prior_aux
specifies the hyperprior on the random walk
standard deviation.
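For example, since the gengamma likelihood has two sets of auxiliary parameters, prior_aux takes a named list with elements sigma and k (as noted for the prior_aux argument above). A sketch, reusing the ndmm_net network from the Examples below with purely illustrative prior scales:

# Sketch: named list of priors for the two gengamma auxiliary parameters.
# Prior scales are illustrative only.
ndmm_fit_gengamma <- nma(ndmm_net,
                         likelihood = "gengamma",
                         prior_intercept = normal(scale = 100),
                         prior_trt = normal(scale = 10),
                         prior_aux = list(sigma = half_normal(scale = 5),
                                          k = half_normal(scale = 5)))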
The auxiliary parameters can be stratified by additional factors through
the aux_by
argument. For example, to allow the shape of the baseline
hazard to vary between treatment arms as well as studies, use aux_by = c(".study", ".trt")
. (Technically, .study
is always included in the
stratification even if omitted from aux_by
, but we choose here to make
the stratification explicit.) This is a common way of relaxing the
proportional hazards assumption. The default is equivalent to aux_by = ".study"
which stratifies the auxiliary parameters by study, as described
above.
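A sketch of such a stratified model, reusing the ndmm_net survival network from the Examples below with an M-spline baseline hazard (the prior scales are illustrative only):

# Sketch: stratify the M-spline baseline hazard by treatment arm within study,
# relaxing the proportional hazards assumption. Prior scales are illustrative.
ndmm_fit_stratified <- nma(ndmm_net,
                           likelihood = "mspline",
                           aux_by = c(".study", ".trt"),
                           prior_intercept = normal(scale = 100),
                           prior_trt = normal(scale = 10),
                           prior_aux = half_normal(scale = 1))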
A regression model may be specified on the auxiliary parameters using aux_regression. This is useful if we wish to model departures from proportional hazards, rather than allowing the baseline hazards to be completely independent using aux_by. This is necessary if absolute predictions (e.g. survival curves) are required in a population for unobserved combinations of covariates; for example, if aux_by = ".trt" then absolute predictions may only be produced for the observed treatment arms in each study population, whereas if aux_regression = ~.trt then absolute predictions can be produced for all treatments in any population. For mspline and pexp likelihoods, the regression coefficients are smoothed over time using a random walk prior to avoid overfitting: prior_aux_reg specifies the hyperprior for the random walk standard deviation. For other parametric likelihoods, prior_aux_reg specifies the prior for the auxiliary regression coefficients.
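A corresponding sketch using aux_regression instead of aux_by, again reusing the ndmm_net network from the Examples below with illustrative prior scales:

# Sketch: model non-proportional hazards via a treatment effect on the spline
# coefficients. prior_aux_reg here is the hyperprior on the random walk
# standard deviation smoothing those effects over time. Scales illustrative.
ndmm_fit_nph <- nma(ndmm_net,
                    likelihood = "mspline",
                    aux_regression = ~.trt,
                    prior_intercept = normal(scale = 100),
                    prior_trt = normal(scale = 10),
                    prior_aux = half_normal(scale = 1),
                    prior_aux_reg = half_normal(scale = 1))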
When specifying a model formula in the regression
argument, the
usual formula syntax is available (as interpreted by model.matrix()
). The
only additional requirement here is that the special variable .trt
should
be used to refer to treatment. For example, effect modifier interactions
should be specified as variable:.trt
. Prognostic (main) effects and
interactions can be included together compactly as variable*.trt
, which
expands to variable + variable:.trt
(plus .trt
, which is already in the
NMA model).
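For illustration (the covariate names age and sex are hypothetical), the expansion of the compact form can be checked with standard R formula tools; the plaque psoriasis ML-NMR example below uses the same pattern with five covariates.

# Sketch: age and sex are hypothetical covariate names
f <- ~(age + sex)*.trt
attr(terms(f), "term.labels")
# Expands to the terms: age, sex, .trt, age:.trt, sex:.trt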
For the advanced user, the additional specials .study
and .trtclass
are
also available, and refer to studies and (if specified) treatment classes
respectively. When node-splitting models are fitted (consistency = "nodesplit"
) the special .omega
is available, indicating the arms
to which the node-splitting inconsistency factor is added.
See ?priors
for details on prior
specification. Default prior distributions are available, but may not be
appropriate for the particular setting and will raise a warning if used. No
attempt is made to tailor these defaults to the data provided. Please
consider appropriate prior distributions for the particular setting,
accounting for the scales of outcomes and covariates, etc. The function
plot_prior_posterior()
may be useful in examining the influence of the
chosen prior distributions on the posterior distributions, and the
summary()
method for nma_prior
objects prints prior intervals.
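A brief sketch of inspecting priors with the functions mentioned here (smk_fit_RE is the random effects fit created in the Examples below):

# Print implied intervals for a candidate prior distribution
summary(normal(scale = 100))
# Compare prior and posterior distributions after fitting
plot_prior_posterior(smk_fit_RE)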
## Smoking cessation NMA
# Set up network of smoking cessation data
head(smoking)
smk_net <- set_agd_arm(smoking,
                       study = studyn,
                       trt = trtc,
                       r = r,
                       n = n,
                       trt_ref = "No intervention")
# Print details
smk_net
# \donttest{
# Fitting a fixed effect model
smk_fit_FE <- nma(smk_net, refresh = if (interactive()) 200 else 0,
                  trt_effects = "fixed",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100))
smk_fit_FE
# }
# \donttest{
# Fitting a random effects model
smk_fit_RE <- nma(smk_net, refresh = if (interactive()) 200 else 0,
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = normal(scale = 5))
smk_fit_RE
# }
# \donttest{
# Fitting an unrelated mean effects (inconsistency) model
smk_fit_RE_UME <- nma(smk_net, refresh = if (interactive()) 200 else 0,
                      consistency = "ume",
                      trt_effects = "random",
                      prior_intercept = normal(scale = 100),
                      prior_trt = normal(scale = 100),
                      prior_het = normal(scale = 5))
smk_fit_RE_UME
# }
# \donttest{
# Fitting all possible node-splitting models
smk_fit_RE_nodesplit <- nma(smk_net, refresh = if (interactive()) 200 else 0,
                            consistency = "nodesplit",
                            trt_effects = "random",
                            prior_intercept = normal(scale = 100),
                            prior_trt = normal(scale = 100),
                            prior_het = normal(scale = 5))
# }
# \donttest{
# Summarise the node-splitting results
summary(smk_fit_RE_nodesplit)
# }
## Plaque psoriasis ML-NMR
# Set up plaque psoriasis network combining IPD and AgD
library(dplyr)
pso_ipd <- filter(plaque_psoriasis_ipd,
                  studyc %in% c("UNCOVER-1", "UNCOVER-2", "UNCOVER-3"))
pso_agd <- filter(plaque_psoriasis_agd,
                  studyc == "FIXTURE")
head(pso_ipd)
head(pso_agd)
pso_ipd <- pso_ipd %>%
  mutate(# Variable transformations
         bsa = bsa / 100,
         prevsys = as.numeric(prevsys),
         psa = as.numeric(psa),
         weight = weight / 10,
         durnpso = durnpso / 10,
         # Treatment classes
         trtclass = case_when(trtn == 1 ~ "Placebo",
                              trtn %in% c(2, 3, 5, 6) ~ "IL blocker",
                              trtn == 4 ~ "TNFa blocker"),
         # Check complete cases for covariates of interest
         complete = complete.cases(durnpso, prevsys, bsa, weight, psa)
  )
pso_agd <- pso_agd %>%
  mutate(
    # Variable transformations
    bsa_mean = bsa_mean / 100,
    bsa_sd = bsa_sd / 100,
    prevsys = prevsys / 100,
    psa = psa / 100,
    weight_mean = weight_mean / 10,
    weight_sd = weight_sd / 10,
    durnpso_mean = durnpso_mean / 10,
    durnpso_sd = durnpso_sd / 10,
    # Treatment classes
    trtclass = case_when(trtn == 1 ~ "Placebo",
                         trtn %in% c(2, 3, 5, 6) ~ "IL blocker",
                         trtn == 4 ~ "TNFa blocker")
  )
# Exclude small number of individuals with missing covariates
pso_ipd <- filter(pso_ipd, complete)
pso_net <- combine_network(
  set_ipd(pso_ipd,
          study = studyc,
          trt = trtc,
          r = pasi75,
          trt_class = trtclass),
  set_agd_arm(pso_agd,
              study = studyc,
              trt = trtc,
              r = pasi75_r,
              n = pasi75_n,
              trt_class = trtclass)
)
# Print network details
pso_net
# Add integration points to the network
pso_net <- add_integration(pso_net,
  durnpso = distr(qgamma, mean = durnpso_mean, sd = durnpso_sd),
  prevsys = distr(qbern, prob = prevsys),
  bsa = distr(qlogitnorm, mean = bsa_mean, sd = bsa_sd),
  weight = distr(qgamma, mean = weight_mean, sd = weight_sd),
  psa = distr(qbern, prob = psa),
  n_int = 64)
# \donttest{
# Fitting a ML-NMR model.
# Specify a regression model to include effect modifier interactions for five
# covariates, along with main (prognostic) effects. We use a probit link and
# specify that the two-parameter Binomial approximation for the aggregate-level
# likelihood should be used. We set treatment-covariate interactions to be equal
# within each class. We narrow the possible range for random initial values with
# init_r = 0.1, since probit models in particular are often hard to initialise.
# Using the QR decomposition greatly improves sampling efficiency here, as is
# often the case for regression models.
pso_fit <- nma(pso_net, refresh = if (interactive()) 200 else 0,
               trt_effects = "fixed",
               link = "probit",
               likelihood = "bernoulli2",
               regression = ~(durnpso + prevsys + bsa + weight + psa)*.trt,
               class_interactions = "common",
               prior_intercept = normal(scale = 10),
               prior_trt = normal(scale = 10),
               prior_reg = normal(scale = 10),
               init_r = 0.1,
               QR = TRUE)
pso_fit
# }
## Newly-diagnosed multiple myeloma NMA
# Set up newly-diagnosed multiple myeloma network
head(ndmm_ipd)
head(ndmm_agd)
ndmm_net <- combine_network(
  set_ipd(ndmm_ipd,
          study, trt,
          Surv = Surv(eventtime / 12, status)),
  set_agd_surv(ndmm_agd,
               study, trt,
               Surv = Surv(eventtime / 12, status),
               covariates = ndmm_agd_covs))
# \donttest{
# Fit Weibull (PH) model
ndmm_fit <- nma(ndmm_net, refresh = if (interactive()) 200 else 0,
                likelihood = "weibull",
                prior_intercept = normal(scale = 100),
                prior_trt = normal(scale = 10),
                prior_aux = half_normal(scale = 10))
ndmm_fit
# }