metafor (version 2.4-0)

rma.mv: Meta-Analysis via Multivariate/Multilevel Linear (Mixed-Effects) Models

Description

Function to fit meta-analytic multivariate/multilevel fixed- and random/mixed-effects models with or without moderators via linear (mixed-effects) models. See below and the documentation of the metafor-package for more details on these models.

Usage

rma.mv(yi, V, W, mods, random, struct="CS", intercept=TRUE,
       data, slab, subset, method="REML", test="z",
       level=95, digits, btt, R, Rscale="cor",
       sigma2, tau2, rho, gamma2, phi, sparse=FALSE, verbose=FALSE, control, …)

Arguments

yi

vector of length \(k\) with the observed effect sizes or outcomes. See ‘Details’.

V

vector of length \(k\) with the corresponding sampling variances or a \(k \times k\) variance-covariance matrix of the sampling errors. See ‘Details’.

W

optional argument to specify a vector of length \(k\) with user-defined weights or a \(k \times k\) user-defined weight matrix. See ‘Details’.

mods

optional argument to include one or more moderators in the model. A single moderator can be given as a vector of length \(k\) specifying the values of the moderator. Multiple moderators are specified by giving a matrix with \(k\) rows and as many columns as there are moderator variables. Alternatively, a model formula can be used to specify the model. See ‘Details’.

random

either a single one-sided formula or list of one-sided formulas to specify the random-effects structure of the model. See ‘Details’.

struct

character string to specify the variance structure of an ~ inner | outer formula in the random argument. Either "CS" for compound symmetry, "HCS" for heteroscedastic compound symmetry, "UN" for an unstructured variance-covariance matrix, "ID" for a scaled identity matrix, "DIAG" for a diagonal matrix, "AR" for an AR(1) autoregressive structure, "HAR" for a heteroscedastic AR(1) autoregressive structure, "CAR" for a continuous-time autoregressive structure, or one of "SPEXP", "SPGAU", "SPLIN", "SPRAT", or "SPSPH" for one of the spatial correlation structures. See ‘Details’.

intercept

logical indicating whether an intercept should be added to the model (the default is TRUE). Ignored when mods is a formula.

data

optional data frame containing the data supplied to the function.

slab

optional vector with labels for the \(k\) outcomes/studies.

subset

optional (logical or numeric) vector indicating the subset of studies (or more precisely, rows of the dataset) that should be used for the analysis.

method

character string specifying whether the model should be fitted via maximum-likelihood ("ML") or via restricted maximum-likelihood ("REML") estimation. Default is "REML".

test

character string specifying how test statistics and confidence intervals for the fixed effects should be computed. By default (test="z"), Wald-type tests and CIs are obtained, which are based on a standard normal distribution. When test="t", a t-distribution is used instead. See ‘Details’.

level

numerical value between 0 and 100 specifying the confidence interval level (the default is 95).

digits

integer specifying the number of decimal places to which the printed results should be rounded (if unspecified, the default is 4).

btt

optional vector of indices specifying which coefficients to include in the omnibus test of moderators. Can also be a string to grep for. See ‘Details’.

R

an optional named list of known correlation matrices corresponding to (some of) the components specified via the random argument. See ‘Details’.

Rscale

character string, integer, or logical indicating how matrices specified via the R argument should be scaled. See ‘Details’.

sigma2

optional numerical vector (of the same length as the number of random intercept components specified via the random argument) to fix the corresponding \(\sigma^2\) value(s). A specific \(\sigma^2\) value can be fixed by setting the corresponding element of this argument to the desired value. A specific \(\sigma^2\) value will be estimated if the corresponding element is set equal to NA. See ‘Details’.

tau2

optional numerical value (for struct="CS", "AR", "CAR", or a spatial correlation structure) or vector (for struct="HCS", "UN", or "HAR") to fix the amount of (residual) heterogeneity for the levels of the inner factor corresponding to an ~ inner | outer formula specified in the random argument. A numeric value fixes a particular \(\tau^2\) value, while NA means that the value should be estimated. See ‘Details’.

rho

optional numerical value (for struct="CS", "HCS", "AR", "HAR", "CAR", or a spatial correlation structure) or vector (for struct="UN") to fix the correlation between the levels of the inner factor corresponding to an ~ inner | outer formula specified in the random argument. A numeric value fixes a particular \(\rho\) value, while NA means that the value should be estimated. See ‘Details’.

gamma2

as tau2 argument, but for a second ~ inner | outer formula specified in the random argument. See ‘Details’.

phi

as rho argument, but for a second ~ inner | outer formula specified in the random argument. See ‘Details’.

sparse

logical indicating whether the function should use sparse matrix objects to the extent possible (can speed up model fitting substantially for certain models). See ‘Note’.

verbose

logical indicating whether output should be generated on the progress of the model fitting (the default is FALSE). Can also be an integer. Values > 1 generate more verbose output. See ‘Note’.

control

optional list of control values for the estimation algorithms. If unspecified, default values are defined inside the function. See ‘Note’.

…

additional arguments.

Value

An object of class c("rma.mv","rma"). The object is a list containing the following components:

beta

estimated coefficients of the model.

se

standard errors of the coefficients.

zval

test statistics of the coefficients.

pval

p-values for the test statistics.

ci.lb

lower bound of the confidence intervals for the coefficients.

ci.ub

upper bound of the confidence intervals for the coefficients.

vb

variance-covariance matrix of the estimated coefficients.

sigma2

estimated \(\sigma^2\) value(s).

tau2

estimated \(\tau^2\) value(s).

rho

estimated \(\rho\) value(s).

gamma2

estimated \(\gamma^2\) value(s).

phi

estimated \(\phi\) value(s).

k

number of studies included in the model.

p

number of coefficients in the model (including the intercept).

m

number of coefficients included in the omnibus test of coefficients.

QE

test statistic for the test of (residual) heterogeneity.

QEp

p-value for the test of (residual) heterogeneity.

QM

test statistic for the omnibus test of coefficients.

QMp

p-value for the omnibus test of coefficients.

int.only

logical that indicates whether the model is an intercept-only model.

yi, V, X

the vector of outcomes, the corresponding variance-covariance matrix of the sampling errors, and the model matrix.

M

the estimated marginal variance-covariance matrix of the observed outcomes.

fit.stats

a list with the log-likelihood, deviance, AIC, BIC, and AICc values.

some additional elements/values.

Methods

The results of the fitted model are formatted and printed with the print.rma.mv function. If fit statistics should also be given, use summary.rma (or use the fitstats.rma function to extract them). Full versus reduced model comparisons in terms of fit statistics and likelihoods can be obtained with anova.rma. Wald-type tests for sets of model coefficients or linear combinations thereof can be obtained with the same function. Tests and confidence intervals based on (cluster) robust methods can be obtained with robust.rma.mv.

Predicted/fitted values can be obtained with predict.rma and fitted.rma. For best linear unbiased predictions, see ranef.rma.mv.

The residuals.rma, rstandard.rma.mv, and rstudent.rma.mv functions extract raw and standardized residuals. See influence.rma.mv for additional case diagnostics (e.g., to determine influential studies). For models with moderators, variance inflation factors can be obtained with vif.rma.

Confidence intervals for any variance/correlation parameters in the model can be obtained with confint.rma.mv.

For random/mixed-effects models, the profile.rma.mv function can be used to obtain a plot of the (restricted) log-likelihood as a function of a specific variance component or correlation parameter of the model.

Other extractor functions include coef.rma, vcov.rma, logLik.rma, deviance.rma, AIC.rma, BIC.rma, hatvalues.rma.mv, and weights.rma.mv.
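For illustration, a brief sketch of some of these post-fitting steps, assuming res is the bivariate model fitted in the ‘Examples’ section below (random = ~ group | study, struct="UN"):

summary(res)           # results plus fit statistics
coef(res)              # fixed-effects coefficient estimates
predict(res)           # predicted/fitted values
ranef(res)             # best linear unbiased predictions of the random effects
confint(res, rho=1)    # profile likelihood CI for the correlation parameter
profile(res, tau2=1)   # profile plot for the first tau^2 component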

Details

Specifying the Data

The function can be used in conjunction with any of the usual effect size or outcome measures used in meta-analyses (e.g., log risk ratios, log odds ratios, risk differences, mean differences, standardized mean differences, raw correlation coefficients, correlation coefficients transformed with Fisher's r-to-z transformation, and so on). Simply specify the observed outcomes via the yi argument and the corresponding sampling variances via the V argument. If the sampling errors are correlated, one can specify the entire variance-covariance matrix of the sampling errors via the V argument.

The escalc function can be used to compute a wide variety of effect size and outcome measures (and the corresponding sampling variances) based on summary statistics. Equations for computing the covariance between sampling errors for a variety of different effect size or outcome measures can be found, for example, in Gleser and Olkin (2009). For raw and Fisher's r-to-z transformed correlations, one can find suitable equations, for example, in Steiger (1980).

Specifying Fixed Effects

With rma.mv(yi, V), a fixed-effects model is fitted to the data (note: arguments struct, sigma2, tau2, rho, gamma2, phi, R, and Rscale are not relevant then and are ignored). The model is then simply given by \(y \sim N(1 \beta_0, V)\), where \(y\) is the (column) vector with the observed effect sizes or outcomes, \(1\) is a column vector of 1's, \(\beta_0\) is the (average) true effect size or outcome, and \(V\) is the variance-covariance matrix of the sampling errors (if a vector of sampling variances is provided via the V argument, then \(V\) is assumed to be diagonal).

One or more moderators can be included in the model via the mods argument. A single moderator can be given as a (row or column) vector of length \(k\) specifying the values of the moderator. Multiple moderators are specified by giving an appropriate model matrix (i.e., \(X\)) with \(k\) rows and as many columns as there are moderator variables (e.g., mods = cbind(mod1, mod2, mod3), where mod1, mod2, and mod3 correspond to the names of the variables for the three moderator variables). The intercept is added to the model matrix by default unless intercept=FALSE.

Alternatively, one can use the standard formula syntax to specify the model. In this case, the mods argument should be set equal to a one-sided formula of the form mods = ~ model (e.g., mods = ~ mod1 + mod2 + mod3). Interactions, polynomial terms, and factors can be easily added to the model in this manner. When specifying a model formula via the mods argument, the intercept argument is ignored. Instead, the inclusion/exclusion of the intercept is controlled by the specified formula (e.g., mods = ~ mod1 + mod2 + mod3 - 1 would lead to the removal of the intercept). One can also directly specify moderators via the yi argument (e.g., rma.mv(yi ~ mod1 + mod2 + mod3, V)). In that case, the mods argument is ignored and the inclusion/exclusion of the intercept again is controlled by the specified formula.

With moderators included, the model is then given by \(y \sim N(X \beta, V)\), where \(X\) denotes the model matrix containing the moderator values (and possibly the intercept) and \(\beta\) is a column vector containing the corresponding model coefficients. The model coefficients (i.e., \(\beta\)) are then estimated with \(b = (X'WX)^{-1} X'Wy\), where \(W = V^{-1}\) is the weight matrix. With the W argument, one can also specify user-defined weights (or a weight matrix).
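For illustration, a minimal sketch of a fixed-effects model with moderators, using the BCG data as prepared in the ‘Examples’ section below (ablat and year are variables in dat.bcg):

### fixed-effects model with two moderators (V given as a vector of sampling variances)
dat <- escalc(measure="OR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
rma.mv(yi, vi, mods = ~ ablat + year, data=dat)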

Specifying Random Effects

One can fit random/mixed-effects models to the data by specifying the desired random effects structure via the random argument. The random argument is either a single one-sided formula or a list of one-sided formulas. One formula type that can be specified via this argument is of the form random = ~ 1 | id. Such a formula adds random effects/intercepts corresponding to the grouping variable/factor id to the model. Effects or outcomes with the same value/level of the id variable/factor receive the same random effect, while effects or outcomes with different values/levels of the id variable/factor are assumed to be independent. The variance component corresponding to such a formula is denoted by \(\sigma^2\). An arbitrary number of such formulas can be specified as a list of formulas (e.g., random = list(~ 1 | id1, ~ 1 | id2)), with variance components \(\sigma^2_1\), \(\sigma^2_2\), and so on. Nested random effects of this form can also be added using random = ~ 1 | id1/id2, which adds random effects/intercepts for each level of id1 and random effects/intercepts for each level of id2 within id1.

Random effects of this form are useful to model clustering (and hence non-independence) induced by a multilevel structure in the data (e.g., effects derived from the same paper, lab, research group, or species may be more similar to each other than effects derived from different papers, labs, research groups, or species). See, for example, Konstantopoulos (2011) and Nakagawa and Santos (2012) for more details.

See dat.konstantopoulos2011 for an example analysis with multilevel meta-analytic data.
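For illustration, a minimal sketch of such a multilevel model (schools nested within districts), based on the dat.konstantopoulos2011 dataset mentioned above:

### multilevel model with random effects for districts and for schools within districts
dat <- dat.konstantopoulos2011
rma.mv(yi, vi, random = ~ 1 | district/school, data=dat)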

In addition or alternatively to specifying one or multiple ~ 1 | id terms, the random argument can also contain a formula of the form ~ inner | outer. Effects or outcomes with different values/levels of the outer grouping variable/factor are assumed to be independent, while effects or outcomes with the same value/level of the outer grouping variable/factor share correlated random effects corresponding to the levels of the inner grouping variable/factor (note that the inner grouping variable must either be a factor or a character variable). The struct argument is used to specify the variance structure corresponding to the inner variable/factor. With struct="CS", a compound symmetric structure is assumed (i.e., a single variance component \(\tau^2\) corresponding to all values/levels of the inner variable/factor and a single correlation coefficient \(\rho\) for the correlation between the different values/levels). With struct="HCS", a heteroscedastic compound symmetric structure is assumed (with variance components \(\tau^2_1\), \(\tau^2_2\), and so on, corresponding to the values/levels of the inner variable/factor and a single correlation coefficient \(\rho\) for the correlation between the different values/levels). With struct="UN", an unstructured variance-covariance matrix is assumed (with variance components \(\tau^2_1\), \(\tau^2_2\), and so on, corresponding to the values/levels of the inner variable/factor and correlation coefficients \(\rho_{12}\), \(\rho_{13}\), \(\rho_{23}\), and so on, for the various combinations of the values/levels of the inner variable/factor).

Structures struct="ID" and struct="DIAG" are just like struct="CS" and struct="HCS", respectively, except that \(\rho\) is automatically set to 0, so that we either get a scaled identity matrix or a diagonal matrix.

With the outer factor corresponding to a study identification variable and the inner factor corresponding to a variable indicating the treatment type or study arm, such a random effects component could be used to estimate how strongly different true effects or outcomes within the same study are correlated and/or whether the amount of heterogeneity differs across different treatment types/arms. Network meta-analyses (also known as mixed treatment comparison meta-analyses) will also typically require such a random effects component (e.g., Salanti et al., 2008). The meta-analytic bivariate model (e.g., van Houwelingen, Arends, & Stijnen, 2002) can also be fitted in this manner (see the examples below). The inner factor could also correspond to a variable indicating the outcome variable, which allows for fitting multivariate models when there are multiple correlated effects per study (e.g., Berkey et al., 1998; Kalaian & Raudenbush, 1996).

See dat.berkey1998 for an example of a multivariate meta-analysis with multiple outcomes. See dat.hasselblad1998 and dat.senn2013 for examples of network meta-analyses.
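For illustration, a minimal sketch of such a multivariate model based on dat.berkey1998 (the block-diagonal \(V\) matrix is assembled from the variances and covariances reported in the dataset; see its help page for details):

### multivariate model with two correlated outcomes per trial
dat <- dat.berkey1998
V <- bldiag(lapply(split(dat[,c("v1i","v2i")], dat$trial), as.matrix))
rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)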

For meta-analyses of studies reporting outcomes at multiple time points, it may also be reasonable to assume that the true effects are correlated over time according to an autoregressive structure (Ishak et al., 2007; Trikalinos & Olkin, 2012). For this purpose, one can also choose struct="AR", corresponding to a structure with a single variance component \(\tau^2\) and AR(1) autocorrelation among the random effects. The values of the inner variable (which does not have to be a factor here) should then reflect the various time points, with increasing values reflecting later time points. Note that this structure assumes equally spaced time points, so the actual values of the inner variable are not relevant, only their ordering. One can also use struct="HAR", which allows for fitting a heteroscedastic AR(1) structure (with variance components \(\tau^2_1\), \(\tau^2_2\), and so on). Finally, when time points are not evenly spaced, one might consider using struct="CAR" for a continuous-time autoregressive structure.

See dat.fine1993 and dat.ishak2007 for examples involving such structures.
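For illustration, a hypothetical sketch of an autoregressive structure; the data frame below is made up purely for illustration (it is not one of the datasets mentioned above):

### effects observed at three (equally spaced) time points within each study
set.seed(1234)
dat <- data.frame(study = rep(1:8, each=3), time = rep(1:3, times=8),
                  yi = rnorm(24, 0.3, 0.2), vi = runif(24, 0.01, 0.05))
rma.mv(yi, vi, random = ~ time | study, struct="AR", data=dat)
### for unequally spaced time points, use struct="CAR" (the actual time values then matter)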

For outcomes that have a known spatial configuration, various spatial correlation structures are also available. For these structures, the formula is of the form random = ~ var1 + var2 + … | outer, where var1, var2, and so on are variables to indicate the spatial coordinates (e.g., longitude and latitude) based on which distances (by default Euclidean) will be computed. Let \(d\) denote the distance between two points that share the same level of the outer variable (if all true effects are allowed to be spatially correlated, simply set outer to a constant). Then the correlation between the true effects corresponding to these two points is a function of \(d\) and the parameter \(\rho\). The following table shows the types of spatial correlation structures that can be specified and the equations for the correlation. The covariance between the true effects is then the correlation times \(\tau^2\).

structure            struct    correlation
exponential          "SPEXP"   \(exp(-d/\rho)\)
Gaussian             "SPGAU"   \(exp(-d^2/\rho^2)\)
linear               "SPLIN"   \((1 - d/\rho) I(d < \rho)\)
rational quadratic   "SPRAT"   \(1 - (d/\rho)^2 / (1 + (d/\rho)^2)\)
spherical            "SPSPH"   \((1 - 1.5(d/\rho) + 0.5(d/\rho)^3) I(d < \rho)\)

Note that \(I(d < \rho)\) is equal to 1 if \(d < \rho\) and 0 otherwise. The parameterization of the various structures is based on Pinheiro and Bates (2000). Instead of Euclidean distances, one can also use other distance measures by setting (the undocumented) argument dist to either "maximum" for the maximum distance between two points (supremum norm), to "manhattan" for the absolute distance between the coordinate vectors (L1 norm), or to "gcd" for the great-circle distance (WGS84 ellipsoid method). In the latter case, only two variables, namely the longitude and latitude (in decimal degrees, with minus signs for West and South), must be specified.
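For illustration, a hypothetical sketch of a spatial correlation structure; the coordinates and effects below are made up purely for illustration, and the constant const allows all true effects to be spatially correlated:

### spatial (exponential) correlation structure based on longitude/latitude coordinates
set.seed(1234)
dat <- data.frame(yi = rnorm(20, 0.2, 0.3), vi = runif(20, 0.01, 0.05),
                  lon = runif(20, 5, 10), lat = runif(20, 50, 54), const = 1)
rma.mv(yi, vi, random = ~ lon + lat | const, struct="SPEXP", data=dat)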

The random argument can also contain a second formula of the form ~ inner | outer (but no more!). A second formula of this form works exactly as described above, but its variance components are denoted by \(\gamma^2\) and its correlation components by \(\phi\). The struct argument should then be of length 2 to specify the variance-covariance structure for the first and second component, respectively.

When the random argument contains a formula of the form ~ 1 | id, one can use the (optional) argument R to specify a corresponding known correlation matrix of the random effects (i.e., R = list(id = Cor), where Cor is the correlation matrix). In that case, effects or outcomes with the same value/level of the id variable/factor receive the same random effect, while effects or outcomes with different values/levels of the id variable/factor receive random effects that are correlated as specified in the corresponding correlation matrix given via the R argument. The column/row names of the correlation matrix given via the R argument must therefore contain all of the values/levels of the id variable/factor. When the random argument contains multiple formulas of the form ~ 1 | id, one can specify known correlation matrices for none, some, or all of those terms (e.g., with random = list(~ 1 | id1, ~ 1 | id2), one could specify R = list(id1 = Cor1) or R = list(id1 = Cor1, id2 = Cor2), where Cor1 and Cor2 are the correlation matrices corresponding to the grouping variables/factors id1 and id2, respectively).

Random effects with a known (or at least approximately known) correlation structure are useful in a variety of contexts. For example, such components can be used to account for the correlations induced by a shared phylogenetic history among organisms (e.g., plants, fungi, animals). In that case, ~ 1 | species is used to specify the species and argument R is used to specify the phylogenetic correlation matrix of the species studied in the meta-analysis. The corresponding variance component then indicates how much variance/heterogeneity is attributable to the specified phylogeny. See Nakagawa and Santos (2012) for more details. As another example, in a genetic meta-analysis studying disease association for several single nucleotide polymorphisms (SNPs), linkage disequilibrium (LD) among the SNPs can induce an approximately known degree of correlation among the effects. In that case, ~ 1 | snp could be used to specify the SNPs and R the corresponding LD correlation map for the SNPs included in the meta-analysis.

The Rscale argument controls how matrices specified via the R argument are scaled. With Rscale="none" (or Rscale=0 or Rscale=FALSE), no scaling is used. With Rscale="cor" (or Rscale=1 or Rscale=TRUE), the cov2cor function is used to ensure that the matrices are correlation matrices (assuming they were covariance matrices to begin with). With Rscale="cor0" (or Rscale=2), first cov2cor is used and then the elements of each correlation matrix are scaled with \((R - min(R)) / (1 - min(R))\) (this ensures that a correlation of zero in a phylogenetic correlation matrix corresponds to the split at the root node of the tree comprising the species that are actually analyzed). Finally, Rscale="cov0" (or Rscale=3) only rescales with \((R - min(R))\) (which ensures that a phylogenetic covariance matrix is rooted at the lowest split).
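For illustration, a hypothetical sketch of a model with a known correlation matrix for the random effects; the species data and the correlation matrix below are made up purely for illustration (in practice, the matrix would be derived from a phylogeny):

### random effects for studies and for species, with a known species correlation matrix
species <- c("sp1","sp2","sp3","sp4")
phylo_cor <- matrix(c(1.0, 0.8, 0.3, 0.3,
                      0.8, 1.0, 0.3, 0.3,
                      0.3, 0.3, 1.0, 0.6,
                      0.3, 0.3, 0.6, 1.0), nrow=4, dimnames=list(species, species))
set.seed(1234)
dat <- data.frame(study = 1:12, species = rep(species, times=3),
                  yi = rnorm(12, 0.2, 0.3), vi = runif(12, 0.01, 0.05))
rma.mv(yi, vi, random = list(~ 1 | study, ~ 1 | species),
       R = list(species = phylo_cor), data=dat)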

Together with the variance-covariance matrix of the sampling errors (i.e., \(V\)), the random effects structure of the model implies a particular marginal variance-covariance matrix of the observed outcomes. Once estimates of the variance components (i.e., \(\sigma^2\), \(\tau^2\), \(\rho\), \(\gamma^2\), and/or \(\phi\) values) have been obtained, the estimated marginal variance-covariance matrix can be constructed (denoted by \(M\)). The model coefficients (i.e., \(\beta\)) are then estimated with \(b = (X'WX)^{-1} X'Wy\), where \(W = M^{-1}\) is the weight matrix. With the W argument, one can again specify user-defined weights (or a weight matrix).

Fixing Variance Components and/or Correlations

Arguments sigma2, tau2, rho, gamma2, and phi can be used to fix particular variance components and/or correlations at a given value. This is useful for sensitivity analyses (e.g., for plotting the regular/restricted log-likelihood as a function of a particular variance component or correlation) or for imposing a desired variance-covariance structure on the data.

For example, if random = list(~ 1 | id1, ~ 1 | id2), then sigma2 must be of length 2 (corresponding to \(\sigma^2_1\) and \(\sigma^2_2\)) and a fixed value can be assigned to either or both variance components. Setting a particular component to NA means that the component will be estimated by the function (e.g., sigma2=c(0,NA) would fix \(\sigma^2_1\) to 0 and estimate \(\sigma^2_2\)).

Argument tau2 is only relevant when the random argument contains an ~ inner | outer formula. In that case, if the tau2 argument is used, it must be either of length 1 (for "CS", "ID", "AR", "CAR", or one of the spatial correlation structures) or of the same length as the number of levels of the inner factor (for "HCS", "DIAG", "UN", or "HAR"). A numeric value in the tau2 argument then fixes the corresponding variance component to that value, while NA means that the component will be estimated. Similarly, if argument rho is used, it must be either of length 1 (for "CS", "HCS", "AR", "HAR", or one of the spatial correlation structures) or of length \(lvls(lvls-1)/2\) (for "UN"), where \(lvls\) denotes the number of levels of the inner factor. Again, a numeric value fixes the corresponding correlation, while NA means that the correlation will be estimated. For example, with struct="CS" and rho=0, the variance-covariance matrix of the inner factor will be diagonal with \(\tau^2\) along the diagonal. For struct="UN", the values specified under rho should be given in column-wise order (e.g., for an inner grouping variable/factor with four levels, the order would be \(\rho_{12}\), \(\rho_{13}\), \(\rho_{23}\), \(\rho_{14}\), \(\rho_{24}\), \(\rho_{34}\)).

Similarly, arguments gamma2 and phi are only relevant when the random argument contains a second ~ inner | outer formula. The arguments then work exactly as described above.
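For illustration, a minimal sketch of fixing a correlation at a particular value, reusing dat.long and the bivariate model res fitted in the ‘Examples’ section below:

### refit the bivariate model with rho fixed to 0 and compare the two models
res0 <- rma.mv(yi, vi, mods = ~ group, random = ~ group | study,
               struct="UN", rho=0, data=dat.long)
anova(res, res0)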

Omnibus Test of Parameters

For models including moderators, an omnibus test of all the model coefficients is conducted that excludes the intercept (the first coefficient) if it is included in the model. If no intercept is included in the model, then the omnibus test includes all of the coefficients in the model including the first. Alternatively, one can manually specify the indices of the coefficients to test via the btt argument. For example, with btt=c(3,4), only the third and fourth coefficient from the model would be included in the test (if an intercept is included in the model, then it corresponds to the first coefficient in the model). Instead of specifying the coefficient numbers, one can specify a string for btt. In that case, grep will be used to search for all coefficient names that match the string.
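For illustration, a minimal sketch of the omnibus test and the btt argument, using the BCG data as prepared in the ‘Examples’ section below:

### mixed-effects model with two moderators; by default, QM tests coefficients 2 and 3
rma.mv(yi, vi, mods = ~ ablat + year, random = ~ 1 | trial, data=dat)
### restrict the omnibus test to particular coefficients via btt
rma.mv(yi, vi, mods = ~ ablat + year, random = ~ 1 | trial, data=dat, btt=2)
rma.mv(yi, vi, mods = ~ ablat + year, random = ~ 1 | trial, data=dat, btt="ablat")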

Categorical Moderators

Categorical moderator variables can be included in the model via the mods argument in the same way that appropriately (dummy) coded categorical independent variables can be included in linear models. One can either do the dummy coding manually or use a model formula together with the factor function to let R handle the coding automatically.
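For illustration, a minimal sketch with a categorical moderator, using the BCG data as prepared in the ‘Examples’ section below (alloc is the treatment allocation method in dat.bcg):

### let R create the dummy variables for the three allocation methods automatically
rma.mv(yi, vi, mods = ~ factor(alloc), random = ~ 1 | trial, data=dat)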

Tests and Confidence Intervals

By default, the test statistics of the individual coefficients in the model (and the corresponding confidence intervals) are based on a standard normal distribution, while the omnibus test is based on a chi-square distribution with \(m\) degrees of freedom (\(m\) being the number of coefficients tested). As an alternative, one can set test="t", which mimics (to some extent) the Knapp and Hartung (2003) method: tests of individual coefficients and confidence intervals are then based on a t-distribution with \(k-p\) degrees of freedom, and the omnibus test statistic on an F-distribution with \(m\) and \(k-p\) degrees of freedom (\(p\) being the total number of model coefficients, including the intercept if it is present). Note that test="t" is not the same as test="knha" in rma.uni, since no adjustment to the standard errors of the estimated coefficients is made.
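For illustration, a minimal sketch using the BCG data as prepared in the ‘Examples’ section below:

### use t- and F-distributions for the tests and confidence intervals
rma.mv(yi, vi, mods = ~ ablat, random = ~ 1 | trial, data=dat, test="t")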

Test for (Residual) Heterogeneity

A test for (residual) heterogeneity is automatically carried out by the function. Without moderators in the model, this test is the generalized/weighted least squares extension of Cochran's \(Q\)-test, which tests whether the variability in the observed effect sizes or outcomes is larger than one would expect based on sampling variability (and the given covariances among the sampling errors) alone. A significant test suggests that the true effects or outcomes are heterogeneous. When moderators are included in the model, this is the \(Q_E\)-test for residual heterogeneity, which tests whether the variability in the observed effect sizes or outcomes that is not accounted for by the moderators included in the model is larger than one would expect based on sampling variability (and the given covariances among the sampling errors) alone.

References

Berkey, C. S., Hoaglin, D. C., Antczak-Bouckoms, A., Mosteller, F., & Colditz, G. A. (1998). Meta-analysis of multiple outcomes by regression with random effects. Statistics in Medicine, 17, 2537--2550.

Gleser, L. J., & Olkin, I. (2009). Stochastically dependent effect sizes. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 357--376). New York: Russell Sage Foundation.

van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. Statistics in Medicine, 21, 589--624.

Ishak, K. J., Platt, R. W., Joseph, L., Hanley, J. A., & Caro, J. J. (2007). Meta-analysis of longitudinal studies. Clinical Trials, 4, 525--539.

Kalaian, H. A., & Raudenbush, S. W. (1996). A multivariate mixed linear model for meta-analysis. Psychological Methods, 1, 227--235.

Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods, 2, 61--76.

Nakagawa, S., & Santos, E. S. A. (2012). Methodological issues and advances in biological meta-analysis. Evolutionary Ecology, 26, 1253--1274.

Pinheiro, J. C., & Bates, D. (2000). Mixed-effects models in S and S-PLUS. New York: Springer.

Salanti, G., Higgins, J. P. T., Ades, A. E., & Ioannidis, J. P. A. (2008). Evaluation of networks of randomized trials. Statistical Methods in Medical Research, 17, 279--301.

Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87, 245--251.

Trikalinos, T. A., & Olkin, I. (2012). Meta-analysis of effect sizes reported at multiple time points: A multivariate approach. Clinical Trials, 9, 610--620.

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1--48. https://www.jstatsoft.org/v036/i03.

See Also

rma.uni, rma.mh, rma.peto, and rma.glmm for other model fitting functions.

dat.konstantopoulos2011, dat.hasselblad1998, dat.begg1989, dat.berkey1998, dat.fine1993, and dat.ishak2007 for further examples of the use of the rma.mv function.

Examples

### calculate log odds ratios and corresponding sampling variances
dat <- escalc(measure="OR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

### random-effects model using rma.uni()
rma(yi, vi, data=dat)

### random-effects model using rma.mv()
### note: sigma^2 in this model is the same as tau^2 from the previous model
rma.mv(yi, vi, random = ~ 1 | trial, data=dat)

### change data into long format
dat.long <- to.long(measure="OR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

### set levels of group variable ("exp" = experimental/vaccinated; "con" = control/non-vaccinated)
levels(dat.long$group) <- c("exp", "con")

### set "con" to reference level
dat.long$group <- relevel(dat.long$group, ref="con")

### calculate log odds and corresponding sampling variances
dat.long <- escalc(measure="PLO", xi=out1, mi=out2, data=dat.long)

### bivariate random-effects model using rma.mv()
res <- rma.mv(yi, vi, mods = ~ group, random = ~ group | study, struct="UN", data=dat.long)
res