## S3 method for class 'manyglm':
anova(object, ..., resamp="pit.trap", test="LR",
    p.uni="none", nBoot=1000, cor.type=object$cor.type,
    show.time="total", show.warning=FALSE, rep.seed=FALSE, bootID=NULL)
## S3 method for class 'anova.manyglm':
print(x, ...)
Arguments

object: an object of class manyglm, typically the result of a call to manyglm.

...: for the anova.manyglm method, these are optional further objects of class manyglm, which are usually a result of a call to manyglm; for the print.anova.manyglm method they are not used.

x: an object of class anova.manyglm (for the print method).

test: the test to be used. If cor.type="I", this can be one of "wald" for a Wald-Test, "score" for a Score-Test or "LR" for a Likelihood-Ratio-Test; otherwise only "wald" and "score" are allowed. The default value is "LR".

nBoot: the number of bootstrap iterations; default is nBoot=999.

bootID: if bootID is supplied, nBoot is set to the number of rows in bootID. Default is NULL.

Value

family: the family component from object.

p.uni: the p.uni argument supplied.

test: the test argument supplied.

cor.type: the cor.type argument supplied.

resamp: the resamp argument supplied.

nBoot: the nBoot argument supplied.

shrink.param: the shrinkage parameters of the manyglm objects in the anova test.

If p.uni="adjusted" or "unadjusted", the output list also contains the corresponding univariate test statistics and their p-values.
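For orientation, a minimal sketch of a call and of accessing some of the components listed above (this assumes the mvabund package and its Tasmania example data; the object names fit and res are illustrative only, and nBoot is kept small for speed):

library(mvabund)
data(Tasmania)
tasm.cop  <- mvabund(Tasmania$copepods)
treatment <- Tasmania$treatment
block     <- Tasmania$block
fit <- manyglm(tasm.cop ~ block * treatment, family = "negative.binomial")
res <- anova(fit, nBoot = 99)
res$test     ## the test argument supplied ("LR" by default)
res$resamp   ## the resamp argument supplied
res$nBoot    ## the nBoot argument supplied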
Warning

The comparison between two or more models by anova.manyglm will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and R's default of na.action = na.omit is used.

Details

The anova.manyglm function returns a table summarising the statistical significance of a fitted manyglm model (Warton 2011), or of the differences between several nested models. If one model is specified, sequential test statistics (and P values) are returned for that fit. If more than one object is specified, the table contains test statistics (and P values) comparing their fits, provided that the models are fitted to the same dataset.
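For example, a comparison of two nested fits might look like this sketch (tasm.cop, block and treatment are the illustrative objects defined in the sketch under Value above; the fit names are illustrative):

fit.block <- manyglm(tasm.cop ~ block, family = "negative.binomial")
fit.full  <- manyglm(tasm.cop ~ block + treatment, family = "negative.binomial")
## test statistics (and P values) for the terms added in the larger model
anova(fit.block, fit.full, nBoot = 99)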
The test statistics are determined by the argument test
, and the
P-values are calculated by resampling rows of the data using a method
determined by the argument resamp. Two of the three
available resampling methods (residual permutation and parametric bootstrap)
are described in more detail in Davison and Hinkley (1997, chapter 6),
whereas the default (the "PIT-trap") is a new method (in review) which
bootstraps probability integral transform residuals, and which we have found
to give the most reliable Type I error rates. All methods involve resampling
under the null hypothesis. These methods ensure
approximately valid inference even when the mean-variance relationship or the
correlation between variables has been misspecified. Standardized Pearson
residuals (see manyglm) are currently used in residual
permutation, and where necessary, resampled response values are truncated so
that they fall in the required range (e.g. counts cannot be negative).
However, this can introduce bias, especially for family=binomial
, so
we advise extreme caution using resamp="perm.resid"
for presence/absence data.
If resamp="none"
, p-values cannot be calculated, however the test
statistics are returned.
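As an illustration of these options (a sketch using the illustrative fit object from above; nBoot is kept small purely for speed):

anova(fit, nBoot = 99, resamp = "pit.trap")    ## the default PIT-trap bootstrap
anova(fit, nBoot = 99, resamp = "perm.resid")  ## residual permutation
anova(fit, resamp = "none")                    ## test statistics only, no p-values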
If you do not have a specific hypothesis of primary interest that you want to test, and are instead interested in which model terms are statistically significant, then the summary.manyglm
function is more appropriate. Whereas summary.manyglm
tests the significance of each explanatory variable, anova.manyglm
, given one manyglm object, tests each term of the formula; e.g. if the formula is 'y~a+b' then a and b, which can be vectors or matrices, are tested for significance.
For information on the different types of data that can be modelled using manyglm, see manyglm
. To check model assumptions, use plot.manyglm
.
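For instance, with the illustrative fit object from above:

summary(fit, nBoot = 99)   ## tests the significance of each explanatory variable
anova(fit, nBoot = 99)     ## sequential tests of each term in the formula
plot(fit)                  ## residual plot for checking model assumptions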
Multivariate test statistics are constructed using one of three methods: a log-likelihood ratio statistic (test="LR"), for example as in Warton et al. (2012), a Wald statistic (test="wald"), or a score statistic (test="score"). "LR" has good properties, but is only available when cor.type="I".
The default Wald test statistic makes use of a generalised estimating equations (GEE) approach, estimating the covariance matrix of parameter estimates using a sandwich-type estimator that assumes the mean-variance relationship in the data is correctly specified and that there is an unknown but constant correlation across all observations. Such assumptions allow the test statistic to account for correlation between variables but to do so in a more efficient way than traditional GEE sandwich estimators (Warton 2011). The common correlation matrix is estimated from standardized Pearson residuals, and the method specified by cor.type
is used to adjust for high dimensionality.
The Wald statistic has problems for count data and presence-absence
data when there are zero parameters, so is not recommended for multi-sample
tests, where such situations are common.
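For example, with the illustrative fit object from above ("LR" requires cor.type="I", the default):

anova(fit, nBoot = 99, test = "LR")     ## likelihood ratio statistic (the default)
anova(fit, nBoot = 99, test = "wald")   ## Wald statistic
anova(fit, nBoot = 99, test = "score")  ## score statistic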
The anova.manyglm
function is designed specifically for high-dimensional data (that is, when the number of variables p is not small compared to the number of observations N). In such instances a correlation matrix is computationally intensive to estimate and is numerically unstable, so by default the test statistic is calculated assuming independence of variables (cor.type="I"). Note however that the resampling scheme used ensures that the P-values are approximately correct even when the independence assumption is not satisfied. If it is computationally feasible for your dataset, however, it is recommended that you use cor.type="shrink"
to account for correlation between variables, or cor.type="R"
when p is small. The cor.type="R"
option uses the unstructured correlation matrix (only possible when N>p), such that the standard classical multivariate test statistics are obtained. Note however that such statistics are typically numerically unstable and have low power when p is not small compared to N.
The cor.type="shrink"
option applies ridge regularisation (Warton 2008), shrinking the sample correlation matrix towards the identity, which improves its stability when p is not small compared to N. This provides a compromise between "R"
and "I"
, allowing us to account for correlation between variables, while using a numerically stable test statistic that has good properties.
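A sketch of the three settings, again with the illustrative fit object from above; since "LR" is only available with cor.type="I", a Wald statistic is used for the other two:

anova(fit, nBoot = 99, cor.type = "I")                      ## assume independence (default)
anova(fit, nBoot = 99, cor.type = "shrink", test = "wald")  ## ridge-regularised correlation matrix
anova(fit, nBoot = 99, cor.type = "R", test = "wald")       ## unstructured correlation matrix (needs N > p)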
The shrinkage parameter is an attribute of a manyglm object. For a Wald test, the sample correlation matrix of the alternative model is used to calculate the test statistics, so the shrink.param of the alternative model is used. For a score test, the sample correlation matrix of the null model is used to calculate the test statistics, so the shrink.param of the null model is used instead. If cor.type="shrink" and shrink.param is NULL, then the shrinkage parameter will be estimated by cross-validation using the multivariate normal likelihood function (see ridgeParamEst and Warton 2008) for the corresponding model in the anova test.
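In practice, a shrinkage-based analysis can therefore be requested simply by fitting with cor.type="shrink" and leaving shrink.param unspecified, as in this sketch (same illustrative data as above):

fit.shrink <- manyglm(tasm.cop ~ block * treatment,
                      family = "negative.binomial", cor.type = "shrink")
## the shrinkage parameter is estimated by cross-validation because
## shrink.param was not supplied; anova() inherits cor.type from the fit
anova(fit.shrink, nBoot = 99, test = "score")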
Rather than stopping after testing for multivariate effects, it is often of interest to find out which response variables express significant effects. Univariate statistics are required to answer this question, and these are reported if requested. Setting p.uni="unadjusted"
returns resampling-based univariate P-values for all effects as well as the multivariate P-values, whereas p.uni="adjusted"
returns adjusted P-values (that have been adjusted for multiple testing), calculated using a step-down resampling algorithm as in Westfall & Young (1993, Algorithm 2.8). This method provides strong control of family-wise error rates, and makes use of resampling (using the method controlled by resamp
) to ensure inferences take into account correlation between variables.
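For example, with the illustrative fit object from above (small nBoot for speed only):

## multivariate tests plus unadjusted univariate tests for each response
anova(fit, nBoot = 99, p.uni = "unadjusted")
## univariate p-values adjusted for multiple testing by step-down resampling
anova(fit, nBoot = 99, p.uni = "adjusted")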
See Also

manyglm, summary.manyglm.

Examples

library(mvabund)

## Load the Tasmania data set
data(Tasmania)
## Visualise the effect of treatment on copepod abundance
tasm.cop <- mvabund(Tasmania$copepods)
treatment <- Tasmania$treatment
block <- Tasmania$block
#plot(tasm.cop ~ treatment, col=as.numeric(block))
## Fitting predictive models using a negative binomial model for counts:
tasm.cop.nb <- manyglm(tasm.cop ~ block*treatment, family="negative.binomial")
## Testing hypotheses about the treatment effect and treatment-by-block interactions,
## using PIT-trap resampling (the default) and a Wald test statistic:
anova(tasm.cop.nb, nBoot=200, test="wald")