Description

summary method for class "manyglm".

Usage

## S3 method for class 'manyglm':
summary(object, resamp="pit.trap", test="wald",
p.uni="none", nBoot=1000, cor.type=object$cor.type,
show.cor=FALSE, show.est=FALSE, show.residuals=FALSE,
symbolic.cor=FALSE, show.time=FALSE, show.warning=FALSE, ...)
## S3 method for class 'summary.manyglm':
print(x, ...)
Arguments

object: an object of class "manyglm", typically the result of a call to manyglm.

test: the test statistic to be used. If cor.type="I", this can be one of "wald" for a Wald test, "score" for a score test or "LR" for a likelihood-ratio test; otherwise only "wald" and "score" are allowed. The default value is "LR".

nBoot: the number of bootstrap iterations; the default is nBoot=999.
show.cor, show.est, show.residuals: logical; if TRUE, the correlation matrix of the estimated parameters, the estimated model parameters, or the residual summary, respectively, is shown.

symbolic.cor: logical; if TRUE, the correlation is printed in a symbolic form (see symnum) rather than in numerical format.

...: for the summary.manyglm method, these are additional arguments including:
rep.seed - logical. Whether to fix the random seed when resampling data. Useful for simulation or diagnostic purposes.
bootID - an optional matrix of integer IDs, each row specifying the bootstrap sample to use in one resampling iteration; when bootID is supplied, nBoot is set to the number of rows of bootID.
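As a minimal sketch of how these arguments are typically combined (nBoot is kept small here only to shorten run time; the defaults above are more appropriate in practice):

data(spider)
spiddat <- mvabund(spider$abund)
X <- spider$x
glm.spid <- manyglm(spiddat[,1:3]~X, family="negative.binomial")
## rep.seed=TRUE fixes the resampling seed, so repeated calls give identical P-values
summary(glm.spid, resamp="pit.trap", test="LR", nBoot=99, rep.seed=TRUE)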
Value

summary.manyglm returns an object of class "summary.manyglm", a list whose components include:

several components copied directly from object;
information on the call to manyglm for the estimation of the model parameters;
further components taken from the fitted manyglm object, if applicable;
the fitting method used by manyglm, either "glm.fit" or "manyglm.fit";
if stat.iter is set to TRUE, the test statistics in the resampling iterations;
if stat.iter is set to TRUE, the univariate test statistics in the resampling iterations;
if stat.iter is set to TRUE, the test statistics of the overall tests in the resampling iterations;
if stat.iter is set to TRUE, the univariate test statistics of the overall tests in the resampling iterations;
the unscaled (dispersion = 1) estimated covariance matrix of the estimated coefficients;
the same covariance matrix, scaled by dispersion;
(only if show.cor = TRUE) the estimated correlations of the estimated coefficients;
(only if show.cor = TRUE) the value of the argument symbolic.cor.
Details

The summary.manyglm function returns a table summarising the
statistical significance of each multivariate term specified in the fitted
manyglm model (Warton 2011). For each model term, it returns a test
statistic as determined by the argument test
, and a P-value calculated
by resampling rows of the data using a method determined by the argument
resamp
. Of the four possible resampling methods, three (case, residual
permutation and parametric bootstrap) are described in more detail in Davison
and Hinkley (1997, chapter 6), but the default (PIT-trap) is a new method (in
review) which bootstraps probability integral transform residuals, and which
we have found to give the most reliable Type I error rates. All methods
involve resampling under the alternative hypothesis. These methods ensure
approximately valid inference even when the mean-variance relationship or the
correlation between variables has been misspecified. Standardized Pearson
residuals (see manyglm)
are currently used in residual
permutation, and where necessary, resampled response values are truncated so
that they fall in the required range (e.g. counts cannot be negative).
However, this can introduce bias, especially for family=binomial
, so
we advise extreme caution using perm.resid
for presence/absence data.
If resamp="none"
, P-values cannot be calculated; however, the test
statistics are returned.
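For example, continuing with the glm.spid fit from the sketch above, the resampling scheme is selected through resamp (these calls are illustrative; nBoot is deliberately small):

summary(glm.spid, resamp="perm.resid", nBoot=99)   # residual permutation
summary(glm.spid, resamp="monte.carlo", nBoot=99)  # parametric bootstrap
summary(glm.spid, resamp="none")                   # test statistics only, no P-values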
If you have a specific hypothesis of primary interest that you want to test, then you should use the anova.manyglm
function, which can resample rows of the data under the null hypothesis and so usually achieves a better approximation to the true significance level.
For information on the different types of data that can be modelled using manyglm, see manyglm
. To check model assumptions, use plot.manyglm
.
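A minimal sketch of this workflow, reusing glm.spid from above (anova() and plot() dispatch to anova.manyglm and plot.manyglm for manyglm objects):

anova(glm.spid, nBoot=99)   # resamples rows of the data under the null hypothesis
plot(glm.spid)              # residual plot for checking model assumptions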
Multivariate test statistics are constructed using one of three methods: a log-likelihood-ratio statistic (test="LR"), for example as in Warton et al. (2012), a Wald statistic (test="wald"),
or a score statistic (test="score")
. "LR" has good properties, but is only available when cor.type="I"
.
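Continuing the sketch above, the statistic is chosen through the test argument (P-values will vary slightly between runs because of resampling):

summary(glm.spid, test="LR", nBoot=99)     # likelihood-ratio statistic, cor.type="I" only
summary(glm.spid, test="wald", nBoot=99)   # Wald statistic
summary(glm.spid, test="score", nBoot=99)  # score statistic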
The default Wald test statistic makes use of a generalised estimating equations (GEE) approach, estimating the covariance matrix of parameter estimates using a sandwich-type estimator that assumes the mean-variance relationship in the data is correctly specified and that there is an unknown but constant correlation across all observations. Such assumptions allow the test statistic to account for correlation between variables but to do so in a more efficient way than traditional GEE sandwich estimators (Warton 2008a). The common correlation matrix is estimated from standardized Pearson residuals, and the method specified by cor.type
is used to adjust for high dimensionality.
The Wald statistic has problems for count data and presence-absence
data when some parameters are estimated to be zero, so it is not recommended for multi-sample
tests, where such situations are common.
The summary.manyglm
function is designed specifically for high-dimensional data (that is, when the number of variables p is not small compared to the number of observations N). In such instances a correlation matrix is computationally intensive to estimate and is numerically unstable, so by default the test statistic is calculated assuming independence of variables (cor.type="I"
). Note however that the resampling scheme used ensures that the P-values are approximately correct even when the independence assumption is not satisfied. If it is computationally feasible for your dataset, it is nevertheless recommended that you use cor.type="shrink"
to account for correlation between variables, or cor.type="R"
when p is small. The cor.type="R"
option uses the unstructured correlation matrix (only possible when N>p), such that the standard classical multivariate test statistics are obtained. Note however that such statistics are typically numerically unstable and have low power when p is not small compared to N.
The cor.type="shrink"
option applies ridge regularisation (Warton 2008b), shrinking the sample correlation matrix towards the identity, which improves its stability when p is not small compared to N. This provides a compromise between "R"
and "I"
, allowing us to account for correlation between variables, while using a numerically stable test statistic that has good properties.
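As an illustrative sketch, fitting all the spider species (so that correlation between responses matters; the object names here are arbitrary):

spid.all <- manyglm(spiddat~X, family="negative.binomial")
summary(spid.all, test="wald", cor.type="shrink", nBoot=99)  # ridge-regularised correlation
summary(spid.all, test="wald", cor.type="R", nBoot=99)       # unstructured; only possible when N > p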
The shrinkage parameter is an attribute of the manyglm
object. For a Wald test, the sample correlation matrix of the alternative model is used to calculate the test statistics. So object$shrink.param
is used. For a Score test, the sample correlation matrix of the null model is used to calculate the test statistics. So shrink.param
of the null model is used instead. If cor.type=="shrink"
but object$shrink.param
is not available, for example object$cor.type!="shrink"
, then the shrinkage parameter will be estimated by cross-validation using the multivariate normal likelihood function (see ridgeParamEst
and Warton 2008b) in the summary test.
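A small sketch of this, assuming (as the use of object$cor.type and object$shrink.param above suggests) that manyglm() itself accepts a cor.type argument, so that the shrinkage parameter is estimated at fit time and stored on the returned object:

spid.shrink <- manyglm(spiddat~X, family="negative.binomial", cor.type="shrink")
spid.shrink$shrink.param                                       # estimated shrinkage parameter
summary(spid.shrink, test="wald", cor.type="shrink", nBoot=99)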
Rather than stopping after testing for multivariate effects, it is often of interest to find out which response variables express significant effects. Univariate statistics are required to answer this question, and these are reported if requested. Setting p.uni="unadjusted"
returns resampling-based univariate P-values for all effects as well as the multivariate P-values, whereas p.uni="adjusted"
returns adjusted P-values (that have been adjusted for multiple testing), calculated using a step-down resampling algorithm as in Westfall & Young (1993, Algorithm 2.8). This method provides strong control of family-wise error rates, and makes use of resampling (using the method controlled by resamp
) to ensure inferences take into account correlation between variables.

See Also

manyglm, anova.manyglm.

Examples

data(spider)
spiddat <- mvabund(spider$abund)
X <- spider$x
## Estimate the coefficients of a multivariate glm
glm.spid <- manyglm(spiddat[,1:3]~X, family="negative.binomial")
## Estimate the statistical significance of different multivariate terms in
## the model, using the default settings of LR test and PIT-trap resampling
summary(glm.spid, show.time=TRUE)
## Repeat with the parametric bootstrap and Wald statistics
summary(glm.spid, resamp="monte.carlo", test="wald", nBoot=300)
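## A further sketch: also report univariate P-values, adjusted for multiple
## testing by step-down resampling
summary(glm.spid, p.uni="adjusted", nBoot=300)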