…coef and vcov. For mixed-effects models, the tests are Wald chi-square tests for the fixed effects.

Usage

linearHypothesis(model, ...)
lht(model, ...)
## S3 method for class 'default':
linearHypothesis(model, hypothesis.matrix, rhs=NULL,
test=c("Chisq", "F"), vcov.=NULL, singular.ok=FALSE, verbose=FALSE, ...)
## S3 method for class 'lm':
linearHypothesis(model, hypothesis.matrix, rhs=NULL,
test=c("F", "Chisq"), vcov.=NULL,
white.adjust=c(FALSE, TRUE, "hc3", "hc0", "hc1", "hc2", "hc4"),
singular.ok=FALSE, ...)
## S3 method for class 'glm':
linearHypothesis(model, ...)
## S3 method for class 'mlm':
linearHypothesis(model, hypothesis.matrix, rhs=NULL, SSPE, V,
test, idata, icontrasts=c("contr.sum", "contr.poly"), idesign, iterms,
check.imatrix=TRUE, P=NULL, title="", singular.ok=FALSE, verbose=FALSE, ...)
## S3 method for class 'polr':
linearHypothesis(model, hypothesis.matrix, rhs=NULL, vcov.,
verbose=FALSE, ...)
## S3 method for class 'linearHypothesis.mlm':
print(x, SSP=TRUE, SSPE=SSP,
digits=getOption("digits"), ...)
## S3 method for class 'lme':
linearHypothesis(model, hypothesis.matrix, rhs=NULL,
vcov.=NULL, singular.ok=FALSE, verbose=FALSE, ...)
## S3 method for class 'mer':
linearHypothesis(model, hypothesis.matrix, rhs=NULL,
vcov.=NULL, test=c("chisq", "F"), singular.ok=FALSE, verbose=FALSE, ...)
## S3 method for class 'svyglm':
linearHypothesis(model, ...)
matchCoefs(model, pattern, ...)
## S3 method for class 'default':
matchCoefs(model, pattern, coef.=coef, ...)
## S3 method for class 'lme':
matchCoefs(model, pattern, ...)
## S3 method for class 'mer':
matchCoefs(model, pattern, ...)
## S3 method for class 'mlm':
matchCoefs(model, pattern, ...)
Arguments

model — a fitted model object. The default method of linearHypothesis works for models for which the estimated parameters can be retrieved by coef and the corresponding estimated covariance matrix by vcov; see the Details below.

hypothesis.matrix — a matrix (or vector) giving linear combinations of coefficients by rows, or a character vector giving the hypothesis in symbolic form (see Details).

rhs — right-hand-side vector for the hypothesis; can be omitted, in which case it defaults to a vector of zeroes. For a multivariate linear model, rhs is a matrix, defaulting to 0.

singular.ok — if FALSE (the default), a model with aliased coefficients produces an error; if TRUE, the aliased coefficients are ignored, and the hypothesis matrix should not have columns for them.
idata — for a multivariate linear model, an optional data frame giving a factor or factors defining the intra-subject model for repeated-measures data.

idesign — a one-sided model formula using the "data" in idata and specifying the intra-subject design.

check.imatrix — check that the columns of the intra-subject model matrix are orthogonal (default TRUE). Set to FALSE only if you have already checked that the intra-subject model matrix is block-orthogonal.

P — transformation matrix to be applied to the repeated measures; if NULL and no intra-subject model is specified, no response-transformation is applied; if an intra-subject model is specified, P is generated automatically from the intra-subject model matrix.

SSPE — in the linearHypothesis method for mlm objects: an optional error sum-of-squares-and-products matrix; if missing, it is computed from the model. In the print method for linearHypothesis.mlm objects: if TRUE, print the error sum-of-squares-and-products matrix.

test — character string, "F" or "Chisq", specifying whether to compute the finite-sample F statistic (with approximate F distribution) or the large-sample Chi-squared statistic (with asymptotic Chi-squared distribution).

vcov. — a function for estimating the covariance matrix of the regression coefficients, e.g., hccm, or an estimated covariance matrix for model. See also white.adjust.

white.adjust — logical or character; a convenience interface to hccm (instead of using the argument vcov.). Can be set either to a character value specifying the type argument of hccm, or to TRUE.

verbose — if TRUE, the hypothesis matrix, right-hand-side vector (or matrix), and estimated value of the hypothesis are printed to standard output; if FALSE (the default), the hypothesis is printed only in symbolic form and its estimated value is not printed.

x — an object produced by linearHypothesis.mlm.

SSP — if TRUE (the default), print the sum-of-squares and cross-products matrix for the hypothesis and the response-transformation matrix.

Value

For a univariate model, an object of class "anova", which contains the residual degrees of freedom in the model, the difference in degrees of freedom, the Wald statistic (either "F" or "Chisq"), and the corresponding p value. For a multivariate linear model, an object of class "linearHypothesis.mlm", which contains sums-of-squares-and-products matrices for the hypothesis and for error, degrees of freedom for the hypothesis and error, and some other information. The returned object normally would be printed.

Details

linearHypothesis
computes either a finite-sample F statistic or asymptotic Chi-squared
statistic for carrying out a Wald-test-based comparison between a model
and a linearly restricted model. The default method will work with any
model object for which the coefficient vector can be retrieved by
coef
and the coefficient-covariance matrix by vcov
(otherwise
the argument vcov.
has to be set explicitly). For computing the
F statistic (but not the Chi-squared statistic) a df.residual
method needs to be available. If a formula
method exists, it is
used for pretty printing.
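The computation performed by the default method can be sketched in a few lines of base R. The sketch below is only an illustration of the Wald-test arithmetic described above, not car's actual implementation; it uses the built-in mtcars data and a hypothesis matrix L testing that both slopes are zero:

```r
# Sketch of the Wald test computed from coef() and vcov():
#   W = (L b - rhs)' (L V L')^{-1} (L b - rhs), with df = nrow(L)
mod <- lm(mpg ~ wt + hp, data = mtcars)
L   <- rbind(c(0, 1, 0),   # hypothesis: wt = 0
             c(0, 0, 1))   #             hp = 0
rhs <- c(0, 0)
b <- coef(mod)
V <- vcov(mod)
d <- L %*% b - rhs                                 # discrepancy from the hypothesis
W <- drop(t(d) %*% solve(L %*% V %*% t(L)) %*% d)  # Chi-squared form
p <- pchisq(W, df = nrow(L), lower.tail = FALSE)
# The finite-sample F form divides W by nrow(L) and uses df.residual():
Fstat <- W / nrow(L)
pF    <- pf(Fstat, nrow(L), df.residual(mod), lower.tail = FALSE)
```

Because this particular hypothesis sets all non-intercept coefficients to zero, the resulting F statistic coincides with the overall F statistic reported by summary(mod).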
The method for "lm"
objects calls the default method, but it
changes the default test to "F"
, supports the convenience argument
white.adjust
(for backwards compatibility), and enhances the output
by the residual sums of squares. For "glm"
objects just the default
method is called (bypassing the "lm"
method). The svyglm
method
also calls the default method.
The function lht
also dispatches to linearHypothesis
.
The hypothesis matrix can be supplied as a numeric matrix (or vector),
the rows of which specify linear combinations of the model coefficients,
which are tested equal to the corresponding entries in the right-hand-side
vector, which defaults to a vector of zeroes.
Alternatively, the
hypothesis can be specified symbolically as a character vector with one
or more elements, each of which gives either a linear combination of
coefficients, or a linear equation in the coefficients (i.e., with both
a left and right side separated by an equals sign). Components of a
linear expression or linear equation can consist of numeric constants, or
numeric constants multiplying coefficient names (in which case the number
precedes the coefficient, and may be separated from it by spaces or an
asterisk); constants of 1 or -1 may be omitted. Spaces are always optional.
Components are separated by plus or minus signs. See the examples below.
A linear hypothesis for a multivariate linear model (i.e., an object of
class "mlm"
) can optionally include an intra-subject transformation matrix
for a repeated-measures design.
If the intra-subject transformation is absent (the default), the multivariate
test concerns all of the corresponding coefficients for the response variables.
There are two ways to specify the transformation matrix for the
repeated measures:
P
argument.idata
, with default contrasts given by theicontrasts
argument. An intra-subject model-matrix is generated from the one-sided formula
specified by theidesign
argument; columns of the model matrix
corresponding to different terms in the intra-subject model must be orthogonal
(as is insured by the default contrasts). Note that the contrasts given inicontrasts
can be overridden by assigning specific contrasts to the
factors inidata
.
The repeated-measures transformation matrix consists of the
columns of the intra-subject model matrix corresponding to the term or terms
initerms
. In most instances, this will be the simpler approach, and
indeed, most tests of interests can be generated automatically via theAnova
function.matchCoefs
is a convenience function that can sometimes help in formulating hypotheses; for example
matchCoefs(mod, ":")
will return the names of all interaction coefficients in the model mod
.anova
, Anova
, waldtest
,
hccm
, vcovHC
, vcovHAC
,
coef
, vcov
mod.davis <- lm(weight ~ repwt, data=Davis)
## the following are equivalent:
linearHypothesis(mod.davis, diag(2), c(0,1))
linearHypothesis(mod.davis, c("(Intercept) = 0", "repwt = 1"))
linearHypothesis(mod.davis, c("(Intercept)", "repwt"), c(0,1))
linearHypothesis(mod.davis, c("(Intercept)", "repwt = 1"))
## use asymptotic Chi-squared statistic
linearHypothesis(mod.davis, c("(Intercept) = 0", "repwt = 1"), test = "Chisq")
## the following are equivalent:
## use HC3 standard errors via white.adjust option
linearHypothesis(mod.davis, c("(Intercept) = 0", "repwt = 1"),
white.adjust = TRUE)
## covariance matrix *function*
linearHypothesis(mod.davis, c("(Intercept) = 0", "repwt = 1"), vcov = hccm)
## covariance matrix *estimate*
linearHypothesis(mod.davis, c("(Intercept) = 0", "repwt = 1"),
vcov = hccm(mod.davis, type = "hc3"))
mod.duncan <- lm(prestige ~ income + education, data=Duncan)
## the following are all equivalent:
linearHypothesis(mod.duncan, "1*income - 1*education = 0")
linearHypothesis(mod.duncan, "income = education")
linearHypothesis(mod.duncan, "income - education")
linearHypothesis(mod.duncan, "1income - 1education = 0")
linearHypothesis(mod.duncan, "0 = 1*income - 1*education")
linearHypothesis(mod.duncan, "income-education=0")
linearHypothesis(mod.duncan, "1*income - 1*education + 1 = 1")
linearHypothesis(mod.duncan, "2income = 2*education")
mod.duncan.2 <- lm(prestige ~ type*(income + education), data=Duncan)
coefs <- names(coef(mod.duncan.2))
## test against the null model (i.e., only the intercept is not set to 0)
linearHypothesis(mod.duncan.2, coefs[-1])
## test all interaction coefficients equal to 0
linearHypothesis(mod.duncan.2, coefs[grep(":", coefs)], verbose=TRUE)
linearHypothesis(mod.duncan.2, matchCoefs(mod.duncan.2, ":"), verbose=TRUE) # equivalent
## a multivariate linear model for repeated-measures data
## see ?OBrienKaiser for a description of the data set used in this example.
mod.ok <- lm(cbind(pre.1, pre.2, pre.3, pre.4, pre.5,
post.1, post.2, post.3, post.4, post.5,
fup.1, fup.2, fup.3, fup.4, fup.5) ~ treatment*gender,
data=OBrienKaiser)
coef(mod.ok)
## specify the model for the repeated measures:
phase <- factor(rep(c("pretest", "posttest", "followup"), c(5, 5, 5)),
levels=c("pretest", "posttest", "followup"))
hour <- ordered(rep(1:5, 3))
idata <- data.frame(phase, hour)
idata
## test the four-way interaction among the between-subject factors
## treatment and gender, and the intra-subject factors
## phase and hour
linearHypothesis(mod.ok, c("treatment1:gender1", "treatment2:gender1"),
title="treatment:gender:phase:hour", idata=idata, idesign=~phase*hour,
iterms="phase:hour")
## mixed-effects models examples:
library(nlme)
example(lme)
linearHypothesis(fm2, "age = 0")
library(lme4)
example(lmer)
linearHypothesis(gm1, matchCoefs(gm1, "period"))