Methods for "ref.grid" objects

Use these methods to summarize, print, plot, or examine objects of class "ref.grid". They also apply to the class "lsmobj", which is an extension of "ref.grid".
"summary"(object, infer, level, adjust, by, type, df, null, delta, side, ...)
"predict"(object, type, ...)
"str"(object, ...)
"rbind"(..., deparse.level = 1, adjust = "mvt")
"["(x, i, adjust = "mvt", ...)
"print"(x, ...)
"print"(x, ..., digits = NULL, quote = FALSE, right = TRUE)
"xtable"(x, caption = NULL, label = NULL, align = NULL, digits = 4, display = NULL, auto = FALSE, ...)
"xtable"(x, caption = NULL, label = NULL, align = NULL, digits = 4, display = NULL, auto = FALSE, ...)
"print"(x, type = getOption("xtable.type", "latex"), include.rownames = FALSE, sanitize.message.function = footnotesize, ...)
"plot"(x, y, type, intervals = TRUE, comparisons = FALSE, alpha = 0.05, adjust = "tukey", int.adjust, ...)
"plot"(x, y, horizontal = TRUE, xlab, ylab, layout, ...)
"vcov"(object, ...)
regrid (object, transform = c("response", "log", "none"), inv.log.lbl = "response")
"as.mcmc"(x, names = TRUE, ...)
"ref.grid"
.
infer[1]
is TRUE
.
plot
. See Details.
by
variables.
print.xtable
). This only has an effect if there is a known transformation or link function. "response"
specifies that the inverse transformation be applied. Other valid values are "link"
, "lp"
, and "linear"
; these are equivalent, and request that results be shown for the linear predictor. The default is "link"
, unless the "predict.type"
option is in force; see lsm.options
.Note that type
is also an argument for the print.xtable
method; it is passed to print.xtableList
in the xtable package.
NA
specifies asymptotic results).
by
group in the displayed table).side
. See Details for how the test statistics are defined.-1
, "-"
, code"<", "left", or "nonsuperiority"
); right-tailed (1
, "+"
, ">"
, "right"
, or "noninferiority"
); or two-sided (0
, 2
, "!="
, "two-sided"
, "both"
, "equivalence"
, or "="
).",>
rbind
method, but ignored by its ref.grid
method.TRUE
, confidence intervals are plotted for each estimateTRUE
, comparison arrows are added to the plot, in such a way that the degree to which arrows overlap reflects as much as possible the significance of the comparison of the two estimates.alpha
argument to use in constructing comparison arrows.adjust
setting (see update
). (Note: the adjust
argument in plot
sets the adjust method for the comparison arrows, not the confidence intervals.)"response"
, the inverse transformation is applied to the estimates in the grid; if "log"
, the results are formulated as if the response had been log
-transformed; if "none"
, predictions thereof are on the same scale as in object
, and any internal transformation information is preserved. For compatibility with past versions, transform
may also be logical; TRUE
is taken as "response"
, and FALSE
as "none"
.transform = "log"
, and is used to label the predictions if subsequently summarized with type = "response"
.as.mcmc
result -- e.g., column names of treat A
and treat B
versus just A
and B
. When there is more than one variable involved, the elements of names
are used cyclically.print.data.frame
,
xtableList
, print.xtableList
,
update
, or dotplot
as appropriate. If not specified, appropriate defaults are used. For example, the default layout
is one column of horizontal panels or one row of vertical panels.summary
method for "ref.grid"
objects returns an object of class "summary.ref.grid"
, which extends "data.frame"
. xtable
returns an object of class "xtable.lsm"
, as explained in details. plot
returns an object of class "trellis"
. vcov
returns the covariance matrix of the product of the object's linfct
and bhat
slots. as.mcmc
returns a coda mcmc
object.
Details

The misc slot in object contains default values for by, infer, level, adjust, type, null, side, and delta. These defaults vary depending on the code that created the object. The update method may be used to change these defaults. In addition, any options set using lsm.options(summary = ...) will trump those stored in the object's misc slot.

Transformations and links:
With type = "response", the transformation assumed can be found in object@misc$tran, and its label, for the summary, is in object@misc$inv.lbl. At this time, tran must be one of the named transformations valid for make.link. Any $t$ or $z$ tests are still performed on the scale of the linear predictor, not the inverse-transformed one. Similarly, confidence intervals are computed on the linear-predictor scale, then inverse-transformed.
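As a brief illustration of this behavior (a sketch reusing the Titanic logistic model fitted in the Examples below), confidence limits are computed on the logit scale and then inverse-transformed to probabilities:

```r
# Sketch, assuming the Titan.glm model constructed in the Examples section
library(lsmeans)
Titan <- do.call("expand.grid", dimnames(Titanic)[-4])
Titan$Died <- matrix(Titanic, ncol = 2)
Titan.glm <- glm(Died ~ (Class + Sex + Age)^2, family = binomial, data = Titan)
# Intervals are computed on the link (logit) scale, then back-transformed:
summary(lsmeans(Titan.glm, ~ Sex, at = list(Age = "Adult")), type = "response")
```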
Confidence-limit and P-value adjustments:
The adjust argument specifies a multiplicity adjustment for tests or confidence intervals. This adjustment always is applied separately to each table or sub-table that you see in the printed output (see the details on rbind below for how to combine tables). The valid values of adjust are as follows:

"tukey": Uses the Studentized range distribution with the number of means in the family. (Available for two-sided cases only.)

"scheffe": Computes P values or critical values from the F distribution, using the Scheffe critical value based on the family size.

"sidak": Makes adjustments as if the estimates were independent (a conservative adjustment in many cases).

"bonferroni": Multiplies P values, or divides significance levels, by the number of estimates. This is a conservative adjustment.

"dunnettx": Uses an approximation to the Dunnett distribution for a family of estimates having pairwise correlations of 0.5, as is true when comparing treatments with a control with equal sample sizes. (Available for two-sided cases only.)

"mvt": Uses the multivariate t distribution to assess the probability or critical value for the maximum of the family of estimates. This is equivalent to applying the summary or confint methods to the results of as.glht. In the context of pairwise comparisons or comparisons with a control, this produces exact Tukey or Dunnett adjustments, respectively. However, the algorithm (from the mvtnorm package) uses a Monte Carlo method, so results are not exactly repeatable unless the random-number seed is used (see set.seed). As the family size increases, the required computation time will become noticeable or even intolerable, making the "tukey", "dunnettx", or other methods more attractive.

"none": Makes no adjustments to the P values.

For P-value adjustments only, the other Bonferroni-inequality-based adjustment methods in p.adjust are also available. If a p.adjust.methods method other than "bonferroni" or "none" is specified for confidence limits, the straight Bonferroni adjustment is used instead.

Also, if an adjustment method is not appropriate (e.g., using "tukey" with one-sided tests, or with results that are not pairwise comparisons), a more appropriate method (usually "sidak") is substituted.

In some cases, confidence and $p$-value adjustments are only approximate -- especially when the degrees of freedom or standard errors vary greatly within the family of tests. The "mvt" method is always the correct one-step adjustment, but it can be very slow. One may use as.glht with methods in the multcomp package to obtain non-conservative multi-step adjustments to tests.
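As a quick comparison of these options (a sketch using the same warpbreaks model as the Examples below):

```r
# Sketch: comparing adjust methods on one family of pairwise comparisons
library(lsmeans)
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
tension.prs <- pairs(lsmeans(warp.lm, ~ tension))
summary(tension.prs, adjust = "tukey")  # exact for pairwise comparisons
set.seed(1)                             # "mvt" uses Monte Carlo sampling
summary(tension.prs, adjust = "mvt")    # near-identical to "tukey" here
summary(tension.prs, adjust = "none")   # unadjusted P values
```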
xtable-related methods:
The xtable methods actually use xtableList, because of the ability to display messages such as those for P-value adjustments. These methods return an object of class "xtable.lsm" -- an extension of "xtableList". Unlike other xtable methods, the number of digits defaults to 4; and degrees of freedom and t ratios are always formatted independently of digits. The print method uses print.xtableList, and any ... arguments are passed there.
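For example (a sketch, assuming the xtable package is installed alongside lsmeans), a table of LS means can be exported to LaTeX:

```r
# Sketch: exporting a summary table via the xtable method
library(lsmeans)
library(xtable)   # assumed installed
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
warp.xt <- xtable(lsmeans(warp.lm, ~ tension), digits = 3)
print(warp.xt)    # dispatches to print.xtableList; emits LaTeX by default
```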
rbind and [ methods:
rbind can be used to combine two or more reference grids into one. The "[" method for ref.grid objects may be used to obtain a subset. The primary reason for doing either is to redefine the family of tests to which a P-value adjustment method applies. In rbind, the variables defined in the objects' grids are merged into one grid, and the returned object has no by variables and the multiplicity adjustment method set to "mvt" (as this is likely the only appropriate one). rbind throws an error if there are any mismatches among the dimensions, fixed-effect coefficients, or covariance matrices.
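A minimal sketch of redefining the family via "[" (an rbind example appears in the Examples section below):

```r
# Sketch: keep only a subset of comparisons as the family for adjustment
library(lsmeans)
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
tension.prs <- pairs(lsmeans(warp.lm, ~ tension))
tension.prs[c(1, 3)]   # P values re-adjusted over this smaller family ("mvt")
```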
Non-estimable cases:
When the model is rank-deficient, each row x of object's linfct slot is checked for estimability. If sum(x*bhat) is found to be non-estimable, then an NA is displayed for the estimate (as well as any associated statistics). This check is performed using the orthonormal basis N in the nbasis slot for the null space of the rows of the model matrix. Estimability fails when $||Nx||^2 / ||x||^2$ exceeds tol, which by default is 1e-8. You may change it via lsm.options by setting estble.tol to the desired value.
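To see this in action, here is a sketch in which deleting one cell of the warpbreaks design makes the corresponding LS mean non-estimable:

```r
# Sketch: a rank-deficient fit arising from an empty factor-combination cell
library(lsmeans)
wb <- subset(warpbreaks, !(wool == "B" & tension == "H"))
def.lm <- lm(breaks ~ wool * tension, data = wb)
summary(lsmeans(def.lm, ~ wool:tension))  # the B,H cell is displayed as NA
```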
More on tests:
When delta = 0, test statistics are of the usual form (estimate - null)/SE; notationally, $t = (Q - \theta_0)/SE$, where $Q$ is our estimate of $\theta$. Then left, right, or two-sided $p$ values are produced, depending on side.

When delta is positive, the test statistic depends on side as follows.

Left-sided (nonsuperiority, $H_0: \theta \ge \theta_0 + \delta$ versus $H_1: \theta < \theta_0 + \delta$): $t = (Q - \theta_0 - \delta)/SE$. The $p$ value is the lower-tail probability.

Right-sided (noninferiority, $H_0: \theta \le \theta_0 - \delta$ versus $H_1: \theta > \theta_0 - \delta$): $t = (Q - \theta_0 + \delta)/SE$. The $p$ value is the upper-tail probability.

Two-sided (equivalence, $H_0: |\theta - \theta_0| \ge \delta$ versus $H_1: |\theta - \theta_0| < \delta$): $t = (|Q - \theta_0| - \delta)/SE$. The $p$ value is the lower-tail probability.
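The two-sided case above can be sketched as follows (using the warpbreaks model from the Examples; the margin delta = 5 is an arbitrary value chosen for illustration):

```r
# Sketch: equivalence test of the wool comparison with margin delta = 5
library(lsmeans)
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
summary(pairs(lsmeans(warp.lm, ~ wool)), delta = 5, side = "=")
```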
Plots:
The plot method for "lsmobj" or "summary.ref.grid" objects (but not "ref.grid" objects themselves) produces a plot displaying confidence intervals for the estimates. If any by variables are in force, the plot is divided into separate panels. These functions use the dotplot function, and thus require that the lattice package be installed. For "summary.ref.grid" objects, the ... arguments in plot are passed only to dotplot, whereas for "lsmobj" objects, the object is updated using ... before summarizing and plotting.

In plots with comparisons = TRUE, the resulting arrows are only approximate, and in some cases may fail to accurately reflect the pairwise comparisons of the estimates -- especially when estimates having large and small standard errors are intermingled in just the wrong way.
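A sketch of a plot with comparison arrows (assuming the lattice package is installed; the warpbreaks model is the same one used in the Examples below):

```r
# Sketch: intervals plus comparison arrows; note that 'adjust' governs the
# arrows while 'int.adjust' governs the plotted confidence intervals
library(lsmeans)
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
plot(lsmeans(warp.lm, ~ tension), comparisons = TRUE, alpha = .05,
     int.adjust = "none", xlab = "LS mean breaks")
```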
Re-gridding:
The regrid function reparameterizes an existing ref.grid so that its linfct slot is the identity matrix and its bhat slot consists of the estimates at the grid points. If transform is TRUE, the inverse transform is applied to the estimates. Outwardly, the summary after applying regrid is identical to what it was before (using type = "response" if transform is TRUE). But subsequent contrasts will be conducted on the transformed scale -- which is the reason this function exists. See the example below. In cases where the degrees of freedom depend on the linear function being estimated, the d.f. from the reference grid are saved, and a kind of containment method is substituted in the returned object, whereby the calculated d.f. for a new linear function will be the minimum d.f. among those having nonzero coefficients. This is an ad hoc method, and it can over-estimate the degrees of freedom in some cases.
MCMC samplers:
When the object's post.beta slot is non-trivial, as.mcmc will return an mcmc object that can be summarized or plotted using methods in the coda package. Specifically, post.beta is transformed by post-multiplying it by t(linfct), creating a sample from the posterior distribution of LS means.
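A sketch of this workflow, assuming a model fitted with the MCMCglmm package (one of the Bayesian model classes supported by lsmeans, so that post.beta is non-trivial; the data argument is needed for such objects):

```r
# Sketch: posterior sample of LS means from an MCMCglmm fit
library(lsmeans)
library(MCMCglmm)   # assumed installed
mc.mod <- MCMCglmm(breaks ~ wool * tension, data = warpbreaks, verbose = FALSE)
mc.lsm <- lsmeans(mc.mod, ~ tension, data = warpbreaks)
mc.post <- as.mcmc(mc.lsm)   # coda mcmc object; columns are the LS means
summary(mc.post)             # posterior means and quantiles via coda
```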
"lsmobj"
class can be found in contrast
, cld
, and glht
. Also, test
and confint
are essentially front-ends for summary
, so additional examples may be found there.
Examples

require(lsmeans)
warp.lm <- lm(breaks ~ wool * tension, data = warpbreaks)
warp.rg <- ref.grid(warp.lm)
str(warp.rg)
summary(warp.rg)
summary(warp.rg, by = "wool",
infer = c(TRUE, FALSE), level = .90, adjust = "sidak")
# Do all pairwise comparisons within rows or within columns,
# all considered as one family of tests:
w.t <- pairs(lsmeans(warp.rg, ~ wool | tension))
t.w <- pairs(lsmeans(warp.rg, ~ tension | wool))
rbind(w.t, t.w)
# Transformed response
sqwarp.rg <- ref.grid(update(warp.lm, sqrt(breaks) ~ .))
summary(sqwarp.rg)
# Back-transformed results - compare with summary of 'warp.rg'
summary(sqwarp.rg, type = "response")
# But differences of sqrts can't be back-transformed
summary(pairs(sqwarp.rg, by = "wool"), type = "response")
# We can do it via regrid
sqwarp.rg2 <- regrid(sqwarp.rg)
summary(sqwarp.rg2) # same as for sqwarp.rg with type = "response"
pairs(sqwarp.rg2, by = "wool")
# Logistic regression
# Reshape the Titanic data
Titan <- do.call("expand.grid", dimnames(Titanic)[-4])
Titan$Died <- matrix(Titanic, ncol=2)
Titan.glm <- glm(Died ~ (Class + Sex + Age)^2,
family = binomial, data = Titan)
Titan.lsm <- lsmeans(Titan.glm, ~ Class|Sex, at = list(Age="Adult"))
summary(Titan.lsm, type="response")
summary(pairs(Titan.lsm), type="response")
# Nonsuperiority test: Is any class no more likely to die than
# the 1st class passengers?
summary(contrast(Titan.lsm, "trt.vs.ctrl1"), delta = 1,
adjust = "none", side = "<")
# Plot 90% CIs on the response scale
plot(Titan.lsm, type = "response", level = .90,
xlab = "Predicted probability of drowning")