Certain objects are affected by optional arguments to functions that construct ref.grid
or lsmobj
objects, including ref.grid
, lsmeans
, lstrends
, and lsmip
. When arguments are mentioned in the subsequent object-by-object documentation, we are talking about arguments in these constructors.
Additional models can be supported by writing appropriate recover.data
and lsm.basis
methods. See extending-lsmeans
and vignette("extending")
for details.
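As a rough sketch of the shape these methods take -- using a hypothetical class "mymod", and assuming (for illustration only) that coef(), vcov(), and df.residual() work for such objects -- one might write something like the following; see vignette("extending") for the authoritative requirements on each component:

    ## Sketch only: "mymod" is a hypothetical class; the internals shown are assumptions
    recover.data.mymod <- function(object, ...) {
        fcall <- object$call
        # delegate to the method for calls, passing the fixed-effects terms and na.action
        recover.data(fcall, delete.response(terms(object)), object$na.action, ...)
    }

    lsm.basis.mymod <- function(object, trms, xlev, grid, ...) {
        m <- model.frame(trms, grid, na.action = na.pass, xlev = xlev)
        X <- model.matrix(trms, m, contrasts.arg = object$contrasts)
        list(X = X,                                   # linear functions for the grid
             bhat = coef(object),                     # coefficient estimates
             nbasis = matrix(NA),                     # no rank deficiency handled here
             V = vcov(object),                        # covariance matrix of bhat
             dffun = function(k, dfargs) dfargs$df,   # same d.f. for every estimate
             dfargs = list(df = df.residual(object)),
             misc = list())
    }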
The lm
support often extends to a number of model objects that inherit from it, such as rlm
in the MASS package and rsm
in the rsm package.

When a model has a multivariate response, the dimensions of that response are treated as levels of a factor named rep.meas by default. The mult.name argument may be used to change this name. The mult.levs argument may specify a named list of one or more sets of levels. If this has more than one element, then the multivariate levels are expressed as combinations of the named factor levels via the function expand.grid.
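For example, a sketch with a hypothetical multivariate fit whose six response columns encode a 2 x 3 repeated-measures structure (all names here are made up):

    ## Hypothetical multivariate linear model
    mvmod <- lm(cbind(y11, y12, y13, y21, y22, y23) ~ treat, data = mydata)
    rg <- ref.grid(mvmod,
                   mult.levs = list(dose = c("lo", "hi"),
                                    time = c("t1", "t2", "t3")))
    lsmeans(rg, ~ dose * time | treat)

The level combinations produced via expand.grid must correspond, in order, to the response columns.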
Support for aovlist objects is limited. To avoid strong biases in the predictions, the contrasts
attribute of all factors should be of a type that sums to zero -- for example, "contr.sum"
, "contr.poly"
, or "contr.helmert"
but not "contr.treatment"
. Only intra-block estimates of covariances are used. That is, if a factor appears in more than one error stratum, only the covariance structure from its lowest stratum is used in estimating standard errors. Degrees of freedom are obtained using the Satterthwaite method. In general, aovList
support is most reliable with balanced designs and when due caution is exercised in the use of contrasts. If a vcov.
argument is supplied, it must yield a single covariance matrix for the unique fixed effects, and the degrees of freedom are set to NA.
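A sketch, using a hypothetical split-plot dataset (the data frame and variable names are made up), fitted with sum-to-zero contrasts as recommended:

    options(contrasts = c("contr.sum", "contr.poly"))   # sum-to-zero, per the advice above
    fit <- aov(yield ~ variety * fert + Error(block/plot), data = field)
    lsmeans(fit, ~ fert | variety)   # intra-block covariances, Satterthwaite d.f.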
Support for mixed objects has been removed. Version 0.14 and later of afex provides new object classes with their own lsmeans support.

For betareg objects, the additional mode
argument has possible values of "response"
, "link"
, "precision"
, "phi.link"
, "variance"
, and "quantile"
, which have the same meaning as the type
argument in predict.betareg
-- with the addition that "phi.link"
is like "link"
, but for the precision portion of the model. When mode = "quantile"
is specified, the additional argument quantile
(a numeric scalar or vector) specifies which quantile(s) to compute; the default is 0.5 (the median). Also in "quantile"
mode, an additional variable quantile
is added to the reference grid, and its levels are the values supplied.
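For instance, with a hypothetical betareg fit (bfit, trt, conc, and mydata are made up):

    library(betareg)
    bfit <- betareg(propn ~ trt + conc, data = mydata)

    lsmeans(bfit, ~ trt, mode = "link")        # linear predictor of the mean model
    lsmeans(bfit, ~ trt, mode = "quantile", quantile = c(0.25, 0.5, 0.75))
    # in "quantile" mode, the grid gains a 'quantile' factor with levels 0.25, 0.5, 0.75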
For certain supported Bayesian model objects, the user must supply (via the data argument) the dataset used in fitting the model. As with other MCMC-based objects, the summaries and such are frequentist, but the as.mcmc
method provides a posterior sample of the desired quantities.

Currently, gam
objects are not supported. Past versions of lsmeans appeared to support gam
models owing to inheritance from lm
, but the results were incorrect because spline features were ignored. We now explicitly trap gam
objects to avoid these misleading analyses.

Generalized estimating equation models (e.g., those produced by the gee and geepack packages) have more than one covariance estimate available, and a particular one may be selected by supplying a string as the vcov.method
argument. It is partially matched with the available choices; thus, for example, vcov.method = "n" translates to vcov.method = "naive".
For gee objects, one may specify vcov.method as "robust" (the default) or "naive". For geeglm and geese objects, one may specify vcov.method
as "vbeta"
(the default), "vbeta.naiv"
, "vbeta.j1s"
, or "vbeta.fij"
. The aliases "robust"
(for "vbeta"
) and "naive"
(for "vbeta.naiv") are also accepted.
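For example, with hypothetical fits gee_fit (from gee) and geeglm_fit (from geepack):

    lsmeans(gee_fit,    ~ trt, vcov.method = "naive")      # gee: "robust" (default) or "naive"
    lsmeans(geeglm_fit, ~ trt, vcov.method = "vbeta.j1s")  # geepack: one of the "vbeta" choices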
If a matrix or function is supplied as vcov.method, it is interpreted as a vcov.
specification as described for ...
in ref.grid
.

For lmerMod objects, if the pbkrtest package is installed, degrees of freedom for confidence intervals and tests are obtained using its ddf_Lb
function, and the covariance matrix is adjusted using vcovAdj
.
If pbkrtest is not installed, the covariance matrix is not adjusted, degrees of freedom are set to NA
, and asymptotic results are displayed. The user may disable the use of pbkrtest via lsm.options(disable.pbkrtest=TRUE) (this does not disable the pbkrtest package entirely, just its use in lsmeans). The df
argument may be used to specify some other degrees of freedom. Specifying df
is not equivalent to disabling pbkrtest, because if not disabled, the covariance matrix is still adjusted. On a related matter: for very large objects, computation time or memory use may be excessive. The amount required depends roughly on the number of observations, N, in the design matrix (because a major part of the computation involves inverting an N x N matrix). Thus, pbkrtest is automatically disabled if N exceeds the value of get.lsm.option("pbkrtest.limit")
. If desired, the user may use lsm.options
to adjust this limit from the default of 3000.
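For instance, with a hypothetical lmerMod fit named lmm:

    lsmeans(lmm, ~ trt)                    # K-R-adjusted vcov and d.f. if pbkrtest is installed
    lsmeans(lmm, ~ trt, df = 10)           # user-specified d.f.; the vcov adjustment still applies
    lsm.options(disable.pbkrtest = TRUE)   # turn off the use of pbkrtest in lsmeans
    lsm.options(pbkrtest.limit = 5000)     # raise the automatic-disable threshold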
Some other classes are supported by virtue of inheritance, for example objects that inherit from lme in the nlme package or from glm.

For polr objects, there are two optional arguments: mode
and rescale
(which defaults to c(0,1)). For details, see the documentation below regarding the support for the ordinal package, which produces comparable objects (but since polr
does not support scale models, mode="scale"
is not supported).
Tests and confidence intervals are asymptotic.
Some other classes are supported simply by inheritance from lm.

For MCMCglmm objects, the dataset used in fitting the model must be supplied via the data
argument. In addition, the contrasts
specifications are not recoverable from the object, so the system default must match what was actually used in fitting the model. The usual summary
, test
, etc. methods provide frequentist analyses of the results based on the posterior means and covariances. However, an as.mcmc
method is provided that creates an mcmc
object that can be summarized or plotted using the coda package. It provides a posterior sample of lsmeans for the given reference grid, based on the posterior sample of the fixed effects from the MCMCglmm
object.
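A sketch, assuming a hypothetical MCMCglmm fit mcfit and the data frame mydata that was used to fit it:

    rg <- ref.grid(mcfit, data = mydata)    # data must be supplied; it is not in the object
    summary(lsmeans(rg, ~ trt))             # frequentist-style summary of posterior means
    library(coda)
    post <- as.mcmc(lsmeans(rg, ~ trt))     # posterior sample of the LS means
    summary(post)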
Some packages produce MCMC results in objects of class mcmc, which contain a sample from the posterior distribution of fixed-effect coefficients. In some cases (e.g., results of MCMCregress
and MCMCpoisson
), the object may include a "call"
attribute that lsmeans
can use to reconstruct the data and obtain a basis for the least-squares means. If not, a formula
and data
argument may be supplied to help produce the right results. In addition, the contrasts
specifications are not recoverable from the object, so the system default must match what was actually used in fitting the model. As for other MCMC-based objects, the summaries and such are frequentist, but the as.mcmc
method provides a posterior sample of the desired quantities.

For gls objects, the degrees of freedom are computed as N - p, using the values of N and p
in object$dims
. This is consistent with nlme:::summary.gls
but seems questionable.

For lme objects, the degrees of freedom are obtained using a containment-like method: the minimum of those elements of object$fixDF$X
receiving nonzero weight (but with a correction to the lme
object's intercept df). (This is similar to SAS's containment method, but I believe SAS does it incorrectly when the estimands are not contrasts.) The optional argument sigmaAdjust
(defaults to TRUE
) will adjust standard errors like in summary.lme
when the model is fitted using the "ML"
method. Note: sigmaAdjust
is comparable to adjustSigma
in summary.lme
but it is renamed to avoid conflicting with adjust.
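For example, with a hypothetical lme fit estimated by maximum likelihood:

    lsmeans(lme_fit, ~ trt)                        # sigmaAdjust = TRUE is the default
    lsmeans(lme_fit, ~ trt, sigmaAdjust = FALSE)   # skip the ML standard-error adjustment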
Support for nlme objects is limited to inferences on parameters appearing in the fixed
part of the model. The user must specify param
in the call and give the name of a parameter that appears in the right-hand side of a fixed
formula. Degrees of freedom are obtained using the containment-like method described above for lme
.

For multinom objects, the reference grid includes a pseudo-factor with the same name and levels as the multinomial response, and there is an optional mode
argument which should match "prob"
or "latent"
. With mode = "prob"
, the reference-grid predictions consist of the estimated multinomial probabilities. The "latent"
mode returns the linear predictor, recentered so that it averages to zero over the levels of the response variable (similar to sum-to-zero contrasts). Thus each latent variable can be regarded as the log probability at that level minus the average log probability over all levels. Please note that, because the probabilities sum to 1 (and the latent values sum to 0) over the multivariate-response levels, all sensible results from lsmeans
must involve that response as one of the factors. For example, if resp
is a response with $k$ levels, lsmeans(model, ~ resp | trt)
will yield the estimated multinomial distribution for each trt
; but lsmeans(model, ~ trt)
will just yield the average probability of $1/k$ for each trt.
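For example, with a hypothetical multinom fit mmod whose response resp has three levels:

    lsmeans(mmod, ~ resp | trt, mode = "prob")     # estimated multinomial distribution per trt
    lsmeans(mmod, ~ resp | trt, mode = "latent")   # centered log-probability scale
    lsmeans(mmod, ~ trt, mode = "prob")            # just 1/3 for every trt -- not useful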
Objects produced by the ordinal package (such as clm and clmm fits) are supported, including those with scale or nominal models. There are two optional arguments: mode
(a character string) and rescale
(which defaults to c(0,1)). mode
should match one of "latent"
(the default), "linear.predictor"
, "cum.prob"
, "exc.prob"
, "prob"
, "mean.class"
, or "scale"
. With mode = "latent", the reference-grid predictions are made on the scale of the latent variable implied by the model. The scale and location of this latent variable are arbitrary, and may be altered via rescale
. The predictions are multiplied by rescale[2], then rescale[1] is added. Keep in mind that the scaling is related to the link function used in the model; for example, changing from a probit link to a logistic link will inflate the latent values by around $\pi/\sqrt{3}$, all other things being equal. rescale
has no effect for other values of mode
. With mode = "linear.predictor", mode = "cum.prob"
, and mode = "exc.prob"
, the boundaries between categories (i.e., thresholds) in the ordinal response are included in the reference grid as a pseudo-factor named cut
. The reference-grid predictions are then of the cumulative probabilities at each threshold (for mode = "cum.prob"
), exceedance probabilities (one minus cumulative probabilities, for mode = "exc.prob"
), or the link function thereof (for mode = "linear.predictor"
). With mode = "prob"
, a pseudo-factor with the same name as the model's response variable is created, and the grid predictions are of the probabilities of each class of the ordinal response. With "mean.class"
, the returned results are means of the ordinal response, interpreted as a numeric value from 1 to the number of classes, using the "prob"
results as the estimated probability distribution for each case. With mode = "scale"
, if the fitted object incorporates a scale model, least-squares means are obtained for the factors in the scale model instead of the response model. The grid is constructed using only the factors in the scale model. Any grid point that is non-estimable by either the location or the scale model (if present) is set to NA
, and any LS-means involving such a grid point will also be non-estimable. A consequence is that if the scale model is rank-deficient, all latent responses become non-estimable, because the predictions are made using the average log-scale estimate. Tests and confidence intervals are asymptotic.
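A sketch, with a hypothetical clm fit (the data frame tasting and its variables are made up):

    library(ordinal)
    cfit <- clm(rating ~ trt + judge, data = tasting)

    lsmeans(cfit, ~ trt)                                         # default: latent scale
    lsmeans(cfit, ~ trt, mode = "latent", rescale = c(50, 10))   # latent values scaled by 10, shifted by 50
    lsmeans(cfit, ~ trt | cut, mode = "cum.prob")                # cumulative probabilities at each threshold
    lsmeans(cfit, ~ rating | trt, mode = "prob")                 # class probabilities
    lsmeans(cfit, ~ trt, mode = "mean.class")                    # mean of the 1, 2, ... class scale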
For hurdle and zeroinfl objects (pscl package), two optional arguments -- mode and lin.pred
-- are provided. The mode
argument has possible values "response"
(the default), "count"
, "zero"
, or "prob0"
. lin.pred
is logical and defaults to FALSE
. With lin.pred = FALSE
, the results are comparable to those returned by predict(..., type = "response")
, predict(..., type = "count")
, predict(..., type = "zero")
, or predict(..., type = "prob")[, 1]
. See the documentation for predict.hurdle
and predict.zeroinfl
. The option lin.pred = TRUE
only applies to mode = "count"
and mode = "zero"
. The results returned are on the linear-predictor scale, with the same transformation as the link function in that part of the model. The predictions for a reference grid with mode = "count"
, lin.pred = TRUE
, and type = "response"
will be the same as those obtained with lin.pred = FALSE
and mode = "count"
; however, any LS means derived from these grids will be different, because the averaging is done on the log-count scale and the actual count scale, respectively -- thereby producing geometric means versus arithmetic means of the predictions. If the vcov.
argument is used (see details in ref.grid
), it must yield a matrix of the same size as would be obtained using vcov.hurdle
or vcov.zeroinfl
with its model
argument set to ("full", "count", "zero")
in respective correspondence with mode
of ("mean", "count", "zero")
.
If vcov.
is a function, it must support the model
argument.
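For instance, with a hypothetical zeroinfl fit:

    library(pscl)
    zfit <- zeroinfl(count ~ trt | trt, data = mydata)

    lsmeans(zfit, ~ trt, mode = "response")                    # mean-response scale
    lsmeans(zfit, ~ trt, mode = "count", lin.pred = TRUE)      # log-count scale (geometric means)
    summary(lsmeans(zfit, ~ trt, mode = "count", lin.pred = TRUE), type = "response")
    # same grid predictions as lin.pred = FALSE, but the LS means are geometric rather than arithmetic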
A note about the rms package: both lsmeans and rms provide contrast
methods, and whichever package is loaded later masks the other. Thus, you may need to call lsmeans::contrast
or rms::contrast
explicitly to access the one you want.
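For example, with a hypothetical model fit (here assumed to be an rms fit so that both functions apply):

    lsm <- lsmeans::lsmeans(fit, ~ trt)
    lsmeans::contrast(lsm, method = "pairwise")              # the lsmeans version
    rms::contrast(fit, list(trt = "A"), list(trt = "B"))     # the rms version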
For ordinal models in the rms package (e.g., those fitted with orm), a mode
argument is provided that works similarly to that for the ordinal package. The available modes are "middle"
(the default), "latent"
, "linear.predictor"
, "cum.prob"
, "exc.prob"
, "prob"
, and "mean.class"
. All are as described for the ordinal package, except as noted below. With mode = "middle"
(this is the default), the middle intercept is used, comparable to the default for rms's Predict
function. This is quite similar in concept to mode = "latent"
, where all intercepts are averaged together. Results for mode = "linear.predictor"
are reversed from those in the ordinal package, because orm
models predict the link function of the upper-tail (exceedance) probabilities. With mode = "prob"
, a pseudo-factor is created having the same name as the model response variable, but its levels are always integers 1, 2, ... regardless of the levels of the original response.
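For instance, with a hypothetical orm fit whose response is rating:

    library(rms)
    ofit <- orm(rating ~ trt + age, data = mydata)

    lsmeans(ofit, ~ trt)                            # default: "middle" intercept
    lsmeans(ofit, ~ trt | cut, mode = "exc.prob")   # exceedance probabilities at each threshold
    lsmeans(ofit, ~ rating | trt, mode = "prob")    # class probabilities; levels labeled 1, 2, ...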
See also ref.grid and lsm.basis.