This gets “leverages” or “hat values” from an object. However, there is hidden complexity in what this may mean, so care must be used in selecting the proper arguments for a given use.
# S3 method for HLfit
hatvalues(model, type = "projection", which = "resid", force = FALSE, ...)
model: An object of class HLfit, as returned by the fitting functions in spaMM.

type: Character: "projection", "std", or more cryptic values not documented here.

which: Character: "resid" for the traditional leverages of the observations, "ranef" for random-effect leverages, or "both" for both.

force: Boolean: to force recomputation of the leverages even if they are available in the object, for checking purposes.

...: For consistency with the generic.
A list with separate components resid (leverages of the observations) and ranef (leverages of the random effects) if which="both", and a vector otherwise.
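For illustration, a minimal sketch of the return structure (assuming the wafers dataset shipped with spaMM; the fit itself is arbitrary):

library(spaMM)
data("wafers", package = "spaMM")          # example dataset shipped with spaMM
fit <- fitme(y ~ X1 + (1|batch), data = wafers)
str(hatvalues(fit, which = "both"))        # list with components $resid and $ranef
length(hatvalues(fit, which = "ranef"))    # a plain numeric vector otherwise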
Leverages may have distinct meanings depending on context. The textbook version for linear models is that the leverages \(q_i\) are the diagonal elements of a projection matrix (“hat matrix”), and that they may be used to standardize (“studentize”) residuals as follows. If the residual variance \(\phi\) is known, then the variance of each fitted residual \(\hat{e}_i\) is \(\phi(1-q_i)\). Standardized residuals, all with variance 1, are then \(\hat{e}_i/\sqrt{\phi(1-q_i)}\).
This no longer holds exactly when \(\phi\) is estimated, but if one uses an unbiased (REML) estimator of \(\phi\), the studentized residuals may still have practically unit expected variance. By comparison, one expects a distinct bias with an ML estimator of \(\phi\): the expected variance of such standardized residuals is no longer 1. For example, when a simple linear model is fitted by ML, the variance of the fitted residuals is less than \(\phi\), but \(\hat{\phi}\) is downward biased by roughly the same factor, so that residuals standardized by \(\sqrt{\hat{\phi}}\) alone, without any leverage correction, more closely have unit expected variance.
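A base-R sketch of these two textbook facts for a simple LM (simulated data, not spaMM-specific); in an LM, \(E[\hat{\phi}_{ML}]=\phi(n-p)/n\), which exactly matches the average leverage deflation \(\sum_i(1-q_i)/n=(n-p)/n\):

set.seed(123)
x <- 1:50
y <- 2 + 0.5*x + rnorm(50)
fit <- lm(y ~ x)
q <- hatvalues(fit)                          # diagonal of the projection ("hat") matrix
phi_REML <- summary(fit)$sigma^2             # unbiased (REML-type) estimate of phi
rstd <- residuals(fit)/sqrt(phi_REML*(1-q))  # studentized residuals, ~unit variance
## The ML estimate of phi is downward biased, E[phi_ML] = phi*(n-p)/n ...
n <- length(y); p <- length(coef(fit))
phi_ML <- sum(residuals(fit)^2)/n
## ... matching the mean leverage deflation, so residuals/sqrt(phi_ML)
## have near-unit expected variance without any leverage correction:
c(mean(1-q), (n-p)/n)                        # identical, since sum(q) = p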
Leverages also appear in expressions for derivatives, with respect to the dispersion parameters, of the logdet(Hessian) term of Laplace approximations for the marginal or restricted likelihood (Lee et al. 2006). This provides a basis for generalizing the concept of standardizing leverages for ML and REML in mixed-effect models. In particular, in an ML fit, one considers leverages \(q^*_i\) that are no longer the diagonal elements of the projection matrix for the mixed model (they are zero in a simple LM). The generalized standardizing leverages may include corrections for non-Gaussian response, for non-Gaussian random effects, and for taking into account the variation of the GLM weights in the logdet(Hessian) derivatives. Leverages are also defined for the random effects. Which corrections are included depends on the precise method used to fit the model (e.g., EQL vs. PQL vs. REML).
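For instance, a sketch using the Gamma-GLMM example for the wafers data (assuming, as expected from the above, that the GLM-weight corrections make the two types of values differ for a non-Gaussian response):

library(spaMM)
data("wafers", package = "spaMM")
gfit <- fitme(y ~ X1 + (1|batch), family = Gamma(log), data = wafers,
              method = "REML")
## standardizing leverages include GLM-weight (and other) corrections,
## so they generally differ from the raw projection hat values:
range(hatvalues(gfit, type = "std") - hatvalues(gfit, type = "projection"))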
These distinctions suggest breaking the usual synonymy between “leverages” and “hat values”: the term “hat values” better stands for the diagonal elements of a projection matrix, while “leverages” better stands for the standardizing values.
hatvalues(., type="std")
returns the standardizing leverages. By contrast, hatvalues(., type="projection")
will always return the hat values from the fitted projection matrix. Note that these values still differ between ML and REML fits, because the fitted projection matrix differs between them.
Lee, Y., Nelder, J. A. and Pawitan, Y. (2006) Generalized linear models with random effects: unified analysis via h-likelihood. Chapman & Hall: London.
if (spaMM.getOption("example_maxtime") > 0.8) {
  data("Orthodont", package = "nlme")
  rnge <- 107:108
  # The following three sets of values are all different:
  hatvalues(rlfit <- fitme(distance ~ age+(age|Subject),
                           data = Orthodont, method="REML"))[rnge]  # REML fit
  hatvalues(mlfit <- fitme(distance ~ age+(age|Subject),
                           data = Orthodont))[rnge]                 # ML fit
  hatvalues(mlfit, type="std")[rnge]                                # standardizing leverages
}