glmmML (version 0.65-2)

glmmML: Generalized Linear Models with random intercept

Description

Fits GLMs with random intercept by Maximum Likelihood and numerical integration via Gauss-Hermite quadrature.

Usage

glmmML(formula, family = binomial, data, cluster, weights, subset, na.action, 
offset, prior = c("gaussian", "logistic", "cauchy"),
start.coef = NULL, start.sigma = NULL, fix.sigma = FALSE, 
control = list(epsilon = 1e-08, maxit = 200, trace = FALSE),
method = c("Laplace", "ghq"), n.points = 1, boot = 0)

Arguments

formula
a symbolic description of the model to be fit. The details of model specification are given below.
family
Currently, the only valid values are binomial and poisson. The binomial family allows for the logit and cloglog links.
data
an optional data frame containing the variables in the model. By default the variables are taken from `environment(formula)', typically the environment from which `glmmML' is called.
cluster
Factor indicating which items are correlated.
weights
Case weights. Defaults to one.
subset
an optional vector specifying a subset of observations to be used in the fitting process.
na.action
See glm.
start.coef
starting values for the parameters in the linear predictor. Defaults to zero.
start.sigma
starting value for the mixing standard deviation. Defaults to 0.5.
fix.sigma
Should sigma be fixed at start.sigma?
offset
this can be used to specify an a priori known component to be included in the linear predictor during fitting.
prior
Which "prior" distribution (for the random effects)? Possible choices are "gaussian" (default), "logistic", and "cauchy".
control
Controls the convergence criteria. See glm.control for details.
method
There are two choices: "Laplace" (default) and "ghq" (Gauss-Hermite quadrature).
n.points
Number of points in the Gauss-Hermite quadrature. With n.points = 1, Gauss-Hermite reduces to the Laplace approximation.
boot
Do you want a bootstrap estimate of the cluster effect? The default is no (boot = 0). To bootstrap, give a positive integer equal to the number of bootstrap samples you want to draw.
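The interplay of prior, method, n.points, and boot can be sketched as follows. This is an illustrative example, not from the package documentation; the simulated data and variable names are made up, and boot = 100 is kept small only for speed:

```r
library(glmmML)
set.seed(1)
## Simulated clustered binary data (10 clusters of 8 observations)
id <- factor(rep(1:10, each = 8))
x <- rnorm(80)
y <- rbinom(80, size = 1,
            prob = plogis(0.5 * x + rep(rnorm(10), each = 8)))
## Adaptive Gauss-Hermite quadrature with 8 points and a logistic "prior"
fit <- glmmML(y ~ x, family = binomial, cluster = id,
              method = "ghq", n.points = 8, prior = "logistic")
## Bootstrap test of the cluster effect (boot = number of replicates)
fit.boot <- glmmML(y ~ x, family = binomial, cluster = id, boot = 100)
```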

Value

The return value is a list, an object of class 'glmmML', with the following components:

  • boot: Number of bootstrap replicates
  • converged: Logical
  • coefficients: Estimated regression coefficients
  • coef.sd: Their standard errors
  • sigma: The estimated standard deviation of the random effects
  • sigma.sd: Its standard error
  • variance: The estimated variance-covariance matrix. The last column/row corresponds to the log of the standard deviation of the random effects (log(sigma))
  • aic: AIC
  • bootP: Bootstrap p-value from testing the null hypothesis of no random effect (sigma = 0)
  • deviance: Deviance
  • mixed: Logical
  • df.residual: Residual degrees of freedom
  • cluster.null.deviance: Deviance from a glm with no clustering
  • cluster.null.df: Its degrees of freedom
  • posterior.modes: Estimated posterior modes of the random effects
  • terms: The terms object
  • info: From the hessian inversion; should be 0. If not, no variances could be estimated. You could try fixing sigma at the estimated value and rerunning.
  • prior: Which prior was used
  • call: The function call
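The components listed above can be extracted from the fitted object by name. A minimal sketch, assuming glmmML is installed; the simulated data are made up for illustration:

```r
library(glmmML)
set.seed(2)
## Simulated clustered binary data (15 clusters of 6 observations)
id <- factor(rep(1:15, each = 6))
x <- rnorm(90)
y <- rbinom(90, size = 1, prob = plogis(rep(rnorm(15), each = 6)))
fit <- glmmML(y ~ x, family = binomial, cluster = id)
fit$coefficients   # estimated fixed-effect coefficients
fit$coef.sd        # their standard errors
fit$sigma          # random-effects standard deviation
fit$aic            # AIC
fit$info           # 0 means the hessian inverted cleanly
```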

Details

The integrals in the log likelihood function are evaluated by the Laplace approximation (default) or Gauss-Hermite quadrature. The latter is now fully adaptive; however, only approximate estimates of variances are available for the Gauss-Hermite (n.points > 1) method.
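The two integration methods can be compared directly on the same data. This sketch is illustrative (simulated data, names invented here); with adaptive quadrature, the two estimates of sigma are typically close:

```r
library(glmmML)
set.seed(4)
## Simulated clustered binary data (20 clusters of 5 observations)
id <- factor(rep(1:20, each = 5))
y <- rbinom(100, size = 1, prob = plogis(rep(rnorm(20), each = 5)))
fit.laplace <- glmmML(y ~ 1, cluster = id)  # Laplace (default)
fit.ghq <- glmmML(y ~ 1, cluster = id, method = "ghq", n.points = 16)
## Compare the estimated random-effects standard deviations
c(laplace = fit.laplace$sigma, ghq = fit.ghq$sigma)
```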

For the binomial families, the response can be a two-column matrix, see the help page for glm for details.
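A two-column response pairs successes with failures, as in glm. A hedged sketch with made-up grouped binomial data:

```r
library(glmmML)
set.seed(3)
## Grouped binomial data: 40 observations in 10 clusters, 20 trials each
id <- factor(rep(1:10, each = 4))
size <- rep(20, 40)
p <- plogis(rep(rnorm(10), each = 4))
successes <- rbinom(40, size = size, prob = p)
## Two-column response: cbind(successes, failures)
fit <- glmmML(cbind(successes, size - successes) ~ 1,
              family = binomial, cluster = id)
```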

References

Broström (2003). Generalized linear models with random intercepts. http://www.stat.umu.se/forskning/reports/glmmML.pdf

See Also

glmmboot, glm, optim, glmm in Lindsey's repeated package, lmer in Matrix, and glmmPQL in MASS.

Examples

library(glmmML)
set.seed(1)  # for reproducibility
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
glmmML(y ~ x, data = dat, cluster = id)
