This function is called by glmmML, but it can also be called directly by the user.
glmmML.fit(X, Y, weights = rep(1, NROW(Y)),
           cluster.weights = rep(1, NROW(Y)),
           start.coef = NULL, start.sigma = NULL,
           fix.sigma = FALSE, cluster = NULL,
           offset = rep(0, nobs), family = binomial(),
           method = 1, n.points = 1,
           control = list(epsilon = 1.e-8, maxit = 200, trace = FALSE),
           intercept = TRUE, boot = 0, prior = 0)
X: Design matrix of covariates.
Y: Response vector, or a two-column matrix.
weights: Case weights. Defaults to one.
cluster.weights: Cluster weights. Defaults to one.
start.coef: Starting values for the coefficients.
start.sigma: Starting value for the mixing standard deviation.
fix.sigma: Should sigma be fixed at start.sigma?
cluster: The clustering variable.
offset: The offset in the model.
family: Family of distributions. Defaults to binomial with the logit link. Other possibilities are binomial with the cloglog link and poisson with the log link.
method: Laplace (1) or Gauss-Hermite (0)?
n.points: Number of points in the Gauss-Hermite quadrature. The default, n.points = 1, is equivalent to the Laplace approximation (see the sketch after this argument list).
control: Control of the iterations. See glm.control.
intercept: Logical. If TRUE, an intercept is fitted.
boot: Integer. If > 0, bootstrapping with boot replicates.
prior: Which prior distribution? 0 for "gaussian", 1 for "logistic", 2 for "cauchy".
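As a rough illustration of the family, method, and n.points arguments, the sketch below fits a Poisson model on simulated data, assuming (as the argument descriptions above suggest) that method = 0 together with n.points selects Gauss-Hermite quadrature; the data and the choice of eight quadrature points are arbitrary and not taken from the package documentation.

library(glmmML)
set.seed(101)
x <- cbind(rep(1, 20), rnorm(20))   # intercept column plus one covariate
y <- rpois(20, lambda = 2)          # simulated count response
id <- rep(1:5, 4)                   # five clusters of size four
glmmML.fit(x, y, cluster = id, family = poisson(),
           method = 0, n.points = 8)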
The return value is a list. For details, see the code and glmmML.
In the optimisation, the C routine "vmmin" (R's internal BFGS minimiser) is used.
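For example, a tighter convergence criterion, a higher iteration cap, and an iteration trace could be requested through the control list; this is only a sketch with arbitrary control values, fitted to simulated binomial data.

library(glmmML)
set.seed(2)
x <- cbind(rep(1, 30), rnorm(30))   # intercept column plus one covariate
y <- rbinom(30, prob = 0.5, size = 1)
id <- rep(1:10, 3)                  # ten clusters of size three
glmmML.fit(x, y, cluster = id,
           control = list(epsilon = 1e-10, maxit = 500, trace = TRUE))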
Göran Broström
glmmML, glmmPQL in the package MASS, and lmer in the package lme4.
x <- cbind(rep(1, 14), rnorm(14))
y <- rbinom(14, prob = 0.5, size = 1)
id <- rep(1:7, 2)
glmmML.fit(x, y, cluster = id)
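Along the same lines, the boot and prior arguments could be exercised as below; the number of bootstrap replicates and the logistic prior are arbitrary choices for illustration only.

library(glmmML)
set.seed(3)
x <- cbind(rep(1, 14), rnorm(14))
y <- rbinom(14, prob = 0.5, size = 1)
id <- rep(1:7, 2)
## 100 bootstrap replicates; prior = 1 selects the logistic mixing distribution
glmmML.fit(x, y, cluster = id, boot = 100, prior = 1)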