GLMM_MCMC(y, dist="gaussian", id, x, z, random.intercept,
          prior.alpha, init.alpha, init2.alpha,
          scale.b,     prior.b,    init.b,      init2.b,
          prior.eps,   init.eps,   init2.eps,
          nMCMC=c(burn=10, keep=10, thin=1, info=10),
          tuneMCMC=list(alpha=1, b=1),
          store=c(b=FALSE), PED=TRUE, keep.chains=TRUE,
          dens.zero=1e-300, parallel=FALSE)

## S3 method for class 'GLMM_MCMC':
print(x, ...)

## S3 method for class 'GLMM_MCMClist':
print(x, ...)
y: the response(s). If y is a vector then there is only one response in the model. If y is a matrix or data frame then each column gives the values of one response. Missing values are allowed.

x, z: covariates for the fixed effects (x) and the random effects (z) of each response; their roles are analogous to the fixed and random parts of a model formula in, e.g., the lmer function. A random intercept is requested via random.intercept.

random.intercept: logical value(s), one per response, indicating whether a random intercept is included.

prior.b: a list specifying the prior distribution of the (normal mixture) distribution of the random effects; it can have the components listed below.

init.b, init2.b: lists with initial values of the random effects and related mixture parameters for the first and second chain; a sensible value is determined by the program and does not have to be given by the user.

prior.eps: a list specifying the prior distribution of the parameters of the error terms of continuous responses; it can have the components listed below. For all components, a sensible value leading to a weakly informative prior distribution can be determined by the program.

init.eps, init2.eps: lists with initial values of the error-term parameters; they can have the components listed below. For all components, a sensible value can be determined by the program.

keep.chains: logical. If FALSE, only summary statistics are returned in the resulting object. This might be useful in the model searching step to save some memory.

parallel: logical indicating whether parallel computation (based on the snow and snowfall packages) should be used when running the two chains needed for the penalized expected deviance (PED).

The function returns an object of class GLMM_MCMC (or of class GLMM_MCMClist when two parallel chains are run), for which a print method is available. It can have the following components (some of them may be missing according to the context of the model):

nMCMC: the used value of the nMCMC argument.

dist: a copy of the dist argument.

prior.alpha, prior.b, prior.eps: the prior specifications actually used.

init.alpha, init.b, init.eps: the initial values actually used.

The sampled states of the fixed effects are stored as well and can be used as argument init.alpha to restart MCMC.
Two components hold the first and the last sampled state of the random effects and the related mixture parameters; each is a list with elements b, K, w, mu, Sigma, Li, Q, gammaInv, r and can be used as argument init.b to restart MCMC (a restart sketch is given below).

Analogous components hold the first and the last sampled state of the error-term parameters; each is a list with elements sigma, gammaInv and can be used as argument init.eps to restart MCMC.

scale.b: the used value of the scale.b argument.
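A sketch of restarting the sampler from the stored final state, continuing the simulated example above. The component names state.last.alpha and state.last.b are assumptions (the source does not give the actual names); check names(fit) in your version of mixAK.

## Assumed component names (state.last.*); verify with names(fit).
fit2 <- GLMM_MCMC(y = y, dist = "binomial(logit)", id = id,
                  x = data.frame(x1 = x1), random.intercept = TRUE,
                  prior.b = list(Kmax = 2),
                  init.alpha = fit$state.last.alpha,
                  init.b     = fit$state.last.b,
                  nMCMC = c(burn = 10, keep = 100, thin = 1, info = 50),
                  PED = FALSE)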
A data.frame with posterior summary statistics for the deviance (approximated using the Laplacian approximation) and for the conditional (given random effects) deviance.

A data.frame with posterior summary statistics for the fixed effects.
poster.comp.prob1 is a matrix with $K$ columns and $I$ rows ($I$ is the number of subjects defining the longitudinal profiles or correlated observations) with estimated posterior component probabilities: the posterior means of the components of the underlying 0/1 allocation vector.

WARNING: By default, the labels of the components are based on an artificial identifiability constraint derived from the ordering of the mixture means in the first margin. Very often, such an identifiability constraint is not satisfactory!
poster.comp.prob2 is a matrix with $K$ columns and $I$ rows ($I$ is the number of subjects defining the longitudinal profiles or correlated observations) with estimated posterior component probabilities: here the posterior mean is taken over the model parameters, including the random effects.

WARNING: By default, the labels of the components are based on an artificial identifiability constraint derived from the ordering of the mixture means in the first margin. Very often, such an identifiability constraint is not satisfactory! A classification sketch based on these probabilities is given below.
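A sketch of turning the posterior component probabilities into a hard classification of the $I$ subjects, using the fit object from the sketches above; the re-labelling caveat from the warnings applies (the package's NMixRelabel function offers re-labelling algorithms).

## I x K matrix of posterior component probabilities.
p <- fit$poster.comp.prob2

## Hard classification: component with the highest posterior probability.
group <- apply(p, 1, which.max)
table(group)

## Classification uncertainty: probability of the selected component.
summary(apply(p, 1, max))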
A list of data.frames, one data.frame per response profile. Each data.frame has columns labeled id, observed, fitted, stres, eta.fixed and eta.random holding, respectively, the identifier for clusters of grouped observations, the observed values, the posterior means of the fitted values (response expectation given fixed and random effects), the standardized residuals (derived from the fitted values), the fixed effect part of the linear predictor and the random effect part of the linear predictor. In each column, there are first all values for the first response, then all values for the second response, etc. (a small access sketch follows below).
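A sketch of inspecting these fitted values. The component name fit$fit used below is hypothetical (the source does not give the actual name); check names(fit) for the component that holds this list of data.frames.

## Hypothetical component name: replace fit$fit by the actual component
## (see names(fit)) that stores the per-response data.frames.
d1 <- fit$fit[[1]]
head(d1[, c("id", "observed", "fitted", "stres")])

## Observed values against posterior means of the fitted values.
plot(d1$observed, d1$fitted,
     xlab = "Observed", ylab = "Fitted (posterior mean)")
abline(0, 1, lty = 2)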
A data.frame with columns labeled b1, ..., bq, Logpb, Cond.Deviance, Deviance holding the posterior means of the random effects for each cluster, the posterior means of $\log\{p(\boldsymbol{b})\}$, the conditional deviances (i.e., minus twice the conditional, given random effects, log-likelihood for each cluster) and the GLMM deviances (i.e., minus twice the marginal, random effects integrated out, log-likelihood for each cluster). The value of the marginal (random effects integrated out) log-likelihood at each MCMC iteration is obtained using the Laplacian approximation.

order_b: a matrix with $K_b$ columns when $K_b$ is fixed; otherwise a vector with the orders put sequentially after each other.

rank_b: a matrix with $K_b$ columns when $K_b$ is fixed; otherwise a vector with the ranks put sequentially after each other.
A data.frame with columns labeled b.Mean.*, b.SD.* and b.Corr.*.* containing the chains for the means, standard deviations and correlations of the distribution of the random effects (based on a normal mixture) at each iteration.

The chains of the individual random effects are included only if store[b] is TRUE.

Other components related to the normal mixture in the random effects distribution include order_b, rank_b, poster.comp.prob1, poster.comp.prob2, poster.mean.w_b, poster.mean.mu_b, poster.mean.Q_b, poster.mean.Sigma_b and poster.mean.Li_b.

Komárek, A., Hansen, B. E., Kuiper, E. M. M., van Buuren, H. R., and Lesaffre, E. (2010). Discriminant analysis using a multivariate linear mixed model with a normal mixture in the random effects distribution. Statistics in Medicine, 29, 3267-3283.
Plummer, M. (2008). Penalized loss functions for Bayesian model comparison. Biostatistics, 9, 523-539.
See also NMixMCMC.

## See also additional material available in
## YOUR_R_DIR/library/mixAK/doc/
## or YOUR_R_DIR/site-library/mixAK/doc/
## - files PBCseq.pdf,
##         PBCseq.R
## ==============================================
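Finally, a sketch of a multivariate call in the spirit of the PBCseq material referenced above. It assumes the PBC910 data set shipped with mixAK and its id, month, lbili, platelet and spiders columns; the chain lengths and prior settings are illustrative only and much longer runs are needed for serious analyses.

## Sketch only: column names and settings assumed from the PBCseq material.
library(mixAK)
data(PBC910, package = "mixAK")

mod <- GLMM_MCMC(y = PBC910[, c("lbili", "platelet", "spiders")],
                 dist = c("gaussian", "poisson(log)", "binomial(logit)"),
                 id = PBC910[, "id"],
                 x = list(lbili    = "empty",
                          platelet = "empty",
                          spiders  = PBC910[, "month"]),
                 z = list(lbili    = PBC910[, "month"],
                          platelet = PBC910[, "month"],
                          spiders  = "empty"),
                 random.intercept = rep(TRUE, 3),
                 prior.b = list(Kmax = 2),
                 nMCMC = c(burn = 100, keep = 1000, thin = 10, info = 100),
                 parallel = FALSE)
print(mod)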