NMixMCMC(y0, y1, censor, scale, prior,
         init, init2, RJMCMC,
         nMCMC = c(burn = 10, keep = 10, thin = 1, info = 10),
         PED, keep.chains = TRUE, onlyInit = FALSE, dens.zero = 1e-300)

## S3 method for class 'NMixMCMC'
print(x, ...)

## S3 method for class 'NMixMCMClist'
print(x, ...)
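A minimal sketch of a call following the usage above, assuming the mixAK package is installed. The data and the prior components chosen here (priorK = "fixed", Kmax = 2) are illustrative choices, not defaults:

```r
## Fit a normal mixture with a fixed number of K = 2 components to
## univariate data; 'scale' is left out, so the data are standardized
## internally. Guarded so the sketch is a no-op when mixAK is absent.
if (requireNamespace("mixAK", quietly = TRUE)) {
  set.seed(20042007)
  y <- c(rnorm(70, mean = 0, sd = 1), rnorm(30, mean = 5, sd = 1))
  fit <- mixAK::NMixMCMC(y0 = y,
                         prior = list(priorK = "fixed", Kmax = 2),
                         nMCMC = c(burn = 100, keep = 500,
                                   thin = 1, info = 100),
                         PED = FALSE)
  print(fit)
}
```

With PED = FALSE a single chain is run and an object of class NMixMCMC is returned, which the print method above summarizes.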
Arguments:

y0: numeric vector (for p = 1) or matrix (for p > 1) with the observed
    data, i.e., exactly observed values and limits of censored
    observations.

y1: numeric vector or matrix with upper limits of interval-censored
    observations. Ignored when there is no interval censoring.

censor: numeric vector or matrix with censoring indicators.
    If it is not supplied then it is assumed that all values are exactly
    observed.

scale: a list specifying how to scale the data before running the MCMC,
    with components shift and scale. If there is no censoring and
    argument scale is missing then the data are scaled to have zero mean
    and unit variance.

prior: a list with the prior specification.

init: a list with initial values of the parameters.

init2: a list with the same structure as the list init, used to
    initialize the second chain when PED is set to TRUE.

RJMCMC: a list with parameters of the reversible jump MCMC.

nMCMC: numeric vector of length 4 giving the length of the burn-in
    (burn), the number of kept iterations (keep), the thinning interval
    (thin) and the interval for printing progress information (info).

PED: logical value indicating whether the penalized expected deviance
    (Plummer, 2008) should be computed, which requires running two
    parallel chains.

keep.chains: logical. If FALSE, only summary statistics
    are returned in the resulting object. This might be useful in the
    model searching step to save some memory.

onlyInit: logical. If TRUE then the function only
    determines parameters of the prior distribution, initial values,
    values of scale and
    parameters for the reversible jump MCMC.

dens.zero: a small value; densities below dens.zero are treated as zero.

x: an object of class NMixMCMC or NMixMCMClist to be printed.

...: additional arguments passed to the print method.
Value:

An object of class NMixMCMC is returned if PED is
FALSE. An object of class NMixMCMClist is returned if
PED is TRUE. Objects of class NMixMCMC have the following components:

nMCMC, prior, init, RJMCMC, scale: used values of the corresponding
    arguments.

state: a list with components labeled y, K, w, mu, Li, Q, Sigma,
    gammaInv, r containing the last sampled values of
    generic parameters.

DIC: a data.frame having columns labeled
    DIC, pD, D.bar, D.in.bar containing
    values used to compute the deviance information criterion
    (DIC). Currently only $DIC_3$ of Celeux et al. (2006) is
    implemented.

moves: a data.frame which summarizes the acceptance
    probabilities of the different move types of the sampler.

mixture: a data.frame with columns labeled
    y.Mean.*, y.SD.*, y.Corr.*.*,
    z.Mean.*, z.SD.*, z.Corr.*.* containing the
    chains for the means, standard deviations and correlations of the
    distribution of the original (y) and scaled (z) data
    based on a normal mixture at each iteration.

deviance: a data.frame with columns labeled
    LogL0, LogL1, dev.complete, dev.observed
    containing the chains of quantities needed to compute DIC.

pm.y: a data.frame with $p$ columns with posterior
    means for (latent) values of observed data (useful when there is
    censoring).

pm.z: a data.frame with $p$ columns with posterior
    means for (latent) values of scaled observed data (useful when there
    is censoring).

pm.indDev: a data.frame with columns labeled
    LogL0, LogL1, dev.complete,
    dev.observed, pred.dens containing posterior means of
    individual contributions to the deviance. Note that when there is
    censoring, pred.dens is not exactly the predictive density, as it is
    computed as the average over iterations of the densities evaluated
    at the sampled values of the latent observations.
summ.y.Mean: posterior summary statistics based on the chains stored in
    the y.Mean.* columns of the data.frame mixture.

summ.y.SDCorr: posterior summary statistics based on the chains stored
    in the y.SD.* and y.Corr.*.* columns of the data.frame mixture.

summ.z.Mean: posterior summary statistics based on the chains stored in
    the z.Mean.* columns of the data.frame mixture.

summ.z.SDCorr: posterior summary statistics based on the chains stored
    in the z.SD.* and z.Corr.*.* columns of the data.frame mixture.

Details:

See the accompanying paper (Komarek-NMix.pdf) available in the
vignette section of the package. In the rest of the help file,
the same notation is used as in the paper, namely, $n$ denotes the number of
observations, $p$ is dimension of the data, $K$ is the number
of mixture components,
$w_1,\dots,w_K$ are mixture weights,
$\boldsymbol{\mu}_1,\dots,\boldsymbol{\mu}_K$
are mixture means,
$\boldsymbol{\Sigma}_1,\dots,\boldsymbol{\Sigma}_K$
are mixture variance-covariance matrices,
$\boldsymbol{Q}_1,\dots,\boldsymbol{Q}_K$ are
their inverses.

For the data $\boldsymbol{y}_1,\dots,\boldsymbol{y}_n$ the following density $g_y(\boldsymbol{y})$ is assumed:
$$g_y(\boldsymbol{y}) = |\boldsymbol{S}|^{-1} \sum_{j=1}^K w_j \varphi\bigl(\boldsymbol{S}^{-1}(\boldsymbol{y} - \boldsymbol{m})\,\big|\,\boldsymbol{\mu}_j,\,\boldsymbol{\Sigma}_j\bigr),$$
where $\varphi(\cdot\,|\,\boldsymbol{\mu},\,\boldsymbol{\Sigma})$ denotes the density of the (multivariate) normal distribution with mean $\boldsymbol{\mu}$ and variance-covariance matrix $\boldsymbol{\Sigma}$. Finally, $\boldsymbol{S}$ is a pre-specified diagonal scale matrix and $\boldsymbol{m}$ is a pre-specified shift vector. Setting $\boldsymbol{m}$ to the sample means of the components of $\boldsymbol{y}$ and the diagonal of $\boldsymbol{S}$ to the sample standard deviations of $\boldsymbol{y}$ sometimes yields a (considerable) improvement of the MCMC algorithm.
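For $p = 1$ the density above reduces to a shifted and scaled normal mixture, which can be evaluated directly in base R. The function name gy and all parameter values below are illustrative:

```r
## g_y(y) = s^{-1} * sum_j w_j * dnorm((y - m)/s, mu_j, sigma_j):
## the univariate (p = 1) case of the formula above, where the scale
## matrix S and the shift vector m collapse to scalars s and m.
gy <- function(y, w, mu, sigma, m = 0, s = 1) {
  z <- (y - m) / s
  sapply(z, function(zz) sum(w * dnorm(zz, mean = mu, sd = sigma))) / s
}
w <- c(0.4, 0.6); mu <- c(-1, 2); sigma <- c(0.5, 1)
## A valid density: it integrates to one for any shift m and scale s > 0.
integrate(gy, -Inf, Inf, w = w, mu = mu, sigma = sigma, m = 5, s = 2)$value
```

The leading $|\boldsymbol{S}|^{-1}$ (here $1/s$) is exactly the Jacobian of the transformation $\boldsymbol{z} = \boldsymbol{S}^{-1}(\boldsymbol{y} - \boldsymbol{m})$, which is why the integral stays equal to one for any shift and scale.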
References:

Celeux, G., Forbes, F., Robert, C. P., and Titterington, D. M. (2006). Deviance information criteria for missing data models. Bayesian Analysis, 1, 651--674.

Diebolt, J. and Robert, C. P. (1994). Estimation of finite mixture distributions through Bayesian sampling. Journal of the Royal Statistical Society, Series B, 56, 363--375.
Komárek, A. A new R package for Bayesian estimation of multivariate normal mixtures allowing for selection of the number of components and interval-censored data. Computational Statistics and Data Analysis. To appear.
Plummer, M. (2008). Penalized loss functions for Bayesian model comparison. Biostatistics, 9, 523--539.

Richardson, S. and Green, P. J. (1997). On Bayesian analysis of mixtures with unknown number of components (with Discussion). Journal of the Royal Statistical Society, Series B, 59, 731--792.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., and van der Linde, A. (2002). Bayesian measures of model complexity and fit (with Discussion). Journal of the Royal Statistical Society, Series B, 64, 583--639.
See Also:

NMixPredDensMarg, NMixPredDensJoint2.

Examples:

## See additional material available in
## YOUR_R_DIR/library/mixAK/doc/
## or YOUR_R_DIR/site-library/mixAK/doc/
## - files Galaxy.pdf, Faithful.pdf, Tandmob.pdf