
DPpackage (version 1.1-0)

HDPMcdensity: Bayesian analysis for a hierarchical Dirichlet Process mixture of normals model for conditional density estimation

Description

This function generates a posterior density sample for a DP mixture of normals model for related random probability measures. Support provided by the NIH/NCI R01CA75981 grant.

Usage

HDPMcdensity(formula, study, xpred, ngrid=100, prior, mcmc,
             state, status, data=sys.frame(sys.parent()),
             na.action=na.fail, work.dir=NULL)

Arguments

formula
a two-sided linear formula object describing the model fit, with the response on the left of a ~ operator and the terms, separated by + operators, on the right.
study
a (1 by nrec) vector of study indicators. The j-th element gives the study that observation j belongs to.
xpred
a matrix giving the covariate values where the predictive density is evaluated.
ngrid
integer giving the number of grid points where the density estimate is evaluated. The default is 100.
prior
a list giving the prior information. The list includes the following parameters: pe1 and pe0 giving the prior weights for the point mass at $\epsilon=1$ and at $\epsilon=0$, respectively; ae and be giving the parameters of the Beta prior for $\epsilon$; a0 and b0 giving the parameters of the Gamma priors for the precision parameters $\alpha_i$; nu and tinv giving the inverted-Wishart prior parameters for $\Sigma$; m0 and S0 giving the normal prior parameters for $\mu_b$; and nub and tbinv giving the inverted-Wishart prior parameters for $\Sigma_b$.
mcmc
a list giving the MCMC parameters. The list must include the following integers: nburn giving the number of burn-in scans, nskip giving the thinning interval, nsave giving the total number of scans to be saved, and ndisplay giving the number of saved scans to be displayed on screen.
state
a list giving the current value of the parameters. This list is used if the current analysis is the continuation of a previous analysis (not available yet).
status
a logical variable indicating whether this run is new (TRUE) or the continuation of a previous analysis (FALSE). In the latter case the current value of the parameters must be specified in state.
data
data frame in which to interpret the variables named in the formula.
na.action
a function that indicates what should happen when the data contain NAs. The default action (na.fail) causes HDPMcdensity to print an error message and terminate if there are NAs in the data.
work.dir
working directory.

Value

  • An object of class HDPMcdensity representing the hierarchical DPM of normals model fit. Generic functions such as print, plot, and summary have methods to show the results of the fit. The results include sigma, eps, the vector of precision parameters alpha, mub and sigmab. The list state in the output object contains the current value of the parameters necessary to restart the analysis. If you want to specify different starting values to run multiple chains, set status=TRUE and create the list state based on these starting values. In this case the list state must include the following objects:
  • ncluster: an integer giving the number of clusters.
  • ss: an integer vector defining to which of the ncluster clusters each subject belongs.
  • sc: an integer vector defining to which DP each cluster belongs. Note that length(sc)=nrec; only the first ncluster elements are considered to start the chain.
  • alpha: a vector of dimension nstudies+1 of precision parameters.
  • muclus: a matrix of dimension (number of subjects + 100) times the total number of variables (responses + predictors), giving the means for each cluster (only the first ncluster rows are considered to start the chain).
  • mub: the mean of the normal baseline distribution.
  • sigmab: the covariance matrix of the normal baseline distribution.
  • sigma: the normal kernel covariance matrix.
  • eps: the value of eps.
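
For illustration, a state list for restarting a chain could be assembled along these lines. This is a hedged sketch: nrec, nstudies, nvar and all starting values below are made-up placeholders, not values from a real fit.

```r
# Sketch: assembling a 'state' list with user-chosen starting values.
# All dimensions and values here are illustrative placeholders.
nrec     <- 50    # hypothetical number of records
nstudies <- 2     # hypothetical number of studies
nvar     <- 10    # responses + predictors (7 + 3 in the calgb example)

state <- list(
  ncluster = 5,                                    # number of clusters
  ss  = sample(1:5, nrec, replace = TRUE),         # cluster of each subject
  sc  = sample(0:nstudies, nrec, replace = TRUE),  # DP of each cluster
  alpha  = rep(1, nstudies + 1),                   # precision parameters
  muclus = matrix(0, nrow = nrec + 100, ncol = nvar),  # cluster means
  mub    = rep(0, nvar),                           # baseline mean
  sigmab = diag(nvar),                             # baseline covariance
  sigma  = diag(nvar),                             # kernel covariance
  eps    = 0.5                                     # common-measure weight
)
```

A list built this way would be passed as the state argument together with status=TRUE.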

Details

This generic function fits a hierarchical mixture of DPM of normals model for conditional density estimation (Mueller, Quintana and Rosner, 2004). Let $d_i=(y_i,x_i)$ be the vector of full data, including the responses $y_i$ and predictors $x_i$. The HDPMcdensity function fits the hierarchical mixture of DPM of normals model to the full data and then looks at the implied conditional distribution of the responses given the predictors. The model is given by: $$d_{ij} | F_i \sim F_i$$ where $d_{ij}$ denotes the j-th observation in the i-th study, i=1,...,I, and $F_i$ is assumed to arise as a mixture $F_i = \epsilon H_0 + (1-\epsilon) H_i$ of one common distribution $H_0$ and a distribution $H_i$ that is specific or idiosyncratic to the i-th study.

The random probability measures $H_i$ in turn are given a Dirichlet process mixture of normals prior. We assume $$H_i(d) = \int N(\mu,\Sigma) d G_i(\mu),~ i=0,1,\ldots,I$$ with $$G_i | \alpha_i, G_0 \sim DP(\alpha_i G_0)$$ where the baseline distribution is $$G_0 = N(\mu| \mu_b,\Sigma_b).$$ To complete the model specification, independent hyperpriors are assumed (optional), $$\Sigma | \nu, T \sim IW(\nu,T)$$ $$\alpha_i | a_{0i}, b_{0i} \sim Gamma(a_{0i},b_{0i})$$ $$\mu_b | m_0, S_0 \sim N(m_0,S_0)$$ $$\Sigma_b | \nu_b, T_b \sim IW(\nu_b,T_b)$$ and $$p(\epsilon) = \pi_0 \delta_0 + \pi_1 \delta_1 + (1 - \pi_0 - \pi_1) Be(a_\epsilon,b_\epsilon)$$
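
As an illustration of the generative model above, one study's data can be forward-simulated in base R with a truncated stick-breaking representation of the DP. This is only a sketch of the univariate case, not the sampler used by HDPMcdensity; the function names and all settings below are made up for the example.

```r
set.seed(1)
# Truncated stick-breaking draw from DP(alpha, G0) with G0 = N(mu_b, sigma_b^2).
rdp_atoms <- function(K, alpha, mu_b, sigma_b) {
  v <- rbeta(K, 1, alpha)                   # stick-breaking fractions
  w <- v * cumprod(c(1, 1 - v[-K]))         # mixture weights
  list(w = w / sum(w), mu = rnorm(K, mu_b, sigma_b))
}

# Simulate d_ij from F_i = eps * H0 + (1 - eps) * Hi, with a normal kernel.
sim_study <- function(n, eps, alpha, mu_b = 0, sigma_b = 2,
                      sigma = 0.5, K = 50) {
  H0 <- rdp_atoms(K, alpha, mu_b, sigma_b)  # common measure
  Hi <- rdp_atoms(K, alpha, mu_b, sigma_b)  # idiosyncratic measure
  from_common <- runif(n) < eps             # eps-mixture indicator
  draw <- function(H) H$mu[sample(K, 1, prob = H$w)]
  mu <- vapply(seq_len(n),
               function(j) if (from_common[j]) draw(H0) else draw(Hi),
               numeric(1))
  rnorm(n, mu, sigma)                       # normal kernel around the atom
}

d <- sim_study(n = 200, eps = 0.3, alpha = 1)
```

Repeating sim_study for several studies with a shared H0 draw would mimic the borrowing of strength across related studies that the model formalizes.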

Note that the inverted-Wishart prior is parametrized such that if $A \sim IW_q(\nu, \psi)$ then $E(A)= \psi^{-1}/(\nu-q-1)$.
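
This parametrization can be checked by Monte Carlo with base R's stats::rWishart: under it, $A \sim IW_q(\nu,\psi)$ corresponds to $A^{-1} \sim W_q(\nu,\psi)$, so averaging the inverses of Wishart draws should recover $\psi^{-1}/(\nu-q-1)$. The dimension and parameter values below are arbitrary.

```r
set.seed(42)
# Monte Carlo check of the inverted-Wishart parametrization:
# if A^{-1} ~ Wishart(nu, psi), then E(A) = psi^{-1} / (nu - q - 1).
q   <- 2
nu  <- 10
psi <- diag(q)                          # arbitrary scale matrix

W <- stats::rWishart(20000, df = nu, Sigma = psi)  # draws of A^{-1}
inv_mean <- Reduce(`+`, lapply(seq_len(dim(W)[3]),
                               function(k) solve(W[, , k]))) / dim(W)[3]

expected <- solve(psi) / (nu - q - 1)   # here: diag(q) / 7
max(abs(inv_mean - expected))           # small Monte Carlo error
```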

References

Mueller, P., Quintana, F. and Rosner, G. (2004). A Method for Combining Inference over Related Nonparametric Bayesian Models. Journal of the Royal Statistical Society, Series B, 66: 735-749.

See Also

predict.HDPMcdensity

Examples

# Data
      data(calgb)
      attach(calgb)
      y <- cbind(Z1,Z2,Z3,T1,T2,B0,B1)
      x <- cbind(CTX,GM,AMOF)
  
      z <- cbind(y,x)

    #  Data for prediction
      data(calgb.pred)
      xpred <- as.matrix(calgb.pred[,8:10])


    # Prior information
      prior <- list(pe1=0.1,
                    pe0=0.1,
                    ae=1,
                    be=1,
                    a0=rep(1,3),
                    b0=rep(1,3),
                    nu=12,
                    tinv=0.25*var(z),
                    m0=apply(z,2,mean),
                    S0=var(z),
                    nub=12,
                    tbinv=var(z))


    # Initial state
      state <- NULL

    # MCMC parameters

      mcmc <- list(nburn=5000,
                   nsave=5000,
                   nskip=3,
                   ndisplay=100)

    # Fitting the model
      fit1 <- HDPMcdensity(formula=y~x,
                           study=~study,
                           xpred=xpred,
                           prior=prior,
                           mcmc=mcmc,
                           state=state,
                           status=TRUE)

    # Posterior inference
      fit1
      summary(fit1)
       
    # Plot the parameters
    # (to see the plots gradually set ask=TRUE)
      plot(fit1,ask=FALSE)

    # Plot a specific parameter
    # (to see the plots gradually set ask=TRUE)
      plot(fit1,ask=FALSE,param="eps",nfigr=1,nfigc=2)

    # Plot the measure for each study 
    # under first values for the predictors, xpred[1,]
      predict(fit1,pred=1,i=1,r=1) # pred1, study 1
      predict(fit1,pred=1,i=2,r=1) # pred1, study 2

    # Plot the measure for each study 
    # under second values for the predictors, xpred[2,]
      predict(fit1,pred=2,i=1,r=1) # pred2, study 1
      predict(fit1,pred=2,i=2,r=1) # pred2, study 2

    # Plot the idiosyncratic measure for each study
    # under first values for the predictors, xpred[1,]
      predict(fit1,pred=1,i=1,r=0) # study 1
      predict(fit1,pred=1,i=2,r=0) # study 2

    # Plot the common measure
    # under first values for the predictors, xpred[1,]
      predict(fit1,pred=1,i=0)
