
bayesm (version 3.0-2)

rivDP: Linear "IV" Model with DP Process Prior for Errors

Description

rivDP is a Gibbs Sampler for a linear structural equation with an arbitrary number of instruments. rivDP uses a mixture of normals for the errors of the structural and reduced-form equations, implemented with a Dirichlet Process prior.

Usage

rivDP(Data, Prior, Mcmc)

Arguments

Data

list(z,w,x,y)

Prior

list(md,Ad,mbg,Abg,lambda,Prioralpha,lambda_hyper) (optional)

Mcmc

list(R,keep,nprint,maxuniq,SCALE,gridsize) (R required)

Value

a list containing:

deltadraw

R/keep x dim(delta) array of delta draws

betadraw

R/keep x 1 vector of beta draws

gammadraw

R/keep x dim(gamma) array of gamma draws

Istardraw

R/keep x 1 array of draws of the number of unique normal components

alphadraw

R/keep x 1 array of draws of Dirichlet Process tightness parameter

nmix

a list (of class "bayesm.nmix") containing R/keep draws of the predictive distribution of the errors; see Details
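For illustration, a minimal sketch of inspecting these components, assuming out is the list returned by a call such as the one in the Examples below:

# sketch only: 'out' is assumed to be the return value of rivDP(Data=Data, Mcmc=Mcmc)
dim(out$deltadraw)       # R/keep x dim(delta)
summary(out$betadraw)    # posterior summary of the beta draws
table(out$Istardraw)     # how often each number of unique components was visited
summary(out$alphadraw)   # draws of the DP tightness parameter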

Details

Model: \(x = z'\delta + e_1\) and \(y = \beta x + w'\gamma + e_2\), where \((e_1,e_2)\) \(\sim\) \(N(\theta_{i})\) and \(\theta_{i}\) denotes the pair \((\mu_{i},\Sigma_{i})\).

Note: Error terms have non-zero means. DO NOT include intercepts in the z or w matrices. This is different from rivGibbs which requires intercepts to be included explicitly.

Priors: \(\delta\) \(\sim\) \(N(md,Ad^{-1})\). \(vec(\beta,\gamma)\) \(\sim\) \(N(mbg,Abg^{-1})\)

\(\theta_{i}\) \(\sim\) \(G\)

\(G\) \(\sim\) \(DP(alpha,G_{0})\)

\(G_{0}\) is the natural conjugate prior for \((\mu,\Sigma)\): \(\Sigma\) \(\sim\) \(IW(nu,vI)\) and \(\mu|\Sigma\) \(\sim\) \(N(0,\Sigma \otimes a^{-1})\). These parameters are collected together in the list \(\lambda\). It is highly recommended that you use the default settings for these hyper-parameters.

\(\lambda(a,nu,v):\)

\(a\) \(\sim\) uniform[alim[1],alim[2]]

\(nu\) \(\sim\) dim(data)-1 + exp(z), where \(z\) \(\sim\) uniform[dim(data)-1+nulim[1],nulim[2]]

\(v\) \(\sim\) uniform[vlim[1],vlim[2]]

\(alpha\) \(\sim\) \((1-(alpha-alpha_{min})/(alpha_{max}-alpha_{min}))^{power}\) where \(alpha_{min}\) and \(alpha_{max}\) are set using the arguments in the reference below. It is highly recommended that you use the default values for the hyperparameters of the prior on alpha.
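As a rough sketch of the shape implied by this prior (the bounds below are made-up illustrative values, not the defaults computed from Istarmin and Istarmax):

# sketch only: evaluate the (unnormalized) prior on alpha over a grid, using made-up bounds
alpha_min = 0.1; alpha_max = 10; power = 0.8       # illustrative values only
alpha = seq(alpha_min, alpha_max, length.out = 20)
prior = (1 - (alpha - alpha_min)/(alpha_max - alpha_min))^power
plot(alpha, prior, type = "l")                     # mass is concentrated near alpha_min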

List arguments contain:

Data:

  • z matrix of obs on instruments

  • y vector of obs on lhs var in structural equation

  • x "endogenous" var in structural eqn

  • w matrix of obs on "exogenous" vars in the structural eqn
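For illustration, a sketch of assembling the Data list from hypothetical z, x, y, and w objects (recall that no intercept columns should be added; the Examples below omit w entirely):

# sketch only: z, x, y, w are hypothetical objects with one row/element per observation
Data = list(z=z, x=x, y=y, w=w)   # do NOT include intercept columns in z or w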

Prior:

  • md prior mean of delta (def: 0)

  • Ad pds prior prec for prior on delta (def: .01I)

  • mbg prior mean vector for prior on beta,gamma (def: 0)

  • Abg pds prior prec for prior on beta,gamma (def: .01I)

Prioralpha:

  • Istarmin expected number of components at lower bound of support of alpha (def: 1)

  • Istarmax expected number of components at upper bound of support of alpha

  • power power parameter for alpha prior (def: .8)

lambda_hyper:

  • alim defines support of a distribution (def: c(.01,10))

  • nulim defines support of nu distribution (def: c(.01,3))

  • vlim defines support of v distribution (def: c(.1,4))
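A sketch of spelling out the full Prior list with values matching the stated defaults (Istarmax has no default listed above, so 10 is purely illustrative; in practice the defaults are recommended):

# sketch only: explicit Prior list; the defaults are recommended in practice
dimd  = ncol(z)                     # dimension of delta (hypothetical z as above)
dimbg = 1 + ncol(w)                 # dimension of (beta,gamma) if w is included
Prior = list(md=rep(0,dimd), Ad=0.01*diag(dimd),
             mbg=rep(0,dimbg), Abg=0.01*diag(dimbg),
             Prioralpha=list(Istarmin=1, Istarmax=10, power=0.8),
             lambda_hyper=list(alim=c(0.01,10), nulim=c(0.01,3), vlim=c(0.1,4)))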

Mcmc:

  • R number of MCMC draws

  • keep MCMC thinning parameter: keep every keep-th draw (def: 1)

  • nprint print the estimated time remaining for every nprint'th draw (def: 100)

  • maxuniq storage constraint on the number of unique components (def: 200)

  • SCALE scale data (def: TRUE)

  • gridsize gridsize parm for alpha draws (def: 20)
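A sketch of a complete Mcmc list (only R is required; the remaining values restate the stated defaults):

# sketch only: R is required; the other entries restate the defaults
Mcmc = list(R=2000, keep=1, nprint=100, maxuniq=200, SCALE=TRUE, gridsize=20)
out  = rivDP(Data=Data, Prior=Prior, Mcmc=Mcmc)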

The output also includes the object nmix of class "bayesm.nmix", which contains draws of the predictive distribution of the errors (a Bayesian analogue of a density estimate for the error terms). nmix contains:

  • probdraw not used

  • zdraw not used

  • compdraw list R/keep of draws from bivariate predictive for the errors

Note: in the compdraw list, there is only one component per draw.
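A sketch of working with nmix after a run (plot(out$nmix) also appears in the Examples below):

# sketch only: 'out' is assumed to be the return value of rivDP
plot(out$nmix)               # "fitted" bivariate density of the errors
length(out$nmix$compdraw)    # one element per kept draw (R/keep)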

References

For further discussion, see "A Semi-Parametric Bayesian Approach to the Instrumental Variable Problem," by Conley, Hansen, McCulloch and Rossi, Journal of Econometrics (2008).

See Also

rivGibbs

Examples

##
if(nchar(Sys.getenv("LONG_TEST")) != 0) {R=2000} else {R=10}

##
## simulate scaled log-normal errors and run
##
set.seed(66)
k=10
delta=1.5
Sigma=matrix(c(1,.6,.6,1),ncol=2)
N=1000
tbeta=4
set.seed(66)
scalefactor=.6
root=chol(scalefactor*Sigma)
mu=c(1,1)
##
## compute interquartile ranges
##
ninterq=qnorm(.75)-qnorm(.25)
error=matrix(rnorm(100000*2),ncol=2)%*%root
error=t(t(error)+mu)
Err=t(t(exp(error))-exp(mu+.5*scalefactor*diag(Sigma)))
lnNinterq=quantile(Err[,1],prob=.75)-quantile(Err[,1],prob=.25)
##
## simulate data
##
error=matrix(rnorm(N*2),ncol=2)%*%root
error=t(t(error)+mu)
Err=t(t(exp(error))-exp(mu+.5*scalefactor*diag(Sigma)))
#
# scale appropriately  
Err[,1]=Err[,1]*ninterq/lnNinterq
Err[,2]=Err[,2]*ninterq/lnNinterq
z=matrix(runif(k*N),ncol=k)
x=z%*%(delta*c(rep(1,k)))+Err[,1]
y=x*tbeta+Err[,2]

# set initial values for MCMC
Data = list(); Mcmc=list()
Data$z = z;  Data$x=x; Data$y=y

# start MCMC and keep results
Mcmc$maxuniq=100
Mcmc$R=R
end=Mcmc$R
begin=100

out=rivDP(Data=Data,Mcmc=Mcmc)

cat("Summary of Beta draws",fill=TRUE)
summary(out$betadraw,tvalues=tbeta)

if(0){
## plotting examples
plot(out$betadraw,tvalues=tbeta)
plot(out$nmix)  ## plot "fitted" density of the errors
##

}
