bbnam(dat, model="actor", ...)
bbnam.fixed(dat, nprior=matrix(rep(0.5,dim(dat)[2]^2),
nrow=dim(dat)[2],ncol=dim(dat)[2]), em=0.25, ep=0.25, diag=FALSE,
mode="digraph", draws=1500, outmode="draws", anames=paste("a",
1:dim(dat)[2],sep=""), onames=paste("o",1:dim(dat)[1], sep=""))
bbnam.pooled(dat, nprior=matrix(rep(0.5,dim(dat)[2]*dim(dat)[3]),
nrow=dim(dat)[2],ncol=dim(dat)[3]), emprior=c(1,1),
epprior=c(1,1), diag=FALSE, mode="digraph", reps=5, draws=1500,
burntime=500, quiet=TRUE, anames=paste("a",1:dim(dat)[2],sep=""),
onames=paste("o",1:dim(dat)[1],sep=""), compute.sqrtrhat=TRUE)
bbnam.actor(dat, nprior=matrix(rep(0.5,dim(dat)[2]*dim(dat)[3]),
nrow=dim(dat)[2],ncol=dim(dat)[3]),
emprior=cbind(rep(1,dim(dat)[1]),rep(1,dim(dat)[1])),
epprior=cbind(rep(1,dim(dat)[1]),rep(1,dim(dat)[1])), diag=FALSE,
mode="digraph", reps=5, draws=1500, burntime=500, quiet=TRUE,
anames=paste("a",1:dim(dat)[2],sep=""),
onames=paste("o",1:dim(dat)[1],sep=""), compute.sqrtrhat=TRUE)
nprior[i,j] gives the prior probability of i sending the relation to j in the criterion graph. If no network prior is provided, an uninformative prior (all cells equal to 0.5, as in the defaults above) is assumed.

By default, the bbnam routine returns (approximately) independent draws from the joint posterior distribution, each draw yielding one realization of the criterion network and one collection of accuracy parameters (i.e., probabilities of false positives/negatives). This is accomplished via a Gibbs sampler in the case of the pooled/actor models, and by direct sampling for the fixed probability model. In the special case of the fixed probability model, it is also possible to obtain the posterior for the criterion graph directly (expressed as a matrix of Bernoulli parameters); this is controlled by the outmode parameter.
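As a minimal sketch of the two output modes (the toy data here are generated with sna's rgraph purely for illustration; any suitable observation array would do):

```r
library(sna)
set.seed(10)
g <- rgraph(5)                                   #Unknown criterion graph
dat <- rgraph(5, 5, tprob = 0.8*g + 0.2*(1-g))   #Five noisy reports on g
np <- matrix(0.5, 5, 5)                          #Uninformative network prior
#Independent posterior draws (the default outmode)
b <- bbnam.fixed(dat, nprior = np, em = 0.2, ep = 0.2, draws = 100)
#The posterior itself, as a matrix of edgewise Bernoulli parameters
p <- bbnam.fixed(dat, nprior = np, em = 0.2, ep = 0.2, outmode = "posterior")
```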
As noted, the taking of posterior draws in the nontrivial case is accomplished via a Markov chain Monte Carlo method, in particular the Gibbs sampler; the high dimensionality of the problem ($O(n^2+2n)$) tends to preclude more direct approaches. At present, chain burn-in is determined ex ante on a more or less arbitrary basis by specification of the burntime parameter. Eventually, a more systematic approach will be utilized. Note that insufficient burn-in will result in inaccurate posterior sampling, so it is unwise to skimp on burn time where this can be avoided. Similarly, it is wise to employ more than one Markov chain (set by reps), since it is possible for trajectories to become ``trapped'' in metastable regions of the state space. Number of draws per chain being equal, more replications are usually better than fewer; consult Gelman et al. for details. A useful measure of chain convergence, Gelman and Rubin's potential scale reduction ($\sqrt{\hat{R}}$), can be computed using the compute.sqrtrhat parameter. The potential scale reduction measure is an ANOVA-like comparison of within-chain versus between-chain variance; it approaches 1 (from above) as the chain converges, and longer burn-in times are strongly recommended for chains with scale reductions in excess of 1.1 or thereabouts.
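A sketch of a multi-chain convergence check along these lines, assuming toy data generated with sna's rgraph and that the fitted object exposes the diagnostic as the sqrtrhat component:

```r
library(sna)
set.seed(10)
g <- rgraph(5)
dat <- rgraph(5, 5, tprob = 0.8*g + 0.2*(1-g))
#Run five replicate chains, comparing within- to between-chain variance
b <- bbnam(dat, model = "pooled", reps = 5, draws = 500, burntime = 300,
   compute.sqrtrhat = TRUE)
b$sqrtrhat   #Values much above ~1.1 suggest a longer burntime is needed
```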
Finally, a cautionary note concerning prior distributions: it is important that the specified priors actually reflect the prior knowledge of the researcher; otherwise, the posterior will be inadequately informed. In particular, note that an uninformative prior on the accuracy probabilities implies that it is a priori equally probable that any given actor's observations will be informative or negatively informative (i.e., that i observing j sending a tie to k reduces p(j->k)). This is a highly unrealistic assumption, and it will tend to produce posteriors which are bimodal (one mode being related to the ``informative'' solution, the other to the ``negatively informative'' solution). A more plausible but still fairly diffuse prior would be Beta(3,5), which reduces the prior probability of an actor's being negatively informative to 0.16, and the prior probability of any given actor's being more than 50% likely to make a particular error (on average) to around 0.22. (This prior also puts substantial mass near the 0.5 point, which would seem consonant with the BKS studies.) Butts (1999) discusses a number of issues related to the choice of priors for the bbnam, and users should consult this reference for guidance before defaulting to the uninformative solution.
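The implications of a Beta(3,5) accuracy prior can be checked directly in base R. The quantities below follow the discussion above, treating ``negatively informative'' as the false negative and false positive rates summing past 1; the Monte Carlo step is purely illustrative:

```r
#Prior probability that a given error rate exceeds 0.5 under Beta(3,5)
p.gt.half <- 1 - pbeta(0.5, 3, 5)   #Roughly 0.22, as noted above
#Prior mean error rate
prior.mean <- 3/(3 + 5)             #0.375, still allowing substantial error
#Monte Carlo estimate of the prior probability that an actor is
#negatively informative (em + ep > 1)
set.seed(1)
em <- rbeta(1e5, 3, 5)
ep <- rbeta(1e5, 3, 5)
p.neg <- mean(em + ep > 1)          #Small relative to the 0.5 implied by
                                    #a uniform prior
```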
Gelman, A.; Carlin, J.B.; Stern, H.S.; and Rubin, D.B. (1995). Bayesian Data Analysis. London: Chapman and Hall.
Gelman, A., and Rubin, D.B. (1992). ``Inference from Iterative Simulation Using Multiple Sequences.'' Statistical Science, 7, 457-511.
Krackhardt, D. (1987). ``Cognitive Social Structures.'' Social Networks, 9, 109-134.
npostpred, event2dichot, bbnam.bf
#Create some example data (illustrative only; any observer-by-n-by-n
#observation array will do -- here five actors each report on a
#5-node digraph)
g<-rgraph(5)
dat<-rgraph(5,5,tprob=0.8*g+0.2*(1-g))

#Define a network prior
pnet<-matrix(ncol=5,nrow=5)
pnet[,]<-0.5
#Define em and ep priors
pem<-matrix(nrow=5,ncol=2)
pem[,1]<-3
pem[,2]<-5
pep<-matrix(nrow=5,ncol=2)
pep[,1]<-3
pep[,2]<-5

#Draw from the posterior
b<-bbnam(dat,model="actor",nprior=pnet,emprior=pem,epprior=pep,
   burntime=100,draws=100)
#Print a summary of the posterior draws
summary(b)