BayesTree (version 0.3-1.3)

bart: Bayesian Additive Regression Trees

Description

BART is a Bayesian sum-of-trees model. For numeric response $y$, we have $y = f(x) + \epsilon$, where $\epsilon \sim N(0,\sigma^2)$. For a binary response $y$, $P(Y=1 | x) = F(f(x))$, where $F$ denotes the standard normal cdf (probit link).

In both cases, $f$ is the sum of many tree models. The goal is to have very flexible inference for the unknown function $f$.

In the spirit of ensemble models, each tree is constrained by a prior to be a weak learner so that it contributes a small amount to the overall fit.

Usage

bart(
   x.train, y.train, x.test=matrix(0.0,0,0),
   sigest=NA, sigdf=3, sigquant=.90,
   k=2.0,
   power=2.0, base=.95,
   binaryOffset=0,
   ntree=200,
   ndpost=1000, nskip=100,
   printevery=100, keepevery=1, keeptrainfits=TRUE,
   usequants=FALSE, numcut=100, printcutoffs=0,
   verbose=TRUE)
## S3 method for class 'bart':
plot(
   x,
   plquants=c(.05,.95), cols =c('blue','black'),
   ...)

Arguments

x.train
Explanatory variables for training (in sample) data. May be a matrix or a data frame, with (as usual) rows corresponding to observations and columns to variables. If a variable is a factor in a data frame, it is replaced with dummies. Note that q dummies are created if q > 2 and one dummy is created if q = 2, where q is the number of levels of the factor.
y.train
Dependent variable for training (in sample) data. If y is numeric, a continuous response model is fit (normal errors). If y is a factor (or just has values 0 and 1), then a binary response model with a probit link is fit.
x.test
Explanatory variables for test (out of sample) data. Should have same structure as x.train. bart will generate draws of $f(x)$ for each $x$ which is a row of x.test.
sigest
The prior for the error variance ($\sigma^2$) is inverted chi-squared (the standard conditionally conjugate prior). The prior is specified by choosing the degrees of freedom, a rough estimate of the corresponding standard deviation, and a quantile to put this rough estimate at. If sigest=NA, the rough estimate will be the usual least-squares estimate; otherwise the supplied value is used. Not used if y is binary.
sigdf
Degrees of freedom for error variance prior. Not used if y is binary.
sigquant
The quantile of the prior that the rough estimate (see sigest) is placed at. The closer the quantile is to 1, the more aggressive the fit will be, as you are putting more prior weight on error standard deviations ($\sigma$) less than the rough estimate. Not used if y is binary.
k
For numeric y, k is the number of prior standard deviations $E(Y|x) = f(x)$ is away from +/-.5. The response (y.train) is internally scaled to range from -.5 to .5. For binary y, k is the number of prior standard deviations $f(x)$ is away from +/-3. In both cases, the bigger k is, the more conservative the fitting will be.
power
Power parameter for tree prior.
base
Base parameter for tree prior.
binaryOffset
Used for binary $y$. The model is $P(Y=1 | x) = F(f(x) + binaryOffset)$. The idea is that $f$ is shrunk towards 0, so the offset allows you to shrink towards a probability other than .5 (see the sketch after this argument list).
ntree
The number of trees in the sum.
ndpost
The number of posterior draws after burn in; ndpost/keepevery draws will actually be returned.
nskip
Number of MCMC iterations to be treated as burn in.
printevery
As the MCMC runs, a message is printed every printevery draws.
keepevery
Every keepevery draw is kept to be returned to the user. A draw consists of a value of the error standard deviation ($\sigma$) and $f^*(x)$ at $x$ = rows of the training (optionally) and test data, where $f^*$ denotes the current draw of $f$.
keeptrainfits
If true the draws of $f(x)$ for $x$ = rows of x.train are returned.
usequants
Decision rules in the tree are of the form $x \le c$ vs. $x > c$ for each variable corresponding to a column of x.train. usequants determines how the set of possible c is determined. If usequants is true, then the c are a subset of the observed values of the corresponding column of x.train; otherwise, the c are equally spaced across the range of values taken on by that column.
numcut
The number of possible values of c (see usequants). If a single number is given, this is used for all variables. Otherwise a vector with length equal to ncol(x.train) is required, where the $i^{th}$ element gives the number of c used for the $i^{th}$ variable.
printcutoffs
The number of cutoff rules c to be printed to the screen before the MCMC is run. If a single integer is given, the same value is used for all variables. If 0, nothing is printed.
verbose
Logical; if FALSE, printing is suppressed.
x
Value returned by bart which contains the information to be plotted.
plquants
In the plots, beliefs about $f(x)$ are indicated by plotting the posterior median and a lower and upper quantile. plquants is a double vector of length two giving the lower and upper quantiles.
cols
Vector of two colors. First color is used to plot the median of $f(x)$ and the second color is used to plot the lower and upper quantiles.
...
Additional arguments passed on to plot.
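For example, for a binary response one might center the probit at the observed success rate rather than .5 via binaryOffset. A minimal sketch (the simulated data, the qnorm(mean(y01)) choice, and all object names here are illustrative, not package defaults):

library(BayesTree)
##simulate a small binary-response data set (illustrative only)
set.seed(1)
X   = matrix(runif(200*3),200,3)
p   = pnorm(2*X[,1]-1)      #true P(Y=1|x)
y01 = rbinom(200,1,p)       #0/1 response triggers the probit model
##shrink towards the observed success rate rather than .5
fit = bart(X,y01,binaryOffset=qnorm(mean(y01)),ntree=50,ndpost=200,nskip=50)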

Value

  • The plot method sets mfrow to c(1,2) and makes two plots. The first plot is the sequence of kept draws of $\sigma$ including the burn-in draws; initially these draws decline as BART finds a good fit, then level off when the MCMC has burnt in. The second plot has $y$ on the horizontal axis and posterior intervals for the corresponding $f(x)$ on the vertical axis.

    bart returns a list assigned class bart. In the numeric $y$ case, the list has components:

  • yhat.train: A matrix with (ndpost/keepevery) rows and nrow(x.train) columns. Each row corresponds to a draw $f^*$ from the posterior of $f$ and each column corresponds to a row of x.train. The $(i,j)$ value is $f^*(x)$ for the $i^{th}$ kept draw of $f$ and the $j^{th}$ row of x.train. Burn-in is dropped.
  • yhat.test: Same as yhat.train but now the x's are the rows of the test data.
  • yhat.train.mean: train data fits = mean of yhat.train columns.
  • yhat.test.mean: test data fits = mean of yhat.test columns.
  • sigma: post burn in draws of sigma, length = ndpost/keepevery.
  • first.sigma: burn-in draws of sigma.
  • varcount: A matrix with (ndpost/keepevery) rows and ncol(x.train) columns. Each row is for a draw. For each variable (corresponding to the columns), the total count of the number of times that variable is used in a tree decision rule (over all trees) is given.
  • sigest: The rough error standard deviation ($\sigma$) used in the prior.
  • y: The input vector of values of the dependent variable; used by plot.bart.
  • In the binary $y$ case, the returned list has the components yhat.train, yhat.test, and varcount as above. In addition, the list has a binaryOffset component giving the value used.

Note that in the binary $y$ case, yhat.train and yhat.test are $f(x)$ + binaryOffset. If you want draws of the probability $P(Y=1 | x)$, you need to apply the normal cdf (pnorm) to these values.
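For instance, a small sketch of that conversion (the object name fit is illustrative and assumes a binary-response run of bart):

probs = pnorm(fit$yhat.train)  #draws of P(Y=1|x); binaryOffset is already included
phat  = colMeans(probs)        #posterior mean probability for each training row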

Details

BART is a Bayesian MCMC method. At each MCMC iteration, we produce a draw from the joint posterior $(f,\sigma) | (x,y)$ in the numeric $y$ case and just $f$ in the binary $y$ case.

Thus, unlike many other modelling methods in R, bart does not produce a single model object from which fits and summaries may be extracted. The output consists of values $f^*(x)$ (and $\sigma^*$ in the numeric case), where $*$ denotes a particular draw. The $x$ is either a row from the training data (x.train) or the test data (x.test).
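Posterior summaries are therefore computed directly from the returned draw matrices. A brief sketch, assuming fit is the list returned by a numeric-$y$ run of bart with x.test supplied (names are illustrative):

##posterior median and 90% interval of f(x) at each test row
fsum = apply(fit$yhat.test, 2, quantile, probs=c(.05,.5,.95))
##kept post-burn-in draws of the error standard deviation
summary(fit$sigma)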

References

Chipman, H., George, E., and McCulloch, R. (2010) Bayesian Additive Regression Trees. The Annals of Applied Statistics, 4(1), 266-298.

Chipman, H., George, E., and McCulloch, R. (2006) Bayesian Ensemble Learning. Advances in Neural Information Processing Systems 19, Scholkopf, Platt and Hoffman, Eds., MIT Press, Cambridge, MA, 265-272.

Friedman, J.H. (1991) Multivariate adaptive regression splines. The Annals of Statistics, 19, 1-67.

See Also

pdbart

Examples

##simulate data (example from Friedman MARS paper)
f = function(x) {
   10*sin(pi*x[,1]*x[,2]) + 20*(x[,3]-.5)^2 + 10*x[,4] + 5*x[,5]
}
sigma = 1.0  #y = f(x) + sigma*z , z~N(0,1)
n = 100      #number of observations
set.seed(99)
x=matrix(runif(n*10),n,10) #10 variables, only first 5 matter
Ey = f(x)
y=Ey+sigma*rnorm(n)
lmFit = lm(y~.,data.frame(x,y)) #compare lm fit to BART later
##run BART
set.seed(99)
bartFit = bart(x,y,ndpost=200) #default is ndpost=1000; reduced here to run the example quickly
plot(bartFit) # plot bart fit
##compare BART fit to the linear model fit and the truth Ey
fitmat = cbind(y,Ey,lmFit$fitted,bartFit$yhat.train.mean)
colnames(fitmat) = c('y','Ey','lm','bart')
print(cor(fitmat))
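##a hedged extension, not in the original example: draws of f at held-out
##x via x.test (xtest and bartFit2 are illustrative names)
set.seed(99)
xtest = matrix(runif(20*10),20,10) #20 new observations, same 10 columns
bartFit2 = bart(x,y,x.test=xtest,ndpost=200)
print(bartFit2$yhat.test.mean)                               #posterior mean of f at each test row
print(apply(bartFit2$yhat.test,2,quantile,probs=c(.05,.95))) #90% posterior intervals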
