
plgp (version 1.0)

PL: Particle Learning Skeleton Method

Description

Implements the Particle Learning (PL) sequential Monte Carlo algorithm on the data sequence provided, using re-sample and propagate steps.

Usage

PL(data, start, end, init, lpredprob, propagate, prior = NULL,
   addpall = NULL, params = NULL, save = NULL, P = 100,
   progress = 10, cont = FALSE, verb = 1)

Arguments

data
function generating the data; for examples see data.GP
start
a scalar integer specifying the starting time; the data entry/sample where PL will start
end
a scalar integer specifying the ending time; the data entry/sample where PL will stop
init
function used to initialize the particles at the start of PL; for examples see draw.GP
lpredprob
function used to calculate the predictive probability of an observation (usually the next one in time) given a particle. This is the primary function used in the PL re-sample step; for examples see lpredprob.GP
propagate
function used to propagate particles given an observation (usually the next one in time); for examples see propagate.GP
prior
function used to generate prior parameters that may be passed into the data, init, lpredprob and propagate functions as needed; for examples see prior.GP
addpall
an optional function that adds the new observation (usually the next one in time) to the global pall variable, which stores the sufficient information shared by all particles; for examples see addpall.GP
params
an optional function, called every progress rounds, that collects parameters from the particles for summary and visualization; for examples see params.GP
save
an optional function that is called every round to save some information about the particles
P
number of particles to use
progress
number of PL rounds after which to collect params and draw histograms; a non-positive value or params = NULL skips the progress meter
cont
if TRUE then PL will try to use the existing set of particles to continue where it left off; start and end should be specified appropriately when continuing
verb
if nonzero, then screen prints will indicate the proportion of PL updates finished so far; verb = 1 will cause PL to pause on progress drawings for inspection

Value

  • PL only returns the peach global variable, containing sufficient information particular to each (of the P) particles

Details

Uses the PL SMC algorithm via the functions provided. This function is just a skeleton framework; the hard work is in specifying the arguments/functions that perform the calculations needed in the re-sample and propagate steps.
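The re-sample and propagate loop that PL drives can be sketched in base R on a toy conjugate model: learning a Gaussian mean with known unit variance under a N(0,1) prior. The names lpredprob, propagate, and peach mirror this page's terminology, but the code below is an illustrative stand-in for the skeleton's mechanics, not the plgp API; particles carry sufficient statistics (n, s) plus a posterior draw theta, a simplification of what the *.GP functions do.

```r
## Toy resample-and-propagate loop in base R (not the plgp API).
## Model: y_t ~ N(theta, 1) with theta ~ N(0, 1) a priori.
set.seed(1)
y <- rnorm(50, mean = 2)   ## the data stream
P <- 100                   ## number of particles

## per-particle state: sufficient stats (n, s) and a posterior draw theta
peach <- replicate(P, list(n = 0, s = 0, theta = rnorm(1)),
                   simplify = FALSE)

## predictive (log) probability of the next observation given a particle;
## this plays the role of PL's lpredprob argument in the re-sample step
lpredprob <- function(yt, p) dnorm(yt, p$theta, 1, log = TRUE)

## update sufficient stats and redraw theta from the conjugate posterior
## N(s/(n+1), 1/(n+1)); this plays the role of PL's propagate argument
propagate <- function(p, yt) {
  n <- p$n + 1
  s <- p$s + yt
  list(n = n, s = s, theta = rnorm(1, s / (n + 1), sqrt(1 / (n + 1))))
}

for (yt in y) {
  lw <- vapply(peach, function(p) lpredprob(yt, p), numeric(1))
  w <- exp(lw - max(lw))                              ## stabilized weights
  idx <- sample.int(P, P, replace = TRUE, prob = w)   ## re-sample step
  peach <- lapply(peach[idx], propagate, yt = yt)     ## propagate step
}

## the particle cloud should now concentrate near the true mean (2)
mean(vapply(peach, function(p) p$theta, numeric(1)))
```

In the real PL, the same two hooks do the heavy lifting for GP regression and classification models, which is why the function is described as a skeleton.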

PL uses the global variables pall, containing sufficient information common to all particles; peach, containing sufficient information particular to each of the P particles; and psave, containing any saved information.
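As a base-R illustration of this shared-global pattern: the names pall and peach come from this page, but the list layout and the addpall.toy helper are assumptions for illustration only, not the package's actual structures.

```r
## Toy sketch of PL's global-variable pattern (layout is assumed).
pall <- list(X = matrix(numeric(0), 0, 1),  ## inputs shared by all particles
             Y = numeric(0))                ## outputs shared by all particles
peach <- replicate(3, list(s = 0), simplify = FALSE)  ## one entry per particle

## an addpall-style updater: append a new observation to the shared pall
addpall.toy <- function(x, y) {
  pall$X <<- rbind(pall$X, x)
  pall$Y <<- c(pall$Y, y)
}

addpall.toy(matrix(0.5, 1, 1), 1.2)
nrow(pall$X)  ## -> 1
```

Keeping the shared data in pall once, rather than copying it into each particle, is what keeps per-particle state (peach) small.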

Note that PL is designed to be fast for sequential updating (of GPs) when new data arrive. This facilitates efficient sequential design of experiments by active learning techniques, e.g., optimization by expected improvement and sequential exploration of classification label boundaries by the predictive entropy. PL is not optimized for static inference when all of the data arrive at once, in batch.

References

Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). Particle Learning and Smoothing. Discussion Paper 2008-32, Duke University Dept. of Statistical Science.

Gramacy, R. and Polson, N. (2010). Particle learning of Gaussian process models for sequential design and optimization. Tech. Rep. arXiv:0909.5262, University of Cambridge.

Gramacy, R. and Lee, H. (2010). Optimization under unknown constraints. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press.

http://www.statslab.cam.ac.uk/~bobby/plgp.html

See Also

papply, draw.GP, data.GP, lpredprob.GP, propagate.GP, params.GP, pred.GP

Examples

## See the demos via demo(package="plgp"); it is important to
## run them with the ask=FALSE argument so that the
## automatically generated plots may refresh automatically
## (without requiring the user to press RETURN)
## Illustrates regression GPs on a simple 1-d sinusoidal
## data generating mechanism
demo("plgp_sin1d", ask=FALSE)

## Illustrates classification GPs on a simple 2-d exponential
## data generating mechanism
demo("plcgp_exp", ask=FALSE)

## Illustrates classification GPs on Ripley's Cushings data
demo("plcgp_cush", ask=FALSE)

## Illustrates active learning via the expected improvement
## statistic on a simple 1-d data generating mechanism
demo("plgp_exp_ei", ask=FALSE)

## Illustrates active learning via entropy with classification
## GPs on a simple 2-d exponential data generating mechanism
demo("plcgp_exp_entropy", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization
## under known constraints on a simple 1-d data generating
## mechanism
demo("plgp_1d_ieci", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 1-d data generating
## mechanism
demo("plconstgp_1d_ieci", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 2-d data generating
## mechanism
demo("plconstgp_2d_ieci", ask=FALSE)
