Acts on a gp, gpvec, dgp2, dgp2vec,
dgp3, or dgp3vec object.
Calculates posterior mean and variance/covariance over specified input
locations. Optionally calculates expected improvement (EI) or entropy
over candidate inputs. Optionally utilizes SNOW parallelization.
Usage

# S3 method for gp
predict(
object,
x_new,
lite = TRUE,
grad = FALSE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)

# S3 method for dgp2
predict(
object,
x_new,
lite = TRUE,
grad = FALSE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
# S3 method for dgp3
predict(
object,
x_new,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
# S3 method for gpvec
predict(
object,
x_new,
m = NULL,
ord_new = NULL,
lite = TRUE,
grad = FALSE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
# S3 method for dgp2vec
predict(
object,
x_new,
m = NULL,
ord_new = NULL,
lite = TRUE,
grad = FALSE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
# S3 method for dgp3vec
predict(
object,
x_new,
m = NULL,
ord_new = NULL,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
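For example, a minimal sketch of how dispatch works across these methods; here fit and grid are hypothetical placeholder names for a trimmed two-layer fit and a matrix of new input locations:

# `fit` is a hypothetical trimmed fit from fit_two_layer(); `grid` is a matrix of new inputs
class(fit)                                   # e.g. "dgp2", so predict() dispatches to the dgp2 method
fit <- predict(fit, x_new = grid, lite = TRUE, cores = 2)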
Value

An object of the same class with the following additional elements:
x_new: copy of predictive input locations
mean: predicted posterior mean, indices correspond to
x_new locations
s2: predicted point-wise variances, indices correspond to
x_new locations (only returned when lite = TRUE)
mean_all: predicted posterior mean for each sample (rows correspond
to iterations), only returned when return_all = TRUE
s2_all: predicted point-wise variances for each sample (rows correspond
to iterations), only returned when return_all = TRUE
Sigma: predicted posterior covariance, indices correspond to
x_new locations (only returned when lite = FALSE)
grad_mean: predicted posterior mean of the gradient (rows correspond
to x_new, columns correspond to dimension, only returned when
grad = TRUE)
grad_s2: predicted point-wise variances of the gradient (rows correspond
to x_new, columns correspond to dimension, only returned when
grad = TRUE)
EI: vector of expected improvement values, indices correspond
to x_new locations (only returned when EI = TRUE)
entropy: vector of entropy values, indices correspond to
x_new locations (only returned when entropy_limit is
numeric)
w_new: array of hidden layer mappings, with dimensions corresponding
to iteration, then x_new location, then dimension (only returned when
store_latent = TRUE)
z_new: array of hidden layer mappings, with dimensions corresponding
to iteration, then x_new location, then dimension (only returned when
store_latent = TRUE)
Computation time is added to the computation time of the existing object.
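For instance, a minimal sketch of accessing these elements after prediction; it assumes fit is a hypothetical trimmed one-layer gp fit on 1-d inputs, and the prediction grid is illustrative:

# `fit` is a hypothetical trimmed fit from fit_one_layer() on 1-d inputs
x_new <- matrix(seq(0, 1, length = 100), ncol = 1)

fit <- predict(fit, x_new, lite = TRUE)                 # point-wise variances in fit$s2
plot(x_new, fit$mean, type = "l")                       # posterior mean
lines(x_new, fit$mean + 2 * sqrt(fit$s2), lty = 2)      # approximate 95% bounds
lines(x_new, fit$mean - 2 * sqrt(fit$s2), lty = 2)

fit <- predict(fit, x_new, lite = FALSE)                # full covariance in fit$Sigma
dim(fit$Sigma)                                          # 100 x 100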
Arguments

object: object from fit_one_layer, fit_two_layer, or
fit_three_layer with burn-in already removed

x_new: vector or matrix of predictive input locations

lite: logical indicating whether to calculate only point-wise
variances (lite = TRUE) or full covariance (lite = FALSE)

grad: logical indicating whether to additionally calculate/return
predictions of the gradient (one- and two-layer models only)

return_all: logical indicating whether to return mean and point-wise
variance predictions for ALL samples (only available for lite = TRUE)

EI: logical indicating whether to calculate expected improvement
(for minimizing the response)

entropy_limit: optional limit state for entropy calculations (separating
passes and failures); the default value of NULL bypasses entropy
calculations

cores: number of cores to utilize for SNOW parallelization

...: N/A

store_latent: logical indicating whether to store and return mapped values
of latent layers (two- and three-layer models only)

mean_map: logical indicating whether to map hidden layers using the
conditional mean (mean_map = TRUE) or using a random sample from the
full MVN distribution (two- and three-layer models only)

m: size of Vecchia conditioning sets, defaults to the lower of twice the
m used for MCMC or the maximum available (only for fits with
vecchia = TRUE)

ord_new: optional ordering for the Vecchia approximation with lite = FALSE;
must correspond to rows of x_new, defaults to random, and is
applied to all layers in deeper models
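As a rough sketch of the Vecchia-specific arguments m and ord_new, assuming fit_vec is a hypothetical trimmed fit produced with vecchia = TRUE; the conditioning-set size and grid are illustrative:

# `fit_vec` is a hypothetical trimmed fit from a call with vecchia = TRUE
x_new <- matrix(runif(200), ncol = 1)

fit_vec <- predict(fit_vec, x_new, m = 10, lite = TRUE)     # smaller conditioning sets
fit_vec <- predict(fit_vec, x_new, lite = FALSE,
                   ord_new = sample(nrow(x_new)))           # custom ordering (lite = FALSE only)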
Details

All iterations in the object are used for prediction, so samples
should be burned-in. Thinning the samples using trim will speed
up computation. Posterior moments are calculated using conditional
expectation and variance. As a default, only point-wise variance is
calculated. Full covariance may be calculated using lite = FALSE.

Expected improvement is calculated with the goal of minimizing the
response. See Chapter 7 of Gramacy (2020) for details. Entropy is
calculated based on two classes separated by the specified limit.
See Sauer (2023, Chapter 3) for details.

SNOW parallelization reduces computation time but requires more
memory storage.
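For instance, a rough sketch of sequential-design style use of EI and entropy_limit, assuming fit is a hypothetical trimmed fit from one of the model classes above; the candidate grid and the limit value 0.5 are illustrative only:

# `fit` is a hypothetical trimmed fit; x_cand is a set of candidate inputs
x_cand <- matrix(runif(500), ncol = 1)

# expected improvement (for minimizing the response)
fit <- predict(fit, x_cand, lite = TRUE, EI = TRUE, cores = 2)
x_cand[which.max(fit$EI), ]                     # candidate with the largest EI

# entropy relative to an illustrative limit state at 0.5
fit <- predict(fit, x_cand, lite = TRUE, entropy_limit = 0.5)
x_cand[which.max(fit$entropy), ]                # candidate with the largest entropy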
References

Gramacy, R. B. (2020). *Surrogates: Gaussian Process Modeling, Design, and
Optimization for the Applied Sciences.* Chapman Hall/CRC, Boca Raton, Florida.

Sauer, A. (2023). Deep Gaussian process surrogates for computer experiments.
*Ph.D. Dissertation, Department of Statistics, Virginia Polytechnic Institute
and State University.* http://hdl.handle.net/10919/114845
Booth, A. S. (2025). Deep Gaussian processes with gradients. arXiv:2512.18066
Sauer, A., Gramacy, R. B., & Higdon, D. (2023). Active learning for deep
Gaussian process surrogates. *Technometrics, 65,* 4-18. arXiv:2012.08015
Sauer, A., Cooper, A., & Gramacy, R. B. (2023). Vecchia-approximated deep Gaussian
processes for computer experiments.
*Journal of Computational and Graphical Statistics, 32*(3), 824-837. arXiv:2204.02904
Gramacy, R. B., Sauer, A. & Wycoff, N. (2022). Triangulation candidates for Bayesian
optimization. *Advances in Neural Information Processing Systems (NeurIPS), 35,*
35933-35945. arXiv:2112.07457
Booth, A., Renganathan, S. A. & Gramacy, R. B. (2025). Contour location for
reliability in airfoil simulation experiments using deep Gaussian
processes. *Annals of Applied Statistics, 19*(1), 191-211. arXiv:2308.04420
Barnett, S., Beesley, L. J., Booth, A. S., Gramacy, R. B., & Osthus, D. (2025).
Monotonic warpings for additive and deep Gaussian processes.
*Statistics and Computing, 35*(3), 65. arXiv:2408.01540
Examples

# See ?fit_one_layer, ?fit_two_layer, or ?fit_three_layer
# for examples
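The following is a minimal end-to-end sketch (not taken from the package examples), assuming the fit_one_layer and trim interfaces referenced above; the toy function and MCMC settings are illustrative only:

library(deepgp)

# toy 1-d training data (illustrative)
x <- seq(0, 1, length = 10)
y <- sin(2 * pi * x) + rnorm(10, sd = 0.05)

fit <- fit_one_layer(x, y, nmcmc = 2000)   # fit a one-layer GP
fit <- trim(fit, 1000, 2)                  # remove burn-in, thin remaining samples
x_new <- seq(0, 1, length = 100)
fit <- predict(fit, x_new, lite = TRUE)    # posterior mean and point-wise variance

plot(fit)                                  # plot method shows mean and intervals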