GPpenalty (version 1.0.1)

GPpenalty-package: GPpenalty

Description

Implements maximum likelihood estimation for Gaussian processes, supporting both isotropic and separable models with predictive capabilities. Includes penalized likelihood estimation, with cross-validation guided by the decorrelated prediction error (DPE) metric. The DPE metric, motivated by the Mahalanobis distance, serves as an evaluation criterion that accounts for predictive uncertainty in tuning parameter selection. Designed specifically for small datasets.
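As a rough illustration of the idea behind DPE (not necessarily the package's exact implementation), a Mahalanobis-style decorrelated error whitens the prediction residuals by the posterior covariance before averaging. The name dpe_sketch below is hypothetical:

```r
# Hypothetical sketch of a Mahalanobis-style decorrelated prediction error.
# The package's dpe() may differ in details; this only illustrates the idea.
dpe_sketch <- function(y_test, mup, Sigmap) {
  resid <- y_test - mup
  L <- chol(Sigmap)                            # Sigmap = t(L) %*% L
  z <- backsolve(L, resid, transpose = TRUE)   # decorrelated residuals
  mean(z^2)                                    # Mahalanobis distance / n
}
```

With an identity posterior covariance this reduces to the ordinary mean squared error; correlated predictions are down-weighted according to their shared uncertainty.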

Functions

  • mle_gp: Computes maximum likelihood estimates for the lengthscale, scale, mu, and nugget (g) parameters using optim; selected parameters can optionally be fixed or set to zero.

  • predict_gp: Computes the posterior mean and covariance matrix for a given set of input locations based on a fitted model.

  • gp_cv: Performs cross-validation to select an optimal tuning parameter for penalized MLE of the lengthscale parameter in Gaussian processes.

  • mle_penalty: Computes penalized maximum likelihood estimates for the lengthscale parameter using optim.

  • score: Calculates a predictive score from the held-out responses and the posterior mean and covariance. Higher score values indicate better fits.

  • dpe: Calculates a decorrelated prediction error value. Lower dpe values indicate better fits.

  • kernel: Computes the squared exponential kernel, defined as \(k = \exp(-\theta (x - x')^2) + g\), where \(\theta\) is the lengthscale parameter and \(g\) is a jitter term. Both isotropic and separable kernels are supported.
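Following the formula above literally, the kernel matrix can be sketched in a few lines of R. The name sqexp_kernel is hypothetical (it is not the package's kernel()):

```r
# Hypothetical sketch of the squared exponential kernel defined above:
# k = exp(-theta * (x - x')^2) + g.  Note: the formula as written adds the
# jitter g to every entry; many GP implementations instead add g only to
# the diagonal (as a nugget).
sqexp_kernel <- function(x1, x2 = x1, theta = 1, g = 0) {
  D2 <- outer(x1, x2, function(a, b) (a - b)^2)  # pairwise squared distances
  exp(-theta * D2) + g
}
```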

Examples

# \donttest{
### define function ###
f_x <- function(x) {
  return(sin(2*pi*x) + x^2)
}

### x and y ###
x <- runif(8, min=0, max=1)
y <- f_x(x)
x.test <- runif(100, min=0, max=1)
y.test <- f_x(x.test)

### no penalization ###
# fit
fit <- mle_gp(y, x)
# prediction
pred <- predict_gp(fit, x.test)


# obtain kernel function
cov_function <- kernel(x1=x, theta=fit$theta)


# evaluate the predictive performance with score
score_value <- score(y.test, pred$mup, pred$Sigmap)

### penalization ###
# leave-one-out cross validation
loocv.lambda <- gp_cv(y, x)
# fit
fit.loocv <- mle_penalty(loocv.lambda)
# prediction
pred.loocv <- predict_gp(fit.loocv, x.test)

# k-fold cross validation with the dpe metric
kfold.dpe <- gp_cv(y, x, k=4)
# fit
fit.kfold.dpe <- mle_penalty(kfold.dpe)
# prediction
pred.kfold.dpe <- predict_gp(fit.kfold.dpe, x.test)

# k-fold cross validation with the mse metric
kfold.mse <- gp_cv(y, x, k=4, metric="mse")
# fit
fit.kfold.mse <- mle_penalty(kfold.mse)
# prediction
pred.kfold.mse <- predict_gp(fit.kfold.mse, x.test)
# }


