Bayesian analysis is used here to answer the question: "when looking at resampling results, are the differences between models 'real'?" To answer this, a model can be created where the outcome is the resampling statistics (e.g. accuracy or RMSE). These values are explained by the model types. In doing this, we can get parameter estimates for each model's effect on performance and make statistical (and practical) comparisons between models.
perf_mod(object, ...)

# S3 method for rset
perf_mod(object, transform = no_trans, hetero_var = FALSE, ...)

# S3 method for vfold_cv
perf_mod(object, transform = no_trans, hetero_var = FALSE, ...)

# S3 method for resamples
perf_mod(object, transform = no_trans, hetero_var = FALSE,
  metric = object$metrics[1], ...)

# S3 method for data.frame
perf_mod(object, transform = no_trans, hetero_var = FALSE, ...)
object: A data frame or an rset object (such as rsample::vfold_cv()) containing the id column(s) and at least two numeric columns of model performance statistics (e.g. accuracy). Additionally, an object from caret::resamples can be used.
...: Additional arguments to pass to rstanarm::stan_glmer(), such as verbose, prior, seed, family, etc.
transform: A named list of transformation and inverse transformation functions. See logit_trans() as an example.
hetero_var: A logical; if TRUE, different variances are estimated for each model group. Otherwise, the same variance is used for each group. Estimating heterogeneous variances may slow or prevent convergence.
metric: A single character value for the statistic from the resamples object that should be analyzed.
An object of class perf_mod.
These functions can be used to process and analyze matched resampling statistics from different models using a Bayesian generalized linear model with effects for the model and the resamples.
By default, a generalized linear model with Gaussian error and an identity link is fit to the data and has terms for the predictive model grouping variable. In this way, the performance metrics can be compared between models.
Random effect terms are also used. For most resampling methods (except repeated V-fold cross-validation), a simple random intercept model is used with an exchangeable (i.e. compound-symmetric) variance structure. In the case of repeated cross-validation, two random intercept terms are used: one for the repeat and another for the fold within repeat. These also have exchangeable correlation structures.
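As an illustration, the fitted model is conceptually similar to an rstanarm::stan_glmer() call of the following form. This is a sketch, not the package's internal code; the data frame name and column names (statistic, model, id) are hypothetical placeholders for resampling results in long format:

```r
library(rstanarm)

# Hypothetical long-format data: one row per model per resample, with the
# performance value in `statistic`, the model label in `model`, and the
# resample identifier in `id`.
fit <- stan_glmer(
  statistic ~ model + (1 | id),  # fixed effect for model, random intercept per resample
  data = resample_stats,
  family = gaussian()            # the default Gaussian/identity specification
)
```

For repeated cross-validation, the random-effects part would instead use nested intercepts, e.g. `(1 | id) + (1 | id2:id)` for the repeat and the fold within repeat.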
The above model specification assumes that the variance in the performance metrics is the same across models. However, this is unlikely to be true in some cases. For example, for simple binomial accuracy, it is well known that the variance is highest when the accuracy is near 50 percent. When the argument hetero_var = TRUE, the variance structure uses random intercepts for each model term. This may produce more realistic posterior distributions but may take more time to converge.
Also, as shown in the package vignettes, the Gaussian assumption may be unrealistic. In this case, there are at least two approaches that can be used. First, the outcome statistics can be transformed prior to fitting the model. For example, for accuracy, the logit transformation can be used to convert the outcome values to the real line, and a model is fit to these data. Once the posterior distributions are computed, the inverse transformation can be used to put them back into the original units. The transform argument can be used to do this.
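For example, accuracy statistics could be modeled on the logit scale by passing the logit_trans object that the package exports (the `folds` object here is a hypothetical rset of resampled accuracy values):

```r
library(tidyposterior)

# `folds` is a hypothetical rset object containing accuracy columns for
# each model; the logit transform maps (0, 1) accuracies to the real line
# before the Bayesian model is fit, and posteriors are back-transformed.
acc_mod <- perf_mod(folds, transform = logit_trans, seed = 42)
```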
The second approach would be to use a different error distribution from the exponential family. For RMSE values, the Gamma distribution may produce better results at the expense of model computational complexity. This can be achieved by passing the family argument to perf_mod() as one might with the glm() function.
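For instance, resampled RMSE values might be modeled with Gamma errors like this (again, `folds` is a hypothetical rset of RMSE results; the log link is one reasonable choice for strictly positive values, not the package default):

```r
library(tidyposterior)

# Gamma errors for strictly positive RMSE statistics; the family object is
# passed through to rstanarm::stan_glmer() just as it would be for glm().
rmse_mod <- perf_mod(folds, family = Gamma(link = "log"), seed = 42)
```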
# NOT RUN {
# Example objects from the "Getting Started" vignette at
# https://topepo.github.io/tidyposterior/articles/Getting_Started.html
file <- system.file("examples", "roc_model.RData", package = "tidyposterior")
load(file)
roc_model
# Summary method shows the underlying `stan` model
summary(roc_model)
# }