
compare_performance() computes indices of model performance for different models at once and hence allows comparison of indices across models. See the documentation for your object's class for details on the indices that are computed.
Usage

compare_performance(..., metrics = "all", verbose = TRUE)

model_performance(model, ...)
Arguments

...
Arguments passed to or from other methods. For compare_performance(), one or multiple model objects (also of different classes).

metrics
Can be "all" or a character vector of metrics to be computed. See the documentation of the object's class for details.

verbose
Toggle off warnings.

model
A statistical model.
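
For instance, the comparison can be restricted to a subset of indices via metrics. A minimal sketch (the metric names "AIC", "BIC" and "RMSE" are assumed to be valid for these model classes; the models fit_small and fit_large are hypothetical):

library(performance)

# Compare two linear models on a few selected indices only
fit_small <- lm(mpg ~ wt, data = mtcars)
fit_large <- lm(mpg ~ wt + cyl, data = mtcars)
compare_performance(fit_small, fit_large, metrics = c("AIC", "BIC", "RMSE"))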
Value

For model_performance(), a data frame (with one row) and one column per "index" (see metrics). For compare_performance(), the same data frame with one row per model.
If all models were fit from the same data, compare_performance() returns an additional column named BF, which shows the Bayes factor (see bayesfactor_models()) for each model against the denominator model. The first model is used as the denominator model, and its Bayes factor is set to NA to indicate the reference model.
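
To illustrate the BF column, a minimal sketch (bayesfactor_models() comes from the bayestestR package, which is assumed to be installed; the model names fit1 and fit2 are hypothetical):

library(performance)

# Two models fit from the same data, so compare_performance() adds a BF column
fit1 <- lm(Sepal.Length ~ Species, data = iris)
fit2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)

result <- compare_performance(fit1, fit2)
result$BF  # NA for fit1 (the reference model), Bayes factor of fit2 against fit1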
Examples

# NOT RUN {
library(lme4)
library(performance)

# Performance of a single linear model
m1 <- lm(mpg ~ wt + cyl, data = mtcars)
model_performance(m1)

# Compare models of different classes (linear, logistic, mixed)
m2 <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
m3 <- lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris)
compare_performance(m1, m2, m3)

# All models fit from the same data, so the output includes a BF column
data(iris)
lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)
compare_performance(lm1, lm2, lm3)
# }