
model_performance() computes indices of model performance for a single model; see the documentation for your object's class for the indices that are available. compare_performance() computes indices of model performance for different models at once and hence allows comparison of indices across models.
compare_performance(..., metrics = "all", rank = FALSE, verbose = TRUE)

model_performance(model, ...)

performance(model, ...)
...
Arguments passed to or from other methods; for compare_performance(), one or multiple model objects (also of different classes).

metrics
Can be "all" or a character vector of metrics to be computed. See the related documentation of the object's class for details.

rank
Logical, if TRUE, models are ranked according to "best overall model performance". See 'Details'.

verbose
Toggle off warnings.

model
A statistical model.
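As a minimal sketch of how metrics and rank can be used (the metric names "AIC", "BIC" and "RMSE" are assumed to be available for plain linear models; the exact set of indices depends on the model class):

library(performance)

lm_a <- lm(Sepal.Length ~ Species, data = iris)
lm_b <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)

# restrict the comparison to a subset of indices
compare_performance(lm_a, lm_b, metrics = c("AIC", "BIC", "RMSE"))

# additionally rank the models by overall performance (see 'Details')
compare_performance(lm_a, lm_b, rank = TRUE)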
For model_performance(), a data frame with one row and one column per "index" (see metrics). For compare_performance(), the same data frame with one row per model.
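A small illustration of the return shape; the index names shown in the comments are typical for linear models and may differ for other model classes:

library(performance)

m <- lm(mpg ~ wt + cyl, data = mtcars)

perf <- model_performance(m)  # one row, one column per index
names(perf)                   # e.g. "AIC", "BIC", "R2", "RMSE", ...
perf$AIC                      # indices can be extracted like data frame columns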
If all models were fit from the same data, compare_performance() returns an additional column named BF, which shows the Bayes factor (see bayesfactor_models) for each model against the denominator model. The first model is used as the denominator model, and its Bayes factor is set to NA to indicate the reference model.
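A brief sketch of this behaviour, assuming the bayestestR package (which provides bayesfactor_models()) is installed so that the Bayes factors can be computed:

library(performance)

lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)

# all models were fit from the same data, so a BF column is included
cp <- compare_performance(lm1, lm2, lm3)
cp$BF  # NA for lm1 (the reference model), Bayes factors against lm1 otherwise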
When rank = TRUE, a new column Performance_Score is returned. This score ranges from 0% to 100%, with higher values indicating better model performance. The calculation is based on normalizing all indices (i.e. rescaling them to a range from 0 to 1) and taking the mean of all indices for each model. This is a rather quick heuristic, but it might be helpful as an exploratory index.

In particular, when models are of different types (e.g. mixed models, classical linear models, logistic regression, ...), not all indices can be computed for every model. If an index cannot be calculated for a specific model type, that model gets an NA value for the index. All indices that have any NAs are excluded from the calculation of the performance score.
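The following is a rough, hypothetical re-implementation of this heuristic, for illustration only; the internal code of compare_performance() may differ, in particular in how "smaller is better" indices such as AIC or RMSE are reversed before averaging:

library(performance)

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + cyl, data = mtcars)
m3 <- lm(mpg ~ wt + cyl + hp, data = mtcars)
cp <- compare_performance(m1, m2, m3)

rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))

# keep only numeric indices that were computed for every model
idx <- Filter(function(x) is.numeric(x) && !anyNA(x), cp)

# rescale each index to 0-1 and average per model
# (a complete version would also flip indices where smaller is better)
rowMeans(sapply(idx, rescale01))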
There is a plot() method for compare_performance(), which creates a "spiderweb" plot, where the different indices are normalized and larger values indicate better model performance. Hence, points closer to the center indicate worse fit indices (see the online documentation for more details).
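A minimal plotting sketch, assuming the see package is installed (the plot() methods for objects from the performance package are provided there):

library(performance)
library(see)

lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)

# spiderweb plot of normalized indices; values further from the center
# indicate better performance on the corresponding index
plot(compare_performance(lm1, lm2, lm3, rank = TRUE))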
# NOT RUN {
library(lme4)

# performance indices for a single model
m1 <- lm(mpg ~ wt + cyl, data = mtcars)
model_performance(m1)

# compare models of different classes
m2 <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
m3 <- lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris)
compare_performance(m1, m2, m3)

# compare nested linear models, optionally ranked by overall performance
data(iris)
lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)
compare_performance(lm1, lm2, lm3)
compare_performance(lm1, lm2, lm3, rank = TRUE)
# }