This method offers a variety of visualisations to compare the implemented calibration models.
Usage

visualize_calibratR(calibrate_object, visualize_models = FALSE,
  plot_distributions = FALSE, rd_partitions = FALSE,
  training_set_calibrated = FALSE)
Arguments

calibrate_object: the list component calibration_models from the calibrate method.

visualize_models: returns the list components plot_calibration_models and plot_single_models.

plot_distributions: returns a density distribution plot of the calibrated predictions after CV (external) or without CV (internal).

rd_partitions: returns a reliability diagram for each model.

training_set_calibrated: returns a list of ggplots. Each plot shows the predictions of the training set as calibrated by the respective calibration model. If the list object predictions in the calibrate_object is empty, training_set_calibrated is returned as NULL.
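For orientation, a minimal end-to-end sketch follows. The simulated scores and the calibrate() argument values (evaluate_CV_error, folds, n_seeds) are illustrative assumptions, not part of this page; adjust them to your data and installed CalibratR version.

library(CalibratR)

## Illustrative assumption: simulate uncalibrated ML scores for two classes
set.seed(42)
actual    <- c(rep(0, 100), rep(1, 100))              # true class labels
predicted <- c(rnorm(100, mean = -1),                 # raw scores, class 0
               rnorm(100, mean = 1))                  # raw scores, class 1

## Build the calibration models with the calibrate method
## (CV must be run for rd_plot, calibration_error and
## discrimination_error to be returned; values here are assumptions)
calibration_model <- calibrate(actual, predicted,
                               evaluate_CV_error = TRUE,
                               folds = 10, n_seeds = 5)

## Request all optional plot groups
visualisation <- visualize_calibratR(calibration_model,
                                     visualize_models = TRUE,
                                     plot_distributions = TRUE,
                                     rd_partitions = TRUE,
                                     training_set_calibrated = TRUE)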
Value

An object of class list, with the following components:
returns a histogram of the original ML score distribution.
returns a list of density distribution plots for each calibration method, for the original scores, and for the two input-preprocessing methods, scaling and transforming. Each plot visualises the density distribution of the calibrated predictions of the training set. In this case, training and test set values are identical, so interpret these plots with caution.
returns a list of density distribution plots for each calibration method, for the original scores, and for the two input-preprocessing methods, scaling and transforming. Each plot visualises the density distribution of the calibrated predictions that were returned during cross-validation (CV). If more than one repetition of CV was performed, run number 1 is evaluated.
plot_calibration_models: maps the original ML scores to their calibrated prediction estimates for each model. This enables easy model comparison over the range of ML scores. See also compare_models_visual.
plot_single_models: returns a list of ggplots for each calibration model, also mapping the original ML scores to their calibrated predictions. Significance values are indicated. See also plot_model.
rd_plot: returns a list of reliability diagrams for each of the implemented calibration models and the two input-preprocessing methods "scaled" and "transformed". The returned plot visualises the calibrated predictions that were returned for the test set during each of the n runs of the n-times repeated CV. Each grey line represents one of the n runs; the blue line represents the median of all calibrated bin predictions. Insignificant bin estimates are indicated with "ns". If no CV was performed during calibration model building with the calibrate method, rd_plot is returned as NULL.
calibration_error: returns a list of boxplots for the calibration error metrics ECE, MCE, CLE and RMSE. The n values for each model represent the error values obtained during the n-times repeated CV. If no CV was performed during calibration model building with the calibrate method, calibration_error is returned as NULL.
discrimination_error: returns a list of boxplots for the discrimination metrics AUC, sensitivity and specificity. The n values for each model represent the error values obtained during the n-times repeated CV. If no CV was performed during calibration model building with the calibrate method, discrimination_error is returned as NULL.
cle_class_specific_error: if no CV was performed during calibration model building with the calibrate method, cle_class_specific_error is returned as NULL.
training_set_calibrated: returns a list of ggplots. Each plot shows the predictions of the training set as calibrated by the respective calibration model. If the list object predictions in the calibrate_object is empty, training_set_calibrated is returned as NULL.
plots the returned conditional probability p(x|Class) values of the GUESS_1 model.

plots the returned conditional probability p(x|Class) values of the GUESS_2 model.
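The named components above can be inspected individually. A brief sketch, assuming a visualisation list built with CV as in the sketch after the Arguments section; the [[1]] indexing assumes the list-of-plots structure described above:

## Compare all calibration models over the range of ML scores
print(visualisation$plot_calibration_models)

## Components that depend on CV are NULL when no CV was performed
if (!is.null(visualisation$rd_plot)) {
  print(visualisation$rd_plot[[1]])            # reliability diagram, first model
}
if (!is.null(visualisation$calibration_error)) {
  print(visualisation$calibration_error[[1]])  # ECE/MCE/CLE/RMSE boxplots
}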
See Also

ggplot, geom_density, aes, scale_colour_manual, scale_fill_manual, labs, geom_point, geom_hline, theme, element_text (all from ggplot2); melt (from reshape2).
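Because the returned components are ggplot objects, the ggplot2 helpers listed above can restyle them after the fact. A minimal sketch; the titles and the 0.5 reference line are arbitrary choices, and visualisation is assumed to come from the earlier sketch:

library(ggplot2)

p <- visualisation$plot_calibration_models +
  labs(title = "Calibrated estimates over the ML score range",
       x = "original ML score", y = "calibrated prediction") +
  geom_hline(yintercept = 0.5, linetype = "dashed") +  # illustrative reference line
  theme(plot.title = element_text(face = "bold"))
print(p)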
Examples

# NOT RUN {
## Loading dataset in environment
data(example)
calibration_model <- example$calibration_model
visualisation <- visualize_calibratR(calibration_model, plot_distributions = FALSE,
                                     rd_partitions = FALSE, training_set_calibrated = FALSE)
# }