Usage

  generateCalibrationData(obj, breaks = "Sturges", groups = NULL, task.id = NULL)

Arguments

  obj [(list of) Prediction | (list of) ResampleResult | BenchmarkResult]
    A single prediction object, a list of them, a single resample result, a list of them, or a benchmark result. If you pass a list, probably produced by different learners you want to compare, name the list with the names you want to appear in the plots, e.g. the learner short names or ids.

  breaks [character(1) | numeric]
    If character(1), the algorithm to use for generating the probability bins; see hist for details. If numeric, the cut points for the bins. Default is "Sturges".

  groups [integer(1)]
    The number of bins to construct. If specified, breaks is ignored. Default is NULL.

  task.id [character(1)]
    The selected task in the BenchmarkResult to generate plots for; ignored otherwise. Default is the first task.

Value

  [list].

See also

  plotCalibration

  Other generate_plot_data: generateCritDifferencesData, generateFilterValuesData, generateFunctionalANOVAData, generateLearningCurveData, generatePartialDependenceData, generateThreshVsPerfData, getFilterValues
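
Examples

A minimal usage sketch, assuming the mlr package (with rpart installed for the example learner) and its built-in sonar.task; any learner that predicts class probabilities, and any classification task, can be substituted:

  library(mlr)

  # Illustrative choices: classif.rpart and sonar.task stand in for any
  # probability-predicting learner and classification task.
  lrn = makeLearner("classif.rpart", predict.type = "prob")
  mod = train(lrn, sonar.task)
  pred = predict(mod, task = sonar.task)

  # Bin the predicted probabilities and draw the calibration plot.
  cal = generateCalibrationData(pred, breaks = "Sturges")
  plotCalibration(cal)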