mlr (version 2.10)

generateThreshVsPerfData: Generate threshold vs. performance(s) for 2-class classification.

Description

Generates data on threshold vs. performance(s) for 2-class classification that can be used for plotting.

Usage

generateThreshVsPerfData(obj, measures, gridsize = 100L, aggregate = TRUE,
  task.id = NULL)

Arguments

obj
[(list of) Prediction | (list of) ResampleResult | BenchmarkResult] A single prediction object, a list of them, a single resample result, a list of them, or a benchmark result. If you pass a list, e.g. of predictions produced by different learners you want to compare, name its elements with the labels you want to see in the plots, typically learner short names or ids.
measures
[Measure | list of Measure] Performance measure(s) to evaluate. Default is the default measure for the task; see getDefaultMeasure.
gridsize
[integer(1)] Grid resolution for x-axis (threshold). Default is 100.
aggregate
[logical(1)] Whether to aggregate ResamplePredictions or to plot the performance of each iteration separately. Default is TRUE.
task.id
[character(1)] Id of the task in a BenchmarkResult to generate plot data for; ignored otherwise. Default is the first task.

Value

[ThreshVsPerfData]. A named list containing the measured performance across the threshold grid, the measures, and whether the performance estimates were aggregated (only applicable for (list of) ResampleResults).

See Also

Other generate_plot_data: generateCalibrationData, generateCritDifferencesData, generateFeatureImportanceData, generateFilterValuesData, generateFunctionalANOVAData, generateLearningCurveData, generatePartialDependenceData, getFilterValues, plotFilterValues

Other thresh_vs_perf: plotROCCurves, plotThreshVsPerfGGVIS, plotThreshVsPerf
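
Examples

A minimal usage sketch, assuming the sonar.task example task that ships with mlr and a learner fitted with predict.type = "prob" (probabilities are required so that the decision threshold can be varied):

```r
library(mlr)

# Train a probability-predicting learner on the bundled 2-class sonar task
lrn <- makeLearner("classif.lda", predict.type = "prob")
mod <- train(lrn, sonar.task)
pred <- predict(mod, sonar.task)

# Evaluate false and true positive rate across a grid of thresholds
d <- generateThreshVsPerfData(pred, measures = list(fpr, tpr), gridsize = 100L)

# The result can be passed to the plotting functions from the same family
plotThreshVsPerf(d)
```

Passing a named list of predictions instead of a single one (e.g. list(lda = pred1, rpart = pred2)) yields one curve per list element, labeled with the list names.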