Creates a scatter plot, where each line refers to a task. On that line the aggregated scores for all learners are plotted, for that task. Optionally, you can apply a rank transformation or just use one of ggplot2's transformations like scale_x_log10.
Usage:

plotBMRSummary(bmr, measure = NULL, trafo = "none", order.tsks = NULL,
  pointsize = 4L, jitter = 0.05, pretty.names = TRUE)

Arguments:

bmr [BenchmarkResult]
  Benchmark result.

measure [Measure]
  Performance measure.
  Default is the first measure used in the benchmark experiment.

trafo [character(1)]
  Currently either "none" or "rank", the latter performing a rank transformation
  (with average handling of ties) of the scores per task.
  NB: You can always add scale_x_log10 to the result to put scores on a log scale
  (see the example below).
  Default is "none".

order.tsks [character(n.tasks)]
  Character vector with task.ids in new order.

pointsize [numeric(1)]
  Point size for ggplot2 geom_point for data points.
  Default is 4.

jitter [numeric(1)]
  Small vertical jitter to deal with overplotting in case of equal scores.
  Default is 0.05.

pretty.names [logical(1)]
  Whether to use the short name of the learner instead of its ID in labels.
  Default is TRUE.
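Examples:

A minimal usage sketch, assuming the mlr package with its bundled example tasks iris.task and sonar.task; the learner choices and resampling settings below are illustrative, not prescribed by this function:

library(mlr)

# Benchmark two learners on two example tasks with 2-fold CV.
lrns = list(makeLearner("classif.lda"), makeLearner("classif.rpart"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 2L)
bmr = benchmark(lrns, tasks, rdesc, measures = list(mmce))

# Scatter plot of aggregated scores, one line per task.
plotBMRSummary(bmr)

# Rank-transform scores per task and increase the vertical jitter.
plotBMRSummary(bmr, trafo = "rank", jitter = 0.1)

# The function returns a ggplot object, so further scales can be added,
# e.g. a log scale for the x axis (scores must be positive for this):
plotBMRSummary(bmr) + ggplot2::scale_x_log10()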
See Also:

Other benchmark: BenchmarkResult,
benchmark,
convertBMRToRankMatrix,
friedmanPostHocTestBMR,
friedmanTestBMR,
generateCritDifferencesData,
getBMRAggrPerformances,
getBMRFeatSelResults,
getBMRFilteredFeatures,
getBMRLearnerIds,
getBMRLearnerShortNames,
getBMRLearners,
getBMRMeasureIds,
getBMRMeasures,
getBMRModels,
getBMRPerformances,
getBMRPredictions,
getBMRTaskIds,
getBMRTuneResults,
plotBMRBoxplots,
plotBMRRanksAsBarChart,
plotCritDifferences

Other plot: plotBMRBoxplots,
plotBMRRanksAsBarChart,
plotCalibration,
plotCritDifferences,
plotFilterValuesGGVIS,
plotFilterValues,
plotLearningCurveGGVIS,
plotLearningCurve,
plotPartialDependenceGGVIS,
plotPartialDependence,
plotROCCurves,
plotThreshVsPerfGGVIS,
plotThreshVsPerf