Usage

generateLearningCurveData(learners, task, resampling = NULL,
  percs = seq(0.1, 1, by = 0.1), measures, stratify = FALSE,
  show.info = getMlrOption("show.info"))

Arguments
learners [(list of) Learner]
  Learning algorithms which should be compared.

task [Task]
  The task.

resampling [ResampleDesc | ResampleInstance]
  Resampling strategy to evaluate the performance measure.
  If no strategy is given, a default "Holdout" is performed.

percs [numeric]
  Vector of percentages to be drawn from the training split.
  These values represent the x-axis.
  Internally, makeDownsampleWrapper is used in combination with benchmark,
  so for each percentage a different set of observations is drawn, resulting
  in noisy performance measures, as the quality of the sample can differ.
  (See the sketch after this argument list.)

measures [(list of) Measure]
  Performance measures to generate learning curves for, representing the y-axis.

stratify [logical(1)]
  Only for classification: should the downsampled data be stratified
  according to the target classes?

show.info [logical(1)]
  Print verbose output on console? The default is set via configureMlr.
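To make the percs mechanism concrete, the sketch below shows roughly what a single point on the curve amounts to: the learner is wrapped with makeDownsampleWrapper at a fixed percentage and evaluated via benchmark. This illustrates the idea only, not the exact internals of generateLearningCurveData.

library(mlr)
# One point on the curve, roughly: train on a 30% downsample of each
# training split and evaluate under the chosen resampling strategy.
# (Sketch only; generateLearningCurveData repeats this per percentage.)
lrn = makeDownsampleWrapper(makeLearner("classif.rpart"), dw.perc = 0.3)
res = benchmark(lrn, sonar.task,
  resamplings = makeResampleDesc("Holdout"),
  measures = list(mmce), show.info = FALSE)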
Value

[LearningCurveData]. A list containing:

task [Task]
  The task.

measures [(list of) Measure]
  Performance measures.

data [data.frame] with columns:
  learner
    Names of learners.
  percentage
    Percentages drawn from the training split.
  One column for each Measure passed to generateLearningCurveData.
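The components of the returned object can be accessed directly, which is useful for custom post-processing or plotting. A minimal sketch, assuming lc holds the result of a generateLearningCurveData call:

lc$task       # the Task the curves were generated for
lc$measures   # the performance measures
head(lc$data) # data.frame: learner, percentage, one column per measure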
See Also

Other generate_plot_data: generateCalibrationData,
generateCritDifferencesData, generateFilterValuesData,
generatePartialPredictionData, generateThreshVsPerfData,
getFilterValues

Other learning_curve: plotLearningCurve, plotLearningCurveGGVIS
Examples

r = generateLearningCurveData(list("classif.rpart", "classif.knn"),
  task = sonar.task, percs = seq(0.2, 1, by = 0.2),
  measures = list(tp, fp, tn, fn),
  resampling = makeResampleDesc(method = "Subsample", iters = 5),
  show.info = FALSE)
plotLearningCurve(r)
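The stratify argument only matters for classification. A minimal variant of the example above, stratifying the downsampled data by class (one learner, default holdout resampling, mean misclassification error as the measure):

# Stratified downsampling: each drawn subset keeps the class proportions.
r2 = generateLearningCurveData(list("classif.rpart"), task = sonar.task,
  measures = list(mmce), stratify = TRUE, show.info = FALSE)
plotLearningCurve(r2)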