performance

Measure performance of prediction.

Measures the quality of a prediction with respect to one or more performance measures.

Usage
performance(pred, measures, task = NULL, model = NULL, feats = NULL)
Arguments
pred

(Prediction) Prediction object.

measures

(Measure | list of Measure) Performance measure(s) to evaluate. Default is the default measure for the task; see getDefaultMeasure.

task

(Task) Learning task. May be required by the performance measure; usually not needed except for clustering or survival.

model

(WrappedModel) Model built on the training data. May be required by the performance measure; usually not needed except for survival.

feats

(data.frame) Features of the predicted data, usually not needed except for clustering. If the prediction was generated from a task, you can pass the task instead and the features are extracted from it (see the sketch below).
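
For clustering, the performance measure needs the feature data, which can be supplied either via feats or by passing the task. A minimal sketch, assuming the cluster.kmeans learner and the dunn measure are available (dunn additionally requires the clValid package):

cluster.task = makeClusterTask(data = iris[, -5])
cluster.lrn = makeLearner("cluster.kmeans")
cluster.mod = train(cluster.lrn, cluster.task)
cluster.pred = predict(cluster.mod, task = cluster.task)
performance(cluster.pred, measures = dunn, task = cluster.task)
# equivalently, pass the feature data directly
performance(cluster.pred, measures = dunn, feats = iris[, -5])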

Value

(named numeric). Performance value(s), named by measure(s).
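
Because the result is a named numeric vector, individual values can be extracted by measure id. A minimal sketch, using pred and the measures from the Examples below:

res = performance(pred, measures = list(mmce, acc))
res["acc"]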

See Also

Other performance: ConfusionMatrix, calculateConfusionMatrix, calculateROCMeasures, estimateRelativeOverfitting, makeCostMeasure, makeCustomResampledMeasure, makeMeasure, measures, setAggregation, setMeasurePars

Aliases
  • performance
Examples
training.set = seq(1, nrow(iris), by = 2)
test.set = seq(2, nrow(iris), by = 2)

task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.lda")
mod = train(lrn, task, subset = training.set)
pred = predict(mod, newdata = iris[test.set, ])
performance(pred, measures = mmce)

# Compute multiple performance measures at once
ms = list("mmce" = mmce, "acc" = acc, "timetrain" = timetrain)
# timetrain requires the fitted model, so pass it along with the task
performance(pred, measures = ms, task = task, model = mod)
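
A minimal additional sketch: if measures is omitted, the default measure for the task type is used (see getDefaultMeasure); for classification this is mmce, so the call below should be equivalent to performance(pred, measures = mmce).

performance(pred)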
Documentation reproduced from package mlr, version 2.13, License: BSD_2_clause + file LICENSE
