llama (version 0.9.2)

plot: Plot convenience functions to visualise selectors

Description

Functions to plot the performance of selectors and compare them to others.

Usage

perfScatterPlot(metric, modelx, modely, datax, datay=datax,
    addCostsx=NULL, addCostsy=NULL, pargs=NULL, ...)

Arguments

metric

the metric used to evaluate the model. Can be one of misclassificationPenalties, parscores or successes.

modelx

the algorithm selection model to be plotted on the x axis. Can be either a model returned by one of the model-building functions or a function that returns predictions such as vbs or the predictor function of a trained model.

modely

the algorithm selection model to be plotted on the y axis. Can be either a model returned by one of the model-building functions or a function that returns predictions such as vbs or the predictor function of a trained model.

datax

the data used to evaluate modelx. Will be passed to the metric function.

datay

the data used to evaluate modely. Can be omitted if the same as for modelx. Will be passed to the metric function.

addCostsx

whether to add feature costs when evaluating modelx. You should not normally need to set this manually: the default of NULL lets LLAMA decide automatically, depending on the model, whether to add costs. Costs should always be added except for comparison models such as the single best and virtual best.

addCostsy

whether to add feature costs when evaluating modely. You should not normally need to set this manually: the default of NULL lets LLAMA decide automatically, depending on the model, whether to add costs. Costs should always be added except for comparison models such as the single best and virtual best.

pargs

any arguments to be passed to geom_point, for example additional aesthetics.

...

any additional arguments to be passed to the metric function, for example the penalisation factor for parscores (see the sketch after this list).
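A brief sketch of how pargs and the additional metric arguments might be used together. It assumes the model, folds and satsolvers objects from the Examples section below, and that the penalisation factor of parscores is passed via the factor argument (an assumption; check ?parscores):

library(llama)
library(ggplot2)

# colour points by the x-axis score via pargs and pass a penalisation
# factor through ... to parscores (the argument name factor is an assumption)
perfScatterPlot(parscores,
        model, singleBest,
        folds, satsolvers,
        pargs=aes(colour = scorex),
        factor=100)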

Value

A ggplot object.
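Since the return value is an ordinary ggplot object, it can be extended with further layers or written to disk. A minimal sketch, assuming the model, folds and satsolvers objects from the Examples below (the object name p and the output file name are illustrative):

library(ggplot2)

p = perfScatterPlot(parscores, model, singleBest, folds, satsolvers)
p = p + theme_bw() + ggtitle("J48 selector vs. single best")
ggsave("perf-scatter.pdf", plot=p, width=6, height=6)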

Details

perfScatterPlot creates a scatter plot that compares the performance of two algorithm selectors. For each instance in the data set, the performance of modelx is plotted on the x axis against the performance of modely on the y axis. In addition, a diagonal line is drawn to denote equal performance for both selectors.
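The comparison functions mentioned under the arguments can be passed directly in place of a trained model. A hedged sketch that contrasts the virtual best with the single best solver on the satsolvers data (axis labels are illustrative):

library(llama)
library(ggplot2)

data(satsolvers)
# vbs and singleBest are prediction functions and can be used instead of trained models
perfScatterPlot(parscores, vbs, singleBest, satsolvers) +
    scale_x_log10() + scale_y_log10() +
    xlab("virtual best") + ylab("single best")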

See Also

misclassificationPenalties, parscores, successes

Examples

if(Sys.getenv("RUN_EXPENSIVE") == "true") {
library(llama)
library(mlr)  # provides makeLearner()

# load the example data, create cross-validation folds and train a J48 selector
data(satsolvers)
folds = cvFolds(satsolvers)
model = classify(classifier=makeLearner("classif.J48"), data=folds)

# Simple plot to compare our selector to the single best in terms of PAR10 score
library(ggplot2)
perfScatterPlot(parscores,
        model, singleBest,
        folds, satsolvers) +
    scale_x_log10() + scale_y_log10() +
    xlab("J48") + ylab("single best")

# additional aesthetics for points
perfScatterPlot(parscores,
        model, singleBest,
        folds, satsolvers,
        pargs=aes(colour = scorex)) +
    scale_x_log10() + scale_y_log10() +
    xlab("J48") + ylab("single best")
}