lime v0.5.0


Local Interpretable Model-Agnostic Explanations

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.

Readme

lime

(Badges: Travis-CI build status, AppVeyor build status, CRAN release, CRAN downloads, coverage status)

There once was a package called lime,

Whose models were simply sublime,

It gave explanations for their variations,

one observation at a time.

lime-rick by Mara Averick


This is an R port of the Python lime package (https://github.com/marcotcr/lime) developed by the authors of the lime (Local Interpretable Model-agnostic Explanations) approach for black-box model explanations. All credit for the invention of the approach goes to the original developers.

The purpose of lime is to explain the predictions of black box classifiers. This means that for any given prediction and any given classifier, it is able to determine a small set of features in the original data that have driven the outcome of the prediction. To learn more about the methodology of lime, read the paper and visit the repository of the original implementation.

The lime package for R does not aim to be a line-by-line port of its Python counterpart. Instead, it takes the ideas laid out in the original code and implements them in an API that is idiomatic to R.

An example

Out of the box, lime supports a wide range of models, e.g. those created with caret, parsnip, and mlr. Support for other models is easy to add by providing predict_model and model_type methods for the model class in question, as sketched below.
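As a minimal sketch, support for a hypothetical model class my_model could look like the following. The generics predict_model() and model_type() are lime's documented extension points; the predict() call inside and its arguments are assumptions about how the wrapped model is used:

library(lime)

# lime needs to know whether the model classifies or regresses
model_type.my_model <- function(x, ...) {
  'classification'
}

# Must return a data.frame: class probabilities when type = 'prob',
# predicted labels when type = 'raw'. The predict() call below is an
# assumption about how 'my_model' objects make predictions.
predict_model.my_model <- function(x, newdata, type, ...) {
  pred <- predict(x, newdata = newdata, type = type)
  as.data.frame(pred)
}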

The following shows how a random forest model is trained on the iris data set and how lime is then used to explain a set of new observations:

library(caret)
library(lime)

# Split up the data set
iris_test <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab <- iris[[5]][-(1:5)]

# Create Random Forest model on iris data
model <- train(iris_train, iris_lab, method = 'rf')

# Create an explainer object
explainer <- lime(iris_train, model)

# Explain new observation
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# The output is provided in a consistent tabular format and includes the
# output from the model.
explanation
#> # A tibble: 10 x 13
#>    model_type case  label label_prob model_r2 model_intercept
#>    <chr>      <chr> <chr>      <dbl>    <dbl>           <dbl>
#>  1 classific… 1     seto…          1    0.680           0.120
#>  2 classific… 1     seto…          1    0.680           0.120
#>  3 classific… 2     seto…          1    0.675           0.125
#>  4 classific… 2     seto…          1    0.675           0.125
#>  5 classific… 3     seto…          1    0.682           0.122
#>  6 classific… 3     seto…          1    0.682           0.122
#>  7 classific… 4     seto…          1    0.667           0.128
#>  8 classific… 4     seto…          1    0.667           0.128
#>  9 classific… 5     seto…          1    0.678           0.121
#> 10 classific… 5     seto…          1    0.678           0.121
#> # … with 7 more variables: model_prediction <dbl>, feature <chr>,
#> #   feature_value <dbl>, feature_weight <dbl>, feature_desc <chr>,
#> #   data <list>, prediction <list>

# And can be visualised directly
plot_features(explanation)
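When many observations are explained at once, the condensed overview produced by plot_explanations() (another plotting function exported by lime, listed in the function index below) can be easier to digest:

# Condensed overview of all explanations in a single plot
plot_explanations(explanation)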

lime also supports explaining image and text models. For image explanations the relevant areas in an image can be highlighted:

explanation <- .load_image_example()

plot_image_explanation(explanation)

Here we see that the second most probable class is hardly credible, but stems from the model picking up on the waxy areas of the produce and interpreting them as a wax-like surface.
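The superpixel segmentation an image explanation is based on can be previewed up front with plot_superpixels(). A quick sketch; 'kitten.jpg' is a placeholder path, and the parameter values shown are merely illustrative:

# Preview how the image will be segmented before computing an explanation;
# n_superpixels and weight control the number and compactness of the segments
plot_superpixels('kitten.jpg', n_superpixels = 50, weight = 20)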

For text, the explanation can be shown by highlighting the important words. lime even includes a Shiny application for interactively exploring text models:

(screenshot: interactive text explainer)
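A minimal sketch of the text workflow, assuming model is an already trained text classifier and to_dtm() is a hypothetical preprocessing function that turns a character vector into the input format the model was trained on (the train_sentences data set ships with lime):

library(lime)
data(train_sentences, package = 'lime')

# 'model' and 'to_dtm' are assumed to exist, see the note above
explainer <- lime(train_sentences$text, model, preprocess = to_dtm)
explanation <- explain(train_sentences$text[1:2], explainer,
                       n_labels = 1, n_features = 3)

# Static plot highlighting the important words...
plot_text_explanations(explanation)

# ...or the bundled Shiny app for interactive exploration
interactive_text_explanations(explainer)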

Installation

lime is available on CRAN and can be installed using the standard approach:

install.packages('lime')

To get the development version, install from GitHub instead:

# install.packages('devtools')
devtools::install_github('thomasp85/lime')

Code of Conduct

Please note that the ‘lime’ project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Functions in lime

| Name | Description |
|------|-------------|
| default_tokenize | Default function to tokenize |
| plot_image_explanation | Display image explanations as superpixel areas |
| plot_features | Plot the features in an explanation |
| as_classifier | Indicate model type to lime |
| .load_text_example | Load an example text explanation |
| explain | Explain model predictions |
| interactive_text_explanations | Interactive explanations |
| stop_words_sentences | Stop words list |
| test_sentences | Sentence corpus - test part |
| model_support | Methods for extending lime's model support |
| plot_explanations | Plot a condensed overview of all explanations |
| lime-package | lime: Local Interpretable Model-Agnostic Explanations |
| .load_image_example | Load an example image explanation |
| plot_text_explanations | Plot text explanations |
| train_sentences | Sentence corpus - train part |
| lime | Create a model explanation function based on training data |
| plot_superpixels | Test super pixel segmentation |
| slic | Segment image into superpixels |

Vignettes of lime

Understanding_lime.Rmd


Details

| Field | Value |
|-------|-------|
| Type | Package |
| Date | 2019-06-13 |
| License | MIT + file LICENSE |
| URL | https://lime.data-imaginist.com |
| BugReports | https://github.com/thomasp85/lime/issues |
| Encoding | UTF-8 |
| LazyData | true |
| RoxygenNote | 6.1.1 |
| VignetteBuilder | knitr |
| LinkingTo | Rcpp, RcppEigen |
| NeedsCompilation | yes |
| Packaged | 2019-06-24 10:16:24 UTC; thomas |
| Repository | CRAN |
| Date/Publication | 2019-06-24 10:50:03 UTC |
