lime v0.5.1





Local Interpretable Model-Agnostic Explanations

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.




There once was a package called lime,

Whose models were simply sublime,

It gave explanations for their variations,

one observation at a time.

lime-rick by Mara Averick

This is an R port of the Python lime package, developed by the authors of the lime (Local Interpretable Model-agnostic Explanations) approach for black-box model explanations. All credit for the invention of the approach goes to the original developers.

The purpose of lime is to explain the predictions of black box classifiers. For any given prediction and any given classifier, it is able to determine a small set of features in the original data that have driven the outcome of the prediction. To learn more about the methodology of lime, read the paper and visit the repository of the original implementation.
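The core idea can be illustrated in a few lines of base R. This is a minimal, illustrative sketch only, not the package's actual implementation (which also selects features and handles factors, text, and images): perturb the observation, weight the perturbations by proximity, and fit a simple interpretable model locally.

```r
# Illustrative sketch of the LIME idea: explain a black-box model f at a
# point x by fitting a weighted linear model to perturbations of x.
set.seed(1)
f <- function(X) sin(X[, 1]) + X[, 2]^2   # stand-in black-box model
x <- c(1, 2)                              # observation to explain

# 1. Perturb the observation
perturbed <- matrix(rnorm(200 * 2, mean = x, sd = 0.5), ncol = 2, byrow = TRUE)

# 2. Weight perturbations by their proximity to x
d <- sqrt(rowSums(sweep(perturbed, 2, x)^2))
w <- exp(-d^2 / 0.25)

# 3. Fit a simple, interpretable model locally
local_fit <- lm(f(perturbed) ~ perturbed, weights = w)
coef(local_fit)  # local feature effects approximate f around x
```

The coefficients of the weighted linear fit play the role of the feature weights lime reports for each explanation.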

The lime package for R does not aim to be a line-by-line port of its Python counterpart. Instead it takes the ideas laid out in the original code and implements them in an API that is idiomatic to R.

An example

Out of the box, lime supports a wide range of models, e.g. those created with caret, parsnip, and mlr. Support for unsupported models is easy to add by writing a predict_model and model_type method for the given model.
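As a hedged sketch, extending lime to a hypothetical model class `my_model` (the class name and its predict() interface are assumptions for illustration) would look like this:

```r
library(lime)

# Tell lime whether the model does classification or regression
model_type.my_model <- function(x, ...) {
  'classification'
}

# Return predictions as a data.frame; for classifiers lime expects one
# column of class probabilities per label
predict_model.my_model <- function(x, newdata, type, ...) {
  res <- predict(x, newdata = newdata, type = 'prob')
  as.data.frame(res)
}
```

Because these are S3 methods, lime will dispatch on the class of the model object automatically; see the model_support help topic for the expected return formats.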

The following shows how a random forest model is trained on the iris data set and how lime is then used to explain a set of new observations:


library(caret)
library(lime)

# Split up the data set
iris_test <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab <- iris[[5]][-(1:5)]

# Create Random Forest model on iris data
model <- train(iris_train, iris_lab, method = 'rf')

# Create an explainer object
explainer <- lime(iris_train, model)

# Explain new observation
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# The output is provided in a consistent tabular format and includes the
# output from the model.
#> # A tibble: 10 x 13
#>    model_type case  label label_prob model_r2 model_intercept
#>    <chr>      <chr> <chr>      <dbl>    <dbl>           <dbl>
#>  1 classific… 1     seto…          1    0.680           0.120
#>  2 classific… 1     seto…          1    0.680           0.120
#>  3 classific… 2     seto…          1    0.675           0.125
#>  4 classific… 2     seto…          1    0.675           0.125
#>  5 classific… 3     seto…          1    0.682           0.122
#>  6 classific… 3     seto…          1    0.682           0.122
#>  7 classific… 4     seto…          1    0.667           0.128
#>  8 classific… 4     seto…          1    0.667           0.128
#>  9 classific… 5     seto…          1    0.678           0.121
#> 10 classific… 5     seto…          1    0.678           0.121
#> # … with 7 more variables: model_prediction <dbl>, feature <chr>,
#> #   feature_value <dbl>, feature_weight <dbl>, feature_desc <chr>,
#> #   data <list>, prediction <list>

# And can be visualised directly
plot_features(explanation)

lime also supports explaining image and text models. For image explanations the relevant areas in an image can be highlighted:

explanation <- .load_image_example()
plot_image_explanation(explanation)


Here we see that the second most probable class is hardly true, but is due to the model picking up waxy areas of the produce and interpreting them as a wax-like surface.

For text the explanation can be shown by highlighting the important words. It even includes a shiny application for interactively exploring text models:

(animation: interactive text explainer)
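A hedged sketch of the text workflow (it assumes a character vector `train_text` used for training and a fitted classifier `model` that lime knows how to predict with; both names are placeholders):

```r
library(lime)

# Build an explainer from the training text and the fitted model
explainer <- lime(train_text, model)

# Explain a new piece of text and highlight the important words
explanation <- explain('This is the sentence to explain', explainer,
                       n_labels = 1, n_features = 3)
plot_text_explanations(explanation)

# Launch the shiny application for interactive exploration
interactive_text_explanations(explainer)
```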


Installation

lime is available on CRAN and can be installed using the standard approach:

install.packages('lime')

To get the development version, install from GitHub instead:

# install.packages('devtools')
devtools::install_github('thomasp85/lime')

Code of Conduct

Please note that the ‘lime’ project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Functions in lime

Name Description
plot_superpixels Test super pixel segmentation
plot_features Plot the features in an explanation
plot_image_explanation Display image explanations as superpixel areas
slic Segment image into superpixels
train_sentences Sentence corpus - train part
plot_text_explanations Plot text explanations
stop_words_sentences Stop words list
test_sentences Sentence corpus - test part
lime Create a model explanation function based on training data
.load_image_example Load an example image explanation
as_classifier Indicate model type to lime
.load_text_example Load an example text explanation
default_tokenize Default function to tokenize
explain Explain model predictions
interactive_text_explanations Interactive explanations
lime-package lime: Local Interpretable Model-Agnostic Explanations
plot_explanations Plot a condensed overview of all explanations
model_support Methods for extending limes model support


Type Package
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
VignetteBuilder knitr
LinkingTo Rcpp, RcppEigen
NeedsCompilation yes
Packaged 2019-11-12 07:34:00 UTC; thomas
Repository CRAN
Date/Publication 2019-11-12 08:20:02 UTC
