iml (version 0.9.0)

Shapley: Prediction explanations with game theory

Description

Shapley computes feature contributions for single predictions with the Shapley value, an approach from cooperative game theory. The feature values of an instance cooperate to achieve the prediction. The Shapley value fairly distributes the difference between the instance's prediction and the dataset's average prediction among the features.
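
The Shapley values satisfy the efficiency property: they sum (up to sampling error) to the difference between the instance's prediction and the average prediction. A minimal sketch of this check, assuming a fitted regression `shapley` object as in the Examples below:

sum(shapley$results$phi)
# should be close to:
shapley$y.hat.interest - shapley$y.hat.average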

Format

R6Class object.

Usage

shapley = Shapley$new(predictor, x.interest = NULL, sample.size = 100)

plot(shapley)
shapley$results
print(shapley)
shapley$explain(x.interest)

Arguments

For Shapley$new():

predictor:

(Predictor) The object (created with Predictor$new()) holding the machine learning model and the data.

x.interest:

(data.frame) Single row with the instance to be explained.

sample.size:

(`numeric(1)`) The number of Monte Carlo samples for estimating the Shapley value.

Fields

predictor:

(Predictor) The object (created with Predictor$new()) holding the machine learning model and the data.

results:

(data.frame) A data.frame with the Shapley values (phi) per feature.

sample.size:

(`numeric(1)`) The number of times coalitions/marginals are sampled from the data X. The higher the number, the more accurate the explanations become (see the sketch following these fields).

x.interest:

(data.frame) Single row with the instance to be explained.

y.hat.interest:

(numeric) Predicted value for the instance of interest.

y.hat.average:

(`numeric(1)`) Average predicted value for the data X.
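
Because the Shapley values are Monte Carlo estimates, their variance shrinks as sample.size grows. A rough stability check, assuming the `mod` and `x.interest` objects from the Examples below (the exact numbers will differ between runs):

s.coarse = Shapley$new(mod, x.interest = x.interest, sample.size = 10)
s.fine = Shapley$new(mod, x.interest = x.interest, sample.size = 1000)
# phi estimates from the larger sample should fluctuate less across re-runs
cbind(phi.10 = s.coarse$results$phi, phi.1000 = s.fine$results$phi)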

Methods

explain(x.interest)

Method to set a new data point to explain.

plot()

Method to plot the Shapley values. See plot.Shapley.

clone()

[internal] Method to clone the R6 object.

initialize()

[internal] Method to initialize the R6 object.

Details

For more details on the algorithm, see https://christophm.github.io/interpretable-ml-book/shapley.html
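
As a rough illustration only (not the package's internal code), the Monte Carlo scheme from the reference below can be sketched as follows; `f` (a prediction function taking a one-row data.frame), `X` (the data), `x` (the instance), and `j` (a feature name) are hypothetical stand-ins:

shapley_mc = function(f, X, x, j, m = 100) {
  phi = numeric(m)
  for (k in 1:m) {
    z = X[sample(nrow(X), 1), ]  # random instance drawn from the data
    perm = sample(names(X))      # random feature order
    from.x = perm[seq_len(which(perm == j))]  # features up to and including j
    x.plus = z
    x.plus[from.x] = x[from.x]   # coalition including feature j, taken from x
    x.minus = x.plus
    x.minus[j] = z[j]            # same coalition, but feature j comes from z
    phi[k] = f(x.plus) - f(x.minus)  # marginal contribution of feature j
  }
  mean(phi)
}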

References

Strumbelj, E., Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3), 647-665. https://doi.org/10.1007/s10115-013-0679-x

See Also

A different way to explain predictions: LocalModel

Examples

if (require("rpart")) {
# First we fit a machine learning model on the Boston housing data
data("Boston", package  = "MASS")
rf =  rpart(medv ~ ., data = Boston)
X = Boston[-which(names(Boston) == "medv")]
mod = Predictor$new(rf, data = X)

# Then we explain the first instance of the dataset with the Shapley method:
x.interest = X[1,]
shapley = Shapley$new(mod, x.interest = x.interest)
shapley

# Look at the results in a table
shapley$results
# Or as a plot
plot(shapley)

# Explain another instance
shapley$explain(X[2,])
plot(shapley)

# Shapley also works with multiclass classification
rf = rpart(Species ~ ., data = iris)
X = iris[-which(names(iris) == "Species")]
mod = Predictor$new(rf, data = X, type = "prob")

# Then we explain the first instance of the dataset with the Shapley method:
shapley = Shapley$new(mod, x.interest = X[1,])
shapley$results
plot(shapley) 

# You can also focus on one class
mod = Predictor$new(rf, data = X, type = "prob", class = "setosa")
shapley = Shapley$new(mod, x.interest = X[1,])
shapley$results
plot(shapley) 
}
