
Shapley
Shapley computes feature contributions for single predictions with the Shapley value, an approach from cooperative game theory.
The feature values of an instance cooperate to achieve the prediction.
The Shapley value fairly distributes the difference between the instance's prediction and the dataset's average prediction among the feature values.
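This means the estimated per-feature contributions sum to exactly that difference (the efficiency property). A minimal sketch of this check, assuming the `shapley` object fitted in the Examples section below:
# Sketch only: assumes the `shapley` object from the Examples section.
# The estimated contributions (phi) approximately sum to the difference
# between the instance's prediction and the average prediction:
sum(shapley$results$phi)
shapley$y.hat.interest - shapley$y.hat.average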
An R6Class object.
shapley = Shapley$new(predictor, x.interest = NULL, sample.size = 100, run = TRUE)
plot(shapley)
shapley$results
print(shapley)
shapley$explain(x.interest)
For Shapley$new():
predictor: (Predictor) The object (created with Predictor$new()) holding the machine learning model and the data.
x.interest: (data.frame) Single row with the instance to be explained.
sample.size: (`numeric(1)`) The number of Monte Carlo samples for estimating the Shapley value (a sketch of this trade-off follows this list).
run: (`logical(1)`) Should the interpretation method be run?
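A minimal sketch of the accuracy/runtime trade-off controlled by sample.size, assuming the `mod` and `x.interest` objects constructed in the Examples section below:
# Sketch only: `mod` and `x.interest` as constructed in the Examples section.
# Fewer Monte Carlo samples run faster but give noisier estimates:
shapley.fast = Shapley$new(mod, x.interest = x.interest, sample.size = 10)
# More samples are slower but stabilize the estimated phi values:
shapley.precise = Shapley$new(mod, x.interest = x.interest, sample.size = 1000)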
Fields:
predictor: (Predictor) The object (created with Predictor$new()) holding the machine learning model and the data.
results: (data.frame) data.frame with the Shapley values (phi) per feature (a short access sketch follows this list).
sample.size: (`numeric(1)`) The number of times coalitions/marginals are sampled from data X. The higher, the more accurate the explanations become.
x.interest: (data.frame) Single row with the instance to be explained.
y.hat.interest: (numeric) Predicted value for the instance of interest.
y.hat.average: (`numeric(1)`) Average predicted value for data X.
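These fields can be read directly from a fitted object. A short sketch, assuming the `shapley` object from the Examples section below:
# Sketch only: assumes the `shapley` object from the Examples section.
head(shapley$results)    # one row per feature with its estimated phi
shapley$y.hat.interest   # prediction for the explained instance
shapley$y.hat.average    # average prediction over data X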
Methods:
explain(x.interest): method to set a new data point to explain (a usage sketch follows this list).
plot(): method to plot the Shapley values. See plot.Shapley.
run(): [internal] method to run the interpretability method. Use obj$run(force = TRUE) to force a rerun.
clone(): [internal] method to clone the R6 object.
initialize(): [internal] method to initialize the R6 object.
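A short sketch of re-using one Shapley object for several explanations, assuming the `shapley` and `X` objects from the Examples section below:
# Sketch only: `shapley` and `X` as constructed in the Examples section.
shapley$explain(X[3, ])    # set a new instance to explain; results are recomputed
shapley$run(force = TRUE)  # force a rerun, e.g. to redraw the Monte Carlo samples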
For more details on the algorithm see https://christophm.github.io/interpretable-ml-book/shapley.html
Strumbelj, E., Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3), 647-665. https://doi.org/10.1007/s10115-013-0679-x
A different way to explain predictions: LocalModel
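A minimal sketch of that alternative, assuming iml's LocalModel class and the `mod` and `x.interest` objects from the Examples section (the exact call shown here is an assumption):
# Sketch only: LocalModel fits an interpretable local surrogate model
# around the instance instead of distributing the prediction difference.
lime = LocalModel$new(mod, x.interest = x.interest)
lime$results
plot(lime)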
Examples:
if (require("randomForest")) {
# First we fit a machine learning model on the Boston housing data
data("Boston", package = "MASS")
rf = randomForest(medv ~ ., data = Boston, ntree = 50)
X = Boston[-which(names(Boston) == "medv")]
mod = Predictor$new(rf, data = X)
# Then we explain the first instance of the dataset with the Shapley method:
x.interest = X[1,]
shapley = Shapley$new(mod, x.interest = x.interest)
shapley
# Look at the results in a table
shapley$results
# Or as a plot
plot(shapley)
# Explain another instance
shapley$explain(X[2,])
plot(shapley)
# Shapley() also works with multiclass classification
rf = randomForest(Species ~ ., data = iris, ntree = 50)
X = iris[-which(names(iris) == "Species")]
predict.fun = function(object, newdata) predict(object, newdata, type = "prob")
mod = Predictor$new(rf, data = X, predict.fun = predict.fun)
# Then we explain the first instance of the dataset with the Shapley() method:
shapley = Shapley$new(mod, x.interest = X[1,])
shapley$results
plot(shapley)
# You can also focus on one class
mod = Predictor$new(rf, data = X, predict.fun = predict.fun, class = "setosa")
shapley = Shapley$new(mod, x.interest = X[1,])
shapley$results
plot(shapley)
}