🚀 Original article link

This is an introduction to explaining machine learning models with Shapley values. Shapley values are a widely used approach from cooperative game theory that come with desirable properties. This tutorial is designed to help build a solid understanding of how to compute and interpret Shapley-based explanations of machine learning models. We will take a practical hands-on approach, using the shap Python package to explain progressively more complex models. This is a living document, and serves as an introduction to the shap Python package. So if you have feedback or contributions please open an issue or pull request to make this tutorial better!

Note: this tutorial requires the following libraries to be installed:

```
$ pip install interpret transformers nlp
```

1. Explaining a linear regression model

Before using Shapley values to explain complicated models, it is helpful to understand how they work for simple models. One of the simplest model types is standard linear regression, and so below we train a linear regression model on the classic Boston housing dataset. This dataset consists of 506 neighborhood regions around Boston in 1978, where our goal is to predict the median home price MEDV (in thousands of dollars) in each neighborhood from 13 different features:

  1. CRIM - per capita crime rate by town
  2. ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
  3. INDUS - proportion of non-retail business acres per town
  4. CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
  5. NOX - nitric oxides concentration (parts per 10 million)
  6. RM - average number of rooms per dwelling
  7. AGE - proportion of owner-occupied units built prior to 1940
  8. DIS - weighted distances to five Boston employment centres
  9. RAD - index of accessibility to radial highways
  10. TAX - full-value property-tax rate per $10,000
  11. PTRATIO - pupil-teacher ratio by town
  12. B - 1000(Bk - 0.63)^2 where Bk is the proportion of Black residents by town
  13. LSTAT - % lower status of the population
  14. MEDV - Median value of owner-occupied homes in $1000's (the prediction target)

```python
import pandas as pd
import shap
import sklearn

# a classic housing price dataset
X, y = shap.datasets.boston()
X100 = shap.utils.sample(X, 100)  # 100 instances for use as the background distribution

# a simple linear model
model = sklearn.linear_model.LinearRegression()
model.fit(X, y)
# LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
```

1.1 Examining the model coefficients

The most common way of understanding a linear model is to examine the coefficients learned for each feature. These coefficients tell us how much the model output changes when we change each of the input features:

  1. print("Model coefficients:\n")
  2. for i in range(X.shape[1]):
  3. print(X.columns[i], "=", model.coef_[i].round(4))

Output:

```
Model coefficients:

CRIM = -0.108
ZN = 0.0464
INDUS = 0.0206
CHAS = 2.6867
NOX = -17.7666
RM = 3.8099
AGE = 0.0007
DIS = -1.4756
RAD = 0.306
TAX = -0.0123
PTRATIO = -0.9527
B = 0.0093
LSTAT = -0.5248
```

While coefficients are great for telling us what will happen when we change the value of an input feature, by themselves they are not a great way to measure the overall importance of a feature. This is because the value of each coefficient depends on the scale of the input features. If, for example, we were to measure the age of a home in minutes instead of years, then the coefficient for the AGE feature would shrink to 0.0007 / (365 * 24 * 60) ≈ 1.3e-9. Clearly the age of a house is no less important when it is expressed in minutes, yet its coefficient is now far smaller. This means that the magnitude of a coefficient is not necessarily a good measure of a feature's importance in a linear model.
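As a quick sanity check of this scale dependence, we can refit the same model with AGE rescaled to minutes (a minimal sketch using the X, y, and model objects defined above):

```python
# refit the same linear model with AGE expressed in minutes instead of years
X_minutes = X.copy()
X_minutes["AGE"] = X_minutes["AGE"] * 365 * 24 * 60

model_minutes = sklearn.linear_model.LinearRegression()
model_minutes.fit(X_minutes, y)

age_idx = list(X.columns).index("AGE")
print("AGE coefficient (years):  ", model.coef_[age_idx])
print("AGE coefficient (minutes):", model_minutes.coef_[age_idx])
# the two models make identical predictions, but the AGE coefficient is now roughly
# 525,600 times smaller -- its magnitude tracks the units, not the feature's importance
```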

1.2 A more complete picture using partial dependence plots

To understand a feature’s importance in a model it is necessary to understand both how changing that feature impacts the model’s output, and also the distribution of that feature’s values. To visualize this for a linear model we can build a classical partial dependence plot and show the distribution of feature values as a histogram on the x-axis:

```python
shap.plots.partial_dependence(
    "RM", model.predict, X100, ice=False,
    model_expected_value=True, feature_expected_value=True
)
```

[Figure: partial dependence plot of RM for the linear model, with expected value reference lines]

:::tips
🔖 Note: meaning of the partial_dependence arguments
:::
:::info
  • ice: individual conditional expectation. When True, one line is drawn per background sample (here 100 lines), each showing the predictions obtained by varying only RM while holding that sample's other features fixed.
  • model_expected_value: whether to display the mean of the model's predictions.
  • feature_expected_value: whether to display the mean value of the feature.
:::

The gray horizontal line in the plot above represents the expected value of the model when applied to the Boston housing dataset. The vertical gray line represents the average value of the RM feature. Note that the blue partial dependence plot line (which is the average value of the model output when we fix the RM feature to a given value) always passes through the intersection of the two gray expected value lines. We can consider this intersection point as the "center" of the partial dependence plot with respect to the data distribution. The impact of this centering will become clear when we turn to Shapley values next.
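For a linear model we can verify this centering numerically: fixing RM at its background mean and averaging the predictions over the background data reproduces the expected model output (a minimal sketch using the X100 background from above):

```python
import numpy as np

# partial dependence value at the average RM: fix RM, average over the background
X_fixed = X100.copy()
X_fixed["RM"] = X100["RM"].mean()
pd_at_mean_rm = model.predict(X_fixed).mean()

# expected model output over the background distribution
expected_output = model.predict(X100).mean()

print(pd_at_mean_rm, expected_output)
assert np.isclose(pd_at_mean_rm, expected_output)  # holds exactly for a linear model
```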

:::tips
💡 Aside: why, mathematically, is the plot above a perfectly straight line?
:::
:::info
For a linear regression model, the SHAP value of feature $j$ for a given sample is

$\phi_j = \beta_j (x_j - E[x_j])$

Since the coefficients $\beta_j$ of a linear model are fixed, $\phi_{RM} = \beta_{RM}(x_{RM} - E[x_{RM}])$ is necessarily monotonic (in fact linear) in RM.
:::

If you want to check this yourself, the code below reproduces the SHAP values of 5 samples from this formula, and they match the explainer's output exactly (note that shap_values and sample_ind are defined in section 1.3 below):

```python
# manual linear-model SHAP values: beta_j * (x_j - E[x_j]),
# using the X100 background mean as E[x_j]
(shap_values[sample_ind:sample_ind+5, :].data - X100.mean().values) * model.coef_
```

1.3 Reading SHAP values from partial dependence plots

The core idea behind Shapley value based explanations of machine learning models is to use fair allocation results from cooperative game theory to allocate credit for a model's output $f(x)$ among its input features. In order to connect game theory with machine learning models it is necessary to both match a model's input features with players in a game, and also match the model function with the rules of the game. Since in game theory a player can join or not join a game, we need a way for a feature to "join" or "not join" a model. The most common way to define what it means for a feature to "join" a model is to say that a feature has "joined a model" when we know the value of that feature, and it has not joined a model when we don't know the value of that feature. To evaluate an existing model $f$ when only a subset $S$ of features are part of the model, we integrate out the other features using a conditional expected value formulation. This formulation can take two forms:
$E[f(X) \mid X_S = x_S]$
or
$E[f(X) \mid do(X_S = x_S)]$

In the first form we know the values of the features in $S$ because we observe them. In the second form we know the values of the features in $S$ because we set them. In general, the second form is usually preferable, both because it tells us how the model would behave if we were to intervene and change its inputs, and also because it is much easier to compute. In this tutorial we will focus entirely on the second formulation. We will also use the more specific term SHAP values to refer to Shapley values applied to a conditional expectation function of a machine learning model.
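In practice, the interventional expectation is estimated by holding the "in the game" features fixed at their observed values and filling in the remaining features with samples from a background dataset. The sketch below illustrates the idea (the helper interventional_expectation is ours for illustration, not part of the shap API):

```python
import numpy as np

def interventional_expectation(predict, x_row, S, background):
    """Estimate E[f(X) | do(X_S = x_S)]: overwrite the features in S with the
    values from x_row in every background sample, then average the predictions."""
    X_masked = background.copy()
    for feature in S:
        X_masked[feature] = x_row[feature]  # features in S have "joined the game"
    return predict(X_masked).mean()         # the remaining features are averaged out

x = X.iloc[18]
print("E[f(X)] =", model.predict(X100).mean())
print("E[f(X) | do(RM, LSTAT set to x's values)] =",
      interventional_expectation(model.predict, x, ["RM", "LSTAT"], X100))
```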

SHAP values can be very complicated to compute (they are NP-hard in general), but linear models are so simple that we can read the SHAP values right off a partial dependence plot. When we are explaining a prediction $f(x)$, the SHAP value for a specific feature $i$ is just the difference between the expected model output and the partial dependence plot at the feature's value $x_i$:

```python
# compute the SHAP values for the linear model
explainer = shap.Explainer(model.predict, X100)
shap_values = explainer(X)  # Note: This is X, not X100.

# make a standard partial dependence plot
sample_ind = 18
shap.partial_dependence_plot(
    "RM", model.predict, X100, model_expected_value=True,
    feature_expected_value=True, ice=False,
    shap_values=shap_values[sample_ind:sample_ind+1, :]
)
```

💡 Note: explainer = shap.Explainer(model.predict, X100) raises an error in SHAP 0.37.0; upgrading to the latest version fixes it. Alternatively, you can change it to explainer = shap.Explainer(model, X100).

[Figure: partial dependence plot of RM with the SHAP value for sample 18 overlaid]
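Because the model is linear, this SHAP value can also be written in closed form as β_RM (x_RM − E[x_RM]), where the expectation is taken over the X100 background. A minimal sketch checking that it matches the explainer's output:

```python
rm_idx = list(X.columns).index("RM")

# closed-form SHAP value for RM: beta_RM * (x_RM - E[x_RM])
manual_rm_shap = model.coef_[rm_idx] * (X.iloc[sample_ind]["RM"] - X100["RM"].mean())

# SHAP value returned by the explainer for the same sample
explainer_rm_shap = shap_values[sample_ind].values[rm_idx]

print(manual_rm_shap, explainer_rm_shap)  # the two should agree up to numerical error
```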

The close correspondence between the classic partial dependence plot and SHAP values means that if we plot the SHAP value for a specific feature across a whole dataset we will exactly trace out a mean-centered version of the partial dependence plot for that feature:

  1. shap.plots.scatter(shap_values[:, "RM"])

🙈 An introduction to explainable AI with Shapley values - 图15

1.4 The additive nature of Shapley values

One of the fundamental properties of Shapley values is that they always sum up to the difference between the game outcome when all players are present and the game outcome when no players are present. For machine learning models this means that the SHAP values of all the input features will always sum up to the difference between the baseline (expected) model output and the current model output for the prediction being explained. The easiest way to see this is through a waterfall plot that starts from our background prior expectation for a home price $E[f(X)]$, and then adds features one at a time until we reach the current model output $f(x)$:

```python
# the waterfall plot shows how we get from shap_values.base_values to model.predict(X)[sample_ind]
shap.plots.waterfall(shap_values[sample_ind], max_display=14)
```

[Figure: waterfall plot for sample 18 under the linear model]
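We can confirm this additivity numerically: adding the sample's SHAP values to the baseline recovers the model's prediction (a minimal sketch):

```python
import numpy as np

reconstructed = shap_values[sample_ind].values.sum() + shap_values[sample_ind].base_values
prediction = model.predict(X)[sample_ind]

print(reconstructed, prediction)
assert np.isclose(reconstructed, prediction)  # SHAP values sum to prediction minus baseline
```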

2. Explaining an additive regression model

The reason the partial dependence plots of linear models have such a close connection to SHAP values is that each feature in the model is handled independently of every other feature (the effects are just added together). We can keep this additive nature while relaxing the linear requirement of straight lines. This results in the well-known class of generalized additive models (GAMs). While there are many ways to train these types of models (like setting an XGBoost model to depth-1), here we will use InterpretML's explainable boosting machines, which are specifically designed for this.
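For reference, here is a minimal sketch of the depth-1 XGBoost alternative mentioned above (the hyperparameters are illustrative, not tuned); below we proceed with the EBM instead:

```python
import xgboost

# depth-1 trees split on a single feature each, so the boosted ensemble is additive
model_gam_xgb = xgboost.XGBRegressor(n_estimators=500, max_depth=1, learning_rate=0.1)
model_gam_xgb.fit(X, y)

explainer_gam_xgb = shap.Explainer(model_gam_xgb, X100)
shap_values_gam_xgb = explainer_gam_xgb(X)

# like the EBM below, its RM SHAP values trace out a single (non-linear) curve
shap.plots.scatter(shap_values_gam_xgb[:, "RM"])
```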

```python
# fit a GAM model to the data
import interpret.glassbox
model_ebm = interpret.glassbox.ExplainableBoostingRegressor()
model_ebm.fit(X, y)

# explain the GAM model with SHAP
explainer_ebm = shap.Explainer(model_ebm.predict, X100)
shap_values_ebm = explainer_ebm(X)

# make a standard partial dependence plot with a single SHAP value overlaid
fig, ax = shap.partial_dependence_plot(
    "RM", model_ebm.predict, X, model_expected_value=True,
    feature_expected_value=True, show=False, ice=False,
    shap_values=shap_values_ebm[sample_ind:sample_ind+1, :]
)
```

[Figure: EBM partial dependence plot for RM with the SHAP value for sample 18 overlaid]

```python
shap.plots.scatter(shap_values_ebm[:, "RM"])
```

[Figure: SHAP scatter plot for RM under the EBM model]

```python
# the waterfall plot shows how we get from explainer.expected_value to model.predict(X)[sample_ind]
shap.plots.waterfall(shap_values_ebm[sample_ind], max_display=14)
```

[Figure: waterfall plot for sample 18 under the EBM model]

```python
# a beeswarm plot summarizes the distribution of SHAP values for every feature
shap.plots.beeswarm(shap_values_ebm, max_display=14)
```

3. Explaining a non-additive boosted tree model

```python
# train an XGBoost model
import xgboost
model_xgb = xgboost.XGBRegressor(n_estimators=100, max_depth=2).fit(X, y)

# explain the XGBoost model with SHAP
explainer_xgb = shap.Explainer(model_xgb, X100)
shap_values_xgb = explainer_xgb(X)

# make a standard partial dependence plot with a single SHAP value overlaid
fig, ax = shap.partial_dependence_plot(
    "RM", model_xgb.predict, X, model_expected_value=True,
    feature_expected_value=True, show=False, ice=False,
    shap_values=shap_values_xgb[sample_ind:sample_ind+1, :]
)
```

[Figure: XGBoost partial dependence plot for RM with the SHAP value for sample 18 overlaid]

```python
shap.plots.scatter(shap_values_xgb[:, "RM"])
```

[Figure: SHAP scatter plot for RM under the XGBoost model; the vertical dispersion reflects interaction effects]

```python
shap.plots.scatter(shap_values_xgb[:, "RM"], color=shap_values_xgb)
```

[Figure: SHAP scatter plot for RM, colored by the feature with the strongest interaction with RM]

4. Explaining a linear logistic regression model

```python
import shap
from sklearn.linear_model import LogisticRegression

# a classic adult census dataset
X_adult, y_adult = shap.datasets.adult()

# a simple linear logistic model
model_adult = LogisticRegression(max_iter=10000)
model_adult.fit(X_adult, y_adult)

def model_adult_proba(x):
    return model_adult.predict_proba(x)[:, 1]

def model_adult_log_odds(x):
    p = model_adult.predict_log_proba(x)
    return p[:, 1] - p[:, 0]
```

Note that the probability output of a linear logistic regression model is not a linear function of the inputs:
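This is because a logistic regression is linear only in log-odds space: the probability output is a sigmoid of a linear score. A minimal sketch of that relationship, using the model and wrapper functions defined above:

```python
import numpy as np
import scipy.special

# the linear score of the logistic regression (equals X @ coef_ + intercept_)
linear_score = model_adult.decision_function(X_adult)

# the log-odds wrapper is exactly this linear score ...
assert np.allclose(model_adult_log_odds(X_adult), linear_score)

# ... while the probability wrapper is a sigmoid of it, hence non-linear in the inputs
assert np.allclose(model_adult_proba(X_adult), scipy.special.expit(linear_score))
```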

```python
# make a standard partial dependence plot
sample_ind = 18
fig, ax = shap.partial_dependence_plot(
    "Capital Gain", model_adult_proba, X_adult, model_expected_value=True,
    feature_expected_value=True, show=False, ice=False
)
```

[Figure: partial dependence plot of Capital Gain in probability space, showing a curved relationship]

If we use SHAP to explain the probability output of a linear logistic regression model we see strong interaction effects. This is because a linear logistic regression model is NOT additive in the probability space.

```python
# compute the SHAP values for the linear model
background_adult = shap.maskers.Independent(X_adult, max_samples=100)
explainer = shap.Explainer(model_adult_proba, background_adult)
shap_values_adult = explainer(X_adult[:1000])

shap.plots.scatter(shap_values_adult[:, "Age"])
```

[Figure: SHAP scatter plot for Age in probability space, showing vertical dispersion from interaction effects]

If we instead explain the log-odds output of the model we see a perfect linear relationship between the model's inputs and the model's outputs. It is important to remember what the units are of the model you are explaining, and that explaining different model outputs can lead to very different views of the model's behavior.

```python
# compute the SHAP values for the linear model
explainer_log_odds = shap.Explainer(model_adult_log_odds, background_adult)
shap_values_adult_log_odds = explainer_log_odds(X_adult[:1000])

shap.plots.scatter(shap_values_adult_log_odds[:, "Age"])
```

[Figure: SHAP scatter plot for Age in log-odds space, showing a perfectly straight line]

```python
# make a standard partial dependence plot
sample_ind = 18
fig, ax = shap.partial_dependence_plot(
    "Age", model_adult_log_odds, X_adult, model_expected_value=True,
    feature_expected_value=True, show=False, ice=False
)
```

[Figure: partial dependence plot of Age in log-odds space]

5. Explaining a non-additive boosted tree logistic regression model

```python
# train an XGBoost model
model = xgboost.XGBClassifier(n_estimators=100, max_depth=2).fit(X_adult, y_adult)

# compute SHAP values
explainer = shap.Explainer(model, background_adult)
shap_values = explainer(X_adult)

# set a display version of the data to use for plotting (has string values)
shap_values.display_data = shap.datasets.adult(display=True)[0].values
```

By default a SHAP bar plot will take the mean absolute value of each feature's SHAP values over all the instances (rows) of the dataset.

```python
shap.plots.bar(shap_values)
```

[Figure: bar plot of mean |SHAP value| per feature]
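Equivalently, the bar heights above can be computed directly from the raw SHAP value matrix (a minimal sketch):

```python
import numpy as np
import pandas as pd

# mean |SHAP value| per feature -- the quantity shown by the default bar plot
global_importance = pd.Series(
    np.abs(shap_values.values).mean(axis=0),
    index=shap_values.feature_names,
).sort_values(ascending=False)
print(global_importance)
```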

But the mean absolute value is not the only way to create a global measure of feature importance; we can use any number of transforms. Here we show how using the max absolute value highlights the Capital Gain and Capital Loss features, since they have infrequent but high-magnitude effects.

```python
shap.plots.bar(shap_values.abs.max(0))
```

[Figure: bar plot of max |SHAP value| per feature]

If we are willing to deal with a bit more complexity, we can use a beeswarm plot to summarize the entire distribution of SHAP values for each feature.

```python
shap.plots.beeswarm(shap_values)
```

[Figure: beeswarm plot of SHAP values for each feature]
By taking the absolute value and using a solid color we get a compromise between the complexity of the bar plot and the full beeswarm plot. Note that the bar plots above are just summary statistics of the values shown in these beeswarm plots.

```python
shap.plots.beeswarm(shap_values.abs, color="shap_red")
```

[Figure: beeswarm plot of absolute SHAP values in a single color]

```python
shap.plots.heatmap(shap_values[:1000])
```

[Figure: heatmap of SHAP values for the first 1000 samples]

  1. shap.plots.scatter(shap_values[:, "Age"])

🙈 An introduction to explainable AI with Shapley values - 图34

  1. shap.plots.scatter(shap_values[:, "Age"], color=shap_values)

🙈 An introduction to explainable AI with Shapley values - 图35

  1. shap.plots.scatter(shap_values[:, "Age"], color=shap_values[:, "Capital Gain"])

🙈 An introduction to explainable AI with Shapley values - 图36

  1. shap.plots.scatter(shap_values[:, "Relationship"], color=shap_values)

🙈 An introduction to explainable AI with Shapley values - 图37

6. Dealing with correlated features

```python
clustering = shap.utils.hclust(X_adult, y_adult)
shap.plots.bar(shap_values, clustering=clustering)
```

[Figure: bar plot with a feature clustering dendrogram]

```python
shap.plots.bar(shap_values, clustering=clustering, clustering_cutoff=0.8)
```

[Figure: bar plot showing more of the clustering structure (clustering_cutoff=0.8)]

7. Explaining a transformers NLP model

This demonstrates how SHAP can be applied to complex model types with highly structured inputs.

```python
import transformers
import nlp
import torch
import numpy as np
import scipy as sp

# load a BERT sentiment analysis model
tokenizer = transformers.DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = transformers.DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
).cuda()

# define a prediction function
def f(x):
    tv = torch.tensor([tokenizer.encode(v, padding='max_length', max_length=500, truncation=True) for v in x]).cuda()
    outputs = model(tv)[0].detach().cpu().numpy()
    scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
    val = sp.special.logit(scores[:, 1])  # use one vs rest logit units
    return val

# build an explainer using a token masker
explainer = shap.Explainer(f, tokenizer)

# explain the model's predictions on IMDB reviews
imdb_train = nlp.load_dataset("imdb")["train"]
shap_values = explainer(imdb_train[:10], fixed_context=1)

# plot the first review's explanation
shap.plots.text(shap_values[0])
```

[Figure: text plot of SHAP values for the first review]

```python
shap.plots.bar(shap_values.abs.mean(0))
```

[Figure: bar plot of mean |SHAP value| per token]

```python
shap.plots.bar(shap_values.abs.sum(0))
```

[Figure: bar plot of summed |SHAP value| per token]