Deep Dive on Accumulated Local Effects Plots (ALEs) with Python
Intuition, algorithm and code for using ALEs to explain machine learning models
Highly correlated features can wreak havoc on your model interpretations. They violate the assumptions of many XAI methods and make it difficult to understand the nature of a feature’s relationship with the target. At the same time, it is not always possible to remove them without affecting performance. We need a method that can provide clear interpretations, even with multicollinearity. Thankfully, we can rely on ALEs [1].
ALEs are a global interpretation method. Like PDPs, they show the trends captured by the model: that is, whether a feature has a linear, non-linear or no relationship with the target variable. However, we will see that the way these trends are identified is quite different. We will:
- Give you the intuition for how ALEs are created.
- Formally define the algorithm used to create ALEs.
- Apply ALEs using the Alibi Explain package.
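Before turning to the intuition and the Alibi Explain package, it can help to see the core idea in code. The sketch below is a minimal, hand-rolled first-order ALE in plain NumPy, not Alibi's implementation: it bins a feature by quantiles, averages the local prediction differences across each interval, accumulates them, and centres the result. The function name `ale_1d` and the toy linear model are illustrative assumptions, and details such as the centring step are simplified (the full algorithm weights the centring by bin counts).

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """Minimal first-order ALE sketch for one numeric feature.

    predict : callable mapping an (n, d) array to (n,) predictions
    X       : (n, d) feature matrix
    feature : column index of the feature of interest
    """
    x = X[:, feature]
    # Bin edges at quantiles, so each interval holds a similar number of points
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    # Assign each row to an interval (0 .. len(edges) - 2)
    idx = np.digitize(x, edges[1:-1])

    effects = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        mask = idx == k
        if not mask.any():
            continue
        lo, hi = X[mask].copy(), X[mask].copy()
        lo[:, feature] = edges[k]      # move points to the interval's lower edge
        hi[:, feature] = edges[k + 1]  # ...and to its upper edge
        # Average local prediction difference within the interval
        effects[k] = (predict(hi) - predict(lo)).mean()

    ale = np.concatenate([[0.0], np.cumsum(effects)])  # accumulate local effects
    ale -= ale.mean()  # centre the curve (simplified: unweighted mean)
    return edges, ale

# Toy example: with a linear model, the ALE of x0 is a straight line of slope 3
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
predict = lambda X: 3 * X[:, 0] + X[:, 1]
edges, ale = ale_1d(predict, X, feature=0)
```

Because only points that actually fall inside an interval are shifted, and only along the feature of interest, the method never evaluates the model far outside the data distribution. This is what later makes ALEs robust when features are correlated.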
We will see that, unlike other XAI methods such as SHAP, LIME, ICE plots and Friedman’s H-statistic, ALEs give interpretations that are robust to multicollinearity.