Fairness and Bias

Avoid the black-box use of fairness metrics in machine learning by applying modern explainable AI methods to measures of fairness.

This hands-on article connects explainable AI with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. Using SHAP (a popular explainable AI tool), we can decompose measures of fairness and allocate responsibility for any observed disparity among the model’s input features. Explaining these quantitative fairness metrics can reduce the concerning tendency to rely on them as opaque standards of fairness, and instead promote their informed use as tools for understanding how model behavior differs between groups.

Quantitative fairness metrics seek to bring mathematical precision to the definition of fairness in…
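The decomposition described above rests on SHAP's additivity: a prediction equals a base value plus the sum of per-feature SHAP values, so a between-group gap in average predictions splits exactly into per-feature contributions. Here is a minimal sketch of that idea using a fixed linear model, where SHAP values have the closed form w_j · (x_j − E[x_j]); the synthetic data, feature names, and group labels are illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two groups (0/1) whose income distributions differ,
# while debt is independent of group membership.
n = 1000
group = rng.integers(0, 2, size=n)
income = rng.normal(50 + 10 * group, 5, size=n)
debt = rng.normal(20, 5, size=n)
X = np.column_stack([income, debt])

# A fixed linear "model": score = X @ w + b.
w = np.array([0.8, -0.5])
b = 1.0
pred = X @ w + b

# For a linear model with independent features, the SHAP value of
# feature j is w_j * (x_j - E[x_j]) -- the additive decomposition.
shap_values = w * (X - X.mean(axis=0))

# Overall disparity: difference in average score between the groups.
disparity = pred[group == 1].mean() - pred[group == 0].mean()

# Each feature's share of the disparity: the between-group difference
# in its mean SHAP value.
per_feature = (shap_values[group == 1].mean(axis=0)
               - shap_values[group == 0].mean(axis=0))

print(f"total disparity: {disparity:.3f}")
for name, c in zip(["income", "debt"], per_feature):
    print(f"  {name}: {c:+.3f}")

# Additivity guarantees the per-feature pieces sum to the total.
assert np.isclose(per_feature.sum(), disparity)
```

In this toy setup nearly all of the disparity is attributed to `income`, the one feature correlated with group membership; for a real tree ensemble, the `shap` library's explainers would supply the per-feature values instead of the closed form used here.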


Model Interpretability

This is a story about the danger of interpreting your machine learning model incorrectly, and the value of interpreting it correctly. If you have found the robust accuracy of ensemble tree models such as gradient boosting machines or random forests attractive, but also need to interpret them, then I hope you find this informative and helpful.

Imagine we are tasked with predicting a person’s financial status for a bank. The more accurate our model, the more money the bank makes, but since this prediction is used for loan applications we are also legally required to provide an explanation for why…
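As a sketch of that setting, the following trains a gradient boosting classifier on hypothetical loan-style data and reads off its built-in global importances. The data and feature names are illustrative assumptions; note that a global importance ranking alone is not the per-applicant explanation the lending scenario requires:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic loan data (illustrative feature names, not from the article).
n = 2000
X = np.column_stack([
    rng.normal(50, 15, n),    # income (k$)
    rng.uniform(0, 1, n),     # debt-to-income ratio
    rng.integers(0, 30, n),   # years of credit history
])

# Repayment odds depend mostly on income and debt ratio.
logit = 0.05 * X[:, 0] - 3.0 * X[:, 1] + 0.02 * X[:, 2] - 1.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Built-in global importances: a quick summary of the whole model,
# but silent on why any one applicant was scored the way they were.
for name, imp in zip(["income", "debt_ratio", "history_yrs"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Per-prediction attribution methods such as SHAP's tree explainer are one way to close that gap between a global ranking and an explanation for an individual loan decision.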

Scott Lundberg

Senior Researcher at Microsoft Research
