Five Tools for Detecting Algorithmic Bias in AI

Matthew Pennington
LegalTech News & Reviews
Oct 13, 2018

With the release of a cloud tool that detects algorithmic bias in AI systems and explains automated decision-making, IBM becomes the latest provider of machine learning systems to seek to combat algorithmic bias.

There have been several high-profile instances of algorithmic bias recently, the latest being Amazon scrapping a ‘sexist AI’ recruitment tool after members of the team said it had taught itself that male candidates were preferable.

The problem of algorithmic bias has even led to the formation of the Algorithmic Justice League (AJL), launched by Joy Buolamwini in 2016 whilst at MIT.

As Buolamwini succinctly puts it: “Because algorithms can have real world consequences, we must demand fairness.”

With other tech giants such as Microsoft and Facebook reportedly working on their own solutions to combat algorithmic bias, we take a look at five tools already out there.

Pymetrics: Audit AI

URL: https://github.com/pymetrics/audit-ai

Developed by the Data Science team at Pymetrics, audit-AI is a tool to measure and mitigate the effects of discriminatory patterns in training data and in the predictions made by machine learning algorithms trained for socially sensitive decision processes.

According to the github overview about the project: “While identifying potential bias in training datasets and by consequence the machine learning algorithms trained on them is not sufficient to solve the problem of discrimination, in a world where more and more decisions are being automated by Artificial Intelligence, our ability to understand and identify the degree to which an algorithm is fair or biased is a step in the right direction.”

The library implements a number of bias-testing and algorithm-auditing techniques, including the following (the 4/5ths rule is sketched in code after the list):

Classification tasks

  • 4/5th, fisher, z-test, bayes factor, chi squared
  • sim_beta_ratio, classifier_posterior_probabilities

Regression tasks

  • anova
  • 4/5th, fisher, z-test, bayes factor, chi squared
  • group proportions at different thresholds
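
To make the 4/5ths rule concrete: it compares each group's selection rate against the highest-rated group's, and flags ratios below 0.8 as potential adverse impact. The sketch below is a library-agnostic illustration in plain Python; the function and variable names are ours, not audit-AI's API.

```python
# Illustrative sketch of the 4/5ths (adverse impact) rule that audit-AI tests for.
# This is plain Python, not audit-AI's API; all names here are our own.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 selection outcomes."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the conventional 4/5ths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: (rate / best, rate / best >= threshold)
            for group, rate in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% selected
}
print(four_fifths_check(decisions))
# group_b's ratio is 0.5 (< 0.8), so it would be flagged for review.
```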

DataScience.com Labs: Skater

URL: https://www.datascience.com/resources/tools/skater

Skater is a Python library designed to help explain how complex “black-box” models work.

According to the library overview on Github: “Skater is a unified framework to enable Model Interpretation for all forms of model to help one build an Interpretable machine learning system often needed for real world use-cases. It is an open source python library designed to demystify the learned structures of a black box model both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction).”
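
Skater ships its own classes for this, but as a library-agnostic taste of the core idea, the sketch below performs global black-box interpretation with permutation feature importance using scikit-learn: shuffle one feature at a time and watch how much the model's score degrades. It illustrates the technique Skater implements rather than Skater's own API.

```python
# Library-agnostic sketch of global black-box interpretation via permutation
# feature importance (one of the techniques Skater implements). This uses
# scikit-learn, not Skater's own API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```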

Google: What-If Tool

Link: https://pair-code.github.io/what-if-tool

The What-If Tool was released as a new feature of the open-source TensorBoard web application. It lets users analyse a machine learning model without writing code, offering an interactive visual interface for exploring model results.

According to a post on the Google AI Blog, the features of the What-If Tool include:

“visualizing your dataset automatically using Facets, the ability to manually edit examples from your dataset and see the effect of those changes, and automatic generation of partial dependence plots which show how the model’s predictions change as any single feature is changed.”

The post also explores two core features in more detail:

  • Counterfactuals — Lets you compare a datapoint to the most similar point for which your model predicts a different result (a sketch of the idea follows this list).
  • Analysis of Performance and Algorithmic Fairness — Lets you explore the effects of different classification thresholds, taking into account constraints such as different numerical fairness criteria.
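
The counterfactual idea is easy to approximate outside the tool: given a trained model, find the nearest datapoint that the model classifies differently. The sketch below is a minimal illustration (using L1 distance for simplicity), not the What-If Tool's own implementation.

```python
# Minimal sketch of the counterfactual idea: the nearest datapoint that the
# model scores differently. Not the What-If Tool's implementation; L1 distance
# is chosen here for simplicity.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
preds = model.predict(X)

def nearest_counterfactual(i):
    """Index of the closest point (L1) whose predicted class differs from point i."""
    different = np.where(preds != preds[i])[0]
    distances = np.abs(X[different] - X[i]).sum(axis=1)
    return different[np.argmin(distances)]

j = nearest_counterfactual(0)
print(f"point 0 predicted {preds[0]}, nearest counterfactual is point {j} "
      f"(predicted {preds[j]}); feature delta: {X[j] - X[0]}")
```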

IBM: AI Fairness 360 Open Source Toolkit

Link: http://aif360.mybluemix.net/

The AI Fairness 360 package includes a set of metrics for datasets and models to test for biases, explanations for those metrics, and algorithms to mitigate bias in datasets and models.

According to IBM: “This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 30 fairness metrics and 9 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.”
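
To give a flavour of the toolkit, the sketch below builds a tiny synthetic dataset, measures disparate impact, and applies the Reweighing pre-processing algorithm. It follows the patterns in AIF360's documentation, but the dataset and column names are invented for illustration.

```python
# Sketch following AIF360's documented patterns: measure bias in a toy dataset
# and mitigate it with the Reweighing pre-processor. The data and column names
# are made up for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: `sex` is the protected attribute (1 = privileged group),
# `hired` is the favourable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.5],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("disparate impact:", metric.disparate_impact())  # 1.0 means parity

# Reweighing assigns instance weights that balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
print("instance weights:", transformed.instance_weights)
```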

Accenture: Teach & Test AI Framework

Link: https://www.accenture.com/us-en/insights/technology/testing-AI

Accenture’s “Teach and Test” AI Framework is designed to help companies build, monitor and measure reliable AI systems within their own infrastructure or in the cloud.

According to Accenture, the “Teach and Test” methodology:

“ensures that AI systems are producing the right decisions in two phases. The “Teach” phase focuses on the choice of data, models and algorithms that are used to train machine learning. This phase experiments and statistically evaluates different models to select the best performing model to be deployed into production, while avoiding gender, ethnic and other biases, as well as ethical and compliance risks.”

In an article on the Accenture website, Kishore Durg, Growth and Strategy Lead for Accenture Technology Services, explains how using the Teach and Test framework to create responsible AI systems is similar to bringing up children:

“When you raise AI systems and just like kids, you need to teach it the right way. One of the things that we need to be worried about — lot of the AI systems right now have gender and ethnic biases. The corpus of data that is used to train them are managed by humans. When you actually use the same data to train these AI systems, you are going to perpetuate the biases that you have, into a system. Now this could be different in different parts of the world. In the Teach phase we try to neutralize these biases. Just like kids make mistakes as they learn new things. And when kids make mistakes, we teach them how to do it. We also have the Test phase where we monitor for behaviours that are not ethically right, and we address it. So, it’s a very simple concept of Teach and Test. It’s just like bringing up your kids.”

Originally published at Technomancers — LegalTech Blog.
