This Is How A.I. Bias Really Happens — and Why It’s So Hard to Fix

Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it

MIT Technology Review

Illustration: Ms. Tech; photo: Pixologicstudio/Science Photo Library/Getty

By Karen Hao

Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security, and may already be doing so in the criminal legal system.

But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.

How A.I. Bias Happens

We often shorthand our explanation of A.I. bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected, as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.

1. Framing the Problem


MIT Technology Review
Reporting on important technologies and innovators since 1899