Finding a Fair Way to Tame the Bigoted Bots

New research shows how to combat algorithms that replicate human bias

Financial Times

--

Illustration: milmirko/Getty Images

By Anjana Ahuja

There is bigotry among the bots. Algorithms that are used to make life-changing decisions — rejecting job applicants, identifying prisoners likely to reoffend, even flagging a child thought to be at risk of abuse — have been found to replicate biases in the real world, most controversially along racial lines.

Now computer scientists believe they have a way to identify these flaws. The technique supposedly overcomes a Catch-22 at the heart of algorithmic bias: how to check, for example, that automated decision-making is fair to both black and white communities without users explicitly disclosing their racial group.

It allows parties to encrypt and exchange enough data to discern useful information while keeping sensitive details hidden inside the computational to-ing and fro-ing. The work, led by Niki Kilbertus of the Max Planck Institute for Intelligent Systems in Tübingen, was presented this month at the International Conference on Machine Learning in Stockholm.
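To give a flavour of the general idea — and only a flavour, since this is not the protocol Kilbertus and colleagues presented — the toy sketch below uses additive secret sharing, a standard building block of secure multi-party computation. Each applicant splits a sensitive group label into two random shares, one for the company and one for a regulator; neither party alone can recover any individual's group, yet by exchanging only aggregate sums they can jointly compute per-group hiring rates. The firm "Tedium", the data and the two-party setup are all invented for illustration.

```python
"""Illustrative sketch only: additive secret sharing, one building block of
secure multi-party computation. This is NOT the protocol from the Kilbertus
et al. paper; names and numbers are made up."""

import secrets

PRIME = 2_147_483_647  # all share arithmetic is done modulo this prime


def make_shares(bit: int) -> tuple[int, int]:
    """Split a 0/1 group label into two additive shares modulo PRIME."""
    r = secrets.randbelow(PRIME)
    return r, (bit - r) % PRIME


# Toy data: group label per applicant (1 = group A, 0 = group B) and who was hired.
group = [1, 1, 1, 0, 0, 0, 0, 1]
hired = [1, 0, 1, 1, 1, 1, 0, 0]

# Each applicant sends one share to the company and one to the regulator.
company_shares, regulator_shares = zip(*(make_shares(g) for g in group))

# Each party sums its OWN shares: over all applicants, and over hired applicants.
# (Simplification: which applicants were hired is treated as public here.)
company_total = sum(company_shares) % PRIME
regulator_total = sum(regulator_shares) % PRIME
company_hired = sum(s for s, h in zip(company_shares, hired) if h) % PRIME
regulator_hired = sum(s for s, h in zip(regulator_shares, hired) if h) % PRIME

# Only these four aggregates are ever combined -- individual labels stay hidden.
n_group_a = (company_total + regulator_total) % PRIME
n_group_b = len(group) - n_group_a
hired_group_a = (company_hired + regulator_hired) % PRIME
hired_group_b = sum(hired) - hired_group_a

print(f"Hiring rate, group A: {hired_group_a / n_group_a:.2f}")
print(f"Hiring rate, group B: {hired_group_b / n_group_b:.2f}")
```

In this simplified setting the company and the regulator learn only the group-level hiring rates they need to check for bias, not who belongs to which group; the real system handles far more of the computation under encryption.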

Imagine applying for a job with the fictional firm Tedium. Applicants submit their CVs online; an algorithm sorts them to decide who gets interviewed. Tedium executives worry that…
