Image source: https://blog.prototypr.io/

Beyond Bias: Contextualizing “Ethical AI” Within the History of Exploitation and Innovation in Medical Research

Chelsea Barabas
MIT Media Lab
Dec 20, 2019 · 7 min read


On December 14, 2019, I gave an invited talk at the “Fair ML for Health” workshop at NeurIPS. Below is a write-up of that talk. You can watch the talk here: https://slideslive.com/38922104/fair-ml-in-healthcare-3?t=3848s

It’s time for us to move beyond “bias” as the anchor point for our efforts to build ethical and fair algorithms.

A couple of weeks ago, the world-renowned behavioral economist Sendhil Mullainathan wrote an op-ed in the New York Times arguing that algorithms offer us a unique opportunity to efficiently surface, and then correct for, biases that are deeply ingrained in our society. Mullainathan has built his career on conducting “bias audits” like this blockbuster study from 2003, which revealed significant racial biases in the U.S. job market. He argued that today’s data-rich world enables us to carry out similar studies far more efficiently than we could have dreamed two decades ago. He compared his early studies to his more recent experience auditing a medical triage algorithm for bias, in which he and his colleagues both identified bias within the algorithm and then significantly reduced it by removing a problematic variable from the model. This example, he argued, illustrated the…


Curator at Edgelands Institute, Steering Committee, NOTICE Coalition