Can You Make an A.I. That Isn’t Ableist?

IBM researcher Shari Trewin on why bias against disability is much harder to squash than discrimination based on gender or race

MIT Technology Review

--

Photo: Doug Maloney/Unsplash

By Karen Hao

A.I. has a well-known bias problem, particularly when it comes to race and gender. You may have seen some of the headlines: facial recognition systems that fail to recognize black women, or automated recruiting tools that pass over female candidates.

But while researchers have tried hard to address some of the most egregious issues, there’s one group of people they have overlooked: those with disabilities. Take self-driving cars. Their algorithms rely on training data to learn what pedestrians look like so the vehicles won’t run them over. If the training data doesn’t include people in wheelchairs, the technology could put those people in life-threatening danger.

For Shari Trewin, a researcher on IBM’s accessibility leadership team, this is unacceptable. As part of a new initiative, she is exploring design processes and technical methods to mitigate machine bias against people with disabilities. She talked to us about some of the challenges — as well as some possible solutions.

--

MIT Technology Review

Reporting on important technologies and innovators since 1899