Will AI Methods Treat People with Disabilities Fairly?

MIT-IBM Watson AI Lab
Nov 2, 2018


IBM Accessibility Research has recently been exploring the topic of fair treatment for people with disabilities in AI systems. Such systems must be carefully designed to avoid discrimination against marginalized groups. Are people with disabilities at risk of being disenfranchised by AI systems, and how can we change this?

Attendees at the ‘AI Fairness for People with Disabilities’ workshop

To explore these questions, IBM Accessibility Research hosted a workshop on ‘AI Fairness for People With Disabilities’ earlier this month, as part of AI Research Week hosted by the MIT-IBM Watson AI Lab. Our workshop convened a diverse group of people with disabilities, representatives of advocacy organizations, AI specialists, and accessibility researchers and practitioners from industry, government, and academia for a day of thought-provoking presentations and conversations.

After a welcome from Ruoyi Zhou, Director of IBM Accessibility Research, we began the day with a review of the future of work and AI’s place in it from IBM’s Chief Economist and Chief Analytics Officer, Martin Fleming. Next we heard personal perspectives on fairness from five thought-leaders with different disabilities. Jutta Treviranus from OCAD University described the relationship between machine learning solutions and inclusive design, encouraging a focus on outlier individuals and those at the fringes. For AI projects, and indeed all projects developing systems intended to make decisions affecting human lives, it is critical that a broad range of user stakeholders be involved in development, including people with disabilities who can help developers think through the possible implications of the technology and test its performance on edge cases.

Martin Fleming speaks during the ‘AI Fairness for People with Disabilities’ workshop

Workshop attendees discussed fairness in employment, education, healthcare, financial services, public safety, and multimedia analytics. For example, today’s speech-to-text systems have difficulty understanding the speech of individuals who are deaf or hard of hearing; drivers may find their insurance quotes are 10 times higher if they reveal a disability; and people who use assistive technologies may be systematically excluded by automated job candidate screenings if timed responses are used to infer expertise. Without careful design of AI systems, people with disabilities could clearly be disadvantaged.

One known source of bias is a lack of representation in datasets. The World Health Organization estimates that 1 billion people worldwide have some form of disability. If datasets include adequate representation from this population, the resulting models are more likely to be effective for them.
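A simple first step is to measure how well people with disabilities are represented in the training data at all. The sketch below assumes a tabular dataset with a hypothetical disability_status column; the file name, column name, and the 15% benchmark (roughly the WHO estimate above) are illustrative assumptions, not details from the workshop.

```python
import pandas as pd

# Hypothetical applicant dataset with a self-reported disability_status column;
# the file and column names here are illustrative assumptions.
df = pd.read_csv("applicants.csv")

# Share of each group in the data, compared against an external benchmark
# (roughly 15% of the world population, per the WHO estimate above).
representation = df["disability_status"].value_counts(normalize=True)
print(representation)

# Flag under-representation before training rather than after deployment.
if representation.get("disability", 0.0) < 0.15:
    print("Warning: people with disabilities appear under-represented in this sample.")
```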

When disability information is available in a dataset, data bias can also be tackled numerically, by reweighting the data or by adjusting the outputs of models. IBM’s AI Fairness 360 Toolkit offers a range of fairness metrics and bias mitigation techniques for exactly this purpose.
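As one illustration of this kind of pre-processing, the sketch below applies AI Fairness 360’s Reweighing algorithm to a hypothetical hiring dataset. The file name, the ‘hired’ label, and the binary ‘disability’ attribute are assumptions made for the example; the toolkit’s dataset, metric, and reweighing classes are real, but any production pipeline would look different.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical tabular data: 'hired' is the favorable outcome, 'disability' is
# the protected attribute (1 = reports a disability). Names are illustrative.
df = pd.read_csv("screening_data.csv")

dataset = BinaryLabelDataset(
    favorable_label=1.0,
    unfavorable_label=0.0,
    df=df,
    label_names=["hired"],
    protected_attribute_names=["disability"],
)

privileged = [{"disability": 0}]
unprivileged = [{"disability": 1}]

# Measure bias in the raw data: a negative mean difference means the
# unprivileged group receives the favorable outcome less often.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference before reweighing:", metric.mean_difference())

# Reweighing assigns instance weights so that the favorable outcome is
# statistically independent of the protected attribute before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference after reweighing:", metric_transf.mean_difference())
```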

However, while representation in datasets is important, disability poses challenges that make it different from other protected attributes like gender, age, or race, because of the diverse, nuanced, and dynamic nature of disability itself. Disability is a mismatch between the available infrastructure and the needs of an individual. Persons using wheelchairs can get around just fine in a wheelchair-accessible environment; disability only arises when they are faced with a flight of stairs between them and their destination. Impairments and health conditions that can lead to disabilities are not only diverse, but vary in intensity and impact, and often change over time. This diversity also applies within groups that might at first seem homogeneous. A popular saying in the autism community is: “When you’ve met one person with autism, you’ve met one person with autism.”

Rather than forming a cohesive group, the disabled community includes many outliers. This poses a challenge for machine learning, which works by finding patterns and forming groups. There may not be enough individuals with a given type and severity of disability in a dataset for a machine to identify a pattern, so predictions for such individuals may be of poor quality, or unfairly negative.
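One concrete way to surface this outlier problem is to break model evaluation down by subgroup rather than relying on a single aggregate score. The synthetic example below (all names and numbers invented for illustration) shows how a model that looks accurate overall can still perform much worse for a small subgroup.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic evaluation data: most people fall into a large "majority" group,
# while a small subgroup (e.g. users of a particular assistive technology)
# has only a handful of examples. All names and numbers are illustrative.
n_major, n_minor = 1000, 12
subgroup = np.array(["majority"] * n_major + ["small subgroup"] * n_minor)
y_true = rng.integers(0, 2, size=n_major + n_minor)

# Simulate a model that is ~90% accurate overall but only ~50% accurate
# on the small subgroup, by flipping labels at different rates.
flip = np.concatenate([rng.random(n_major) < 0.10, rng.random(n_minor) < 0.50])
y_pred = np.where(flip, 1 - y_true, y_true)

results = pd.DataFrame({"subgroup": subgroup, "y_true": y_true, "y_pred": y_pred})

# Per-subgroup accuracy and sample size: the aggregate score looks fine,
# but the breakdown reveals the outlier group being served poorly.
for name, g in results.groupby("subgroup"):
    print(f"{name}: n={len(g)}, accuracy={accuracy_score(g['y_true'], g['y_pred']):.2f}")
print(f"overall: accuracy={accuracy_score(y_true, y_pred):.2f}")
```

In practice the subgroup labels would have to come from the data itself, which raises exactly the questions of data availability and protection discussed at the workshop.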

Participants in the ‘AI Fairness for People with Disabilities’ workshop, deep in conversation

By the end of the day, we had identified several important use cases for fairness in this space and underlined the importance of including a disability perspective in AI development. Attendees are now planning a number of follow-on activities exploring data availability and protection, and techniques for better handling of individuals whose disabilities make them data outliers. If you’re interested in getting involved in our AI fairness initiatives, please contact Shari Trewin.

Authored by Shari Trewin, Accessibility Research Leadership, IBM Research (trewin@us.ibm.com)
