Q&A: Robin Brewer on Machine Learning & Disabilities

People + AI Research @ Google
Jan 26, 2022
Illustration for Google by Georgia Webber: a hand-drawn portrait of Robin Brewer, a woman of color wearing a red scarf

Opinions in PAIR Q&As are those of the interviewees, and not necessarily those of Google. In the spirit of participatory ML research, we seek to share a variety of points of view on the topic.

Note: Individual people may choose either people-first or their preferred identity-first labels for describing disability. In this post, people-first language is used as the discussion is about people in general, not individuals.

Robin Brewer is an assistant professor in the School of Information at the University of Michigan, where she uses human-centered approaches to study how machine learning can overlook the needs and interests of disabled people and older adults.

Robin talked with PAIR’s former writer-in-residence David Weinberger. This Q&A has been collaboratively edited with her.

How is machine learning doing, in terms of helping to improve user experiences for people with disabilities? Getting better? Getting worse? Still not paying enough attention?

Robin: There are some things that the ML community is starting to do right, thanks to its inclusion of more interdisciplinary perspectives, including disability studies and human-computer interaction scholars. Some of the challenges about bias and representation are starting to include disabled people and older adults. That’s good, because ML hasn’t worked well for these communities.

For example?

Robin: Many facial recognition models have been biased against people with atypical facial expressions; the models misread them. There are also models designed to identify pedestrians that fail to detect people using wheelchairs. Or, language models that get tripped up by atypical speech patterns.

But, it’s starting to improve. For example, at Google, Project Euphonia is studying how to improve speech recognition for atypical speech, and Project Activate is researching ways to leverage different facial expressions to help people navigate their phones. HireVue has modified the video-based component of its job applicant assessment software because it turned out that people who engage with different social cues — they might not make eye contact, or they move their hands repetitively — could be perceived by the ML model as uninterested or not paying attention.

Social cues like those must also vary by social and cultural groups.

Robin: Yes. A lot of the challenges related to disabilities or age are similar to the challenges with machine learning used in various cultures or even different groups within a culture. If we can address this issue for one type of group, it can help mitigate that problem with other groups.

This seems to bring ML’s treatment of disabilities under the broader umbrella of issues of bias and fairness.

Robin: Disability justice is one of the tenets of a movement that centers disabled people as people with full agency, competent to make decisions, to serve as leaders, and the like. Disability justice is not focused on machine learning, but it’s interesting to think about in that context because it means we don’t turn solely to researchers and developers to solve ML’s problems with treating disabled people fairly. Rather, it means that we as researchers and developers working on ML apps involve and engage with disabled people so they can tell us how they want ML to function for them. This is a type of participatory design and research that is increasingly being called for by the PAIR team and across Google’s Responsible AI and Human-Centered Technology organization, among others. It means designing not just for a group but with a group, reflecting the phrase “nothing about us without us” from the disability community.

And for people who are members of multiple groups simultaneously.

Robin: Yes, one of the tenets of disability justice is intersectionality. Black, brown, or queer disabled people likely experience AI systems differently than other groups. How can we start to think about systemic inequity that affects AI? I think the best way to consider intersectionality in models is to give historically minoritized groups input into design and into auditing how systems work over time.

So, where should we begin?

Robin: A lot of the conversations recently have been about how bias happened. Often we think there was something wrong with the data the model was trained on. That’s often the case with ML biases against disabled people. Disabled people are very often under-represented in the training data. So, if we have more data about disabled people, the assumption is there will be less bias, which is partially true. The same is true for biases against older people. Or younger people, for that matter. If we don’t have enough data or data that’s representative enough, there are going to be some mishaps and biases because ML is about understanding patterns found in data.

However, there are also problems in how we get data. Right now, researchers often scrape data from the Internet, and that readily available data fails to be representative.

What would count as “diverse” or “representative” in these cases?

Robin: That’s debatable. It could mean census-representative, but that would prioritize representation in a particular location. It could mean having a range of disabilities, but how disability is experienced can manifest very differently even within one type of disability. For example, a visual disability can mean low vision, blindness with light perception, blindness with no light perception, tunnel vision, and more. However, what’s clear is that disability and age have to be included in data for researchers to make a decision on how to use that data.

But this has to be hard for machine learning, doesn’t it? Pretty much by definition, social cues are fairly standard across a population, which is why they work as cues. But within the population of disabled people, the cues can vary quite a bit: people with Parkinson’s, autistic people, or people who have limited vision may each diverge from eye-contact norms in different ways and for different reasons.

Robin: Discussing norms and disability is complicated because disability can manifest itself in so many different ways. As such, how we represent disability can vary. We could focus on increasing representation and labeling people in datasets, but there are also very real privacy concerns about sharing certain types of data. For example, if I am the only autistic person who is a Latina woman in my zip code, I would be easily identifiable. Those concerns are especially strong if you’re in the minority of a population, and especially if you’re in a small minority.

So with such a diversity of signals in a small part of the population, how can machine learning learn what it needs in order to avoid unacceptable bias?

Robin: This is a task that specific types of machine learning, such as deep learning, could help with. Rather than us telling a model what buckets we want it to sort things into, as we typically do with machine learning, deep learning could treat people less as fitting into a box and more as falling on a scale.

That seems to align nicely with a better societal approach to people with disabilities.

Robin: Correct. It recognizes that disability is diverse and that we should treat it as such. Similar strategies have been proposed for creating personas, or fictional stories about participants in research, where we move away from a single persona to persona spectrums that represent permanent, situational, and temporary disabilities. Increasing awareness that disability is on a spectrum would also affect how we collect and label data about disability. The same goes for age. In our recent paper published at AIES, we discuss how using large categories to represent age ranges, such as 50+ or 65+, is not useful because someone who is 50 or even 65 is very different from someone who is 90.

What sort of things have you heard disabled people ask for in these conversations?

Robin: The biggest ask of disabled people is around agency. How can they have better control over how their data is used, or better control over how the AI application is used? How can we center disability in these considerations?

Could you share an example?

Robin: Some of my research has been about agency with regard to autonomous vehicles [AVs]. Being able to travel is a big marker of independence. The problem with transportation at the moment is that you have to be able to see to operate a vehicle. So, how could an AV be a way for blind people, or people with low vision, to get from point A to point B? The auto manufacturers think they should create fully automated systems that people have no control over, rather than asking the rider what to do. But when I spoke to blind and low-vision people, they said they wanted to have control over their environment and to be in control of the vehicle in non-visual ways, such as through speech or touch. The AV manufacturers need to try to understand how disabled people want to exercise agency.

Overall, what do you think machine learning developers can learn from the very diverse community of people with disabilities?

Robin: Developers can learn how to challenge their assumptions and expectations about disability communities in ways that will improve how products are designed and used long-term. Disability is different from other representations of people because it is a spectrum. Disability cannot be categorized into neat buckets. For example, someone can have multiple disabilities and each may not be formally diagnosed. Disability can also change over time, for example with progressive disabilities. To try to make a category for each type of disability is futile. We have to learn how to represent people in more holistic ways.

People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.