User Research for Responsible Machine Learning

Yomna Elsayed, PhD.
4 min read · Jan 14, 2023


A depiction of two people talking about algorithms, generated by DALL-E 2 from OpenAI

With the increasing adoption of ML and AI algorithms into social media and tech in general, user researchers, especially those on Responsible AI teams, face a challenge: how do we conduct user research on algorithms? In this article, I will enumerate some of the techniques I have experimented with for conducting user research around algorithms in general and responsible machine learning in particular.

Algorithms, and machine learning algorithms especially, are by design created to do the legwork: the computations, the complex logic, and, in the case of machine learning, the predictions. This all happens in the background, with minimal user intervention, such that when there is an algorithmic outcome, it almost appears like magic, augmenting our human capabilities and sometimes doing away with them altogether.

But algorithms impact humans, and, unfortunately, due to biased training data or engineering choices, they do not impact them equally. It is therefore important to view algorithms as socio-technical features that deserve users' input and design attention just as much as any user-facing feature, in what can be described as users' algorithmic experience.

However, one of the most challenging aspects of conducting user research for ML is how to ask users about something they are not quite aware of, something designed to be seamless and unobservable: the algorithm itself. How can we study algorithmic experience when the interactions between users and algorithms are not necessarily conscious encounters?

Enter algorithmic experience, a concept coined by Shin et al. in the field of human-computer interaction, which seeks to understand how users perceive, understand, and interact with algorithms, and the ways those interactions can be improved using ethical and user-centered design principles.

As part of my work with the Machine Learning Ethics team, we dealt with one of the most challenging problems: getting users' perspectives on ML algorithms and on how their performance relates to concepts of responsible machine learning such as bias, fairness, and transparency. Algorithmic experience methods enabled us to bring the algorithms into users' awareness for critical examination and scrutiny. Here are some tips for talking to users about algorithms:

Use app walk-throughs

Ask users to open their apps and discuss what they are seeing and why they think they are seeing it. If it is a social media timeline, ask them about the first few posts: their order and their coverage. Combined with in-depth interviews, this method brings consciousness to users' encounters with algorithms that they might normally take for granted, enabling users to show, not just tell, researchers their algorithmic experiences. It therefore grounds the discussion in concrete examples and brings the algorithm to the forefront through critical examination of its performance on key ethical metrics.

Encourage users to think out loud

If you have a specific task or encounter in mind, encourage participants to verbalize their thoughts as they complete the task so you can understand their thought processes. This is usually combined with another method, such as retrospective questioning.

Ask indirect questions

Do not lead with the word "algorithm"; instead, ask participants about their experience and let the words come naturally to them as they describe their encounters. Asking indirect, open-ended questions about why participants see what they see on their personalized media can help overcome gaps in algorithmic vocabulary and encourage users to talk more freely about their experience.

Move between abstract and specific understandings

This may apply more to generative research around responsible machine learning. For example, when conducting research on "fairness", a very abstract and elusive concept, we asked users to provide us with their general understanding of fairness and then asked them to apply that understanding to their timelines. This allowed users not only to think about the algorithm in terms of their individual consumption, but also to interrogate its social context.

But why, you must be asking, especially our data gurus? Why can't we just observe users' behavior around algorithms using A/B tests and draw conclusions from there?

I have generally observed through my work on algorithmic interaction that while users cannot quite articulate what an algorithm is or how it works, they can very much feel and describe the effects it has on their experience, even describing how it impacts them socially, psychologically, or economically. An A/B test can tell us which algorithmic version performed better, but it cannot tell us why or how.
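To make that contrast concrete, here is a minimal sketch, in Python, of what a two-variant A/B test on engagement actually yields. All variant names and counts are hypothetical assumptions for illustration, not data from any real experiment.

```python
# A minimal, illustrative sketch (hypothetical numbers, not real data):
# an A/B test comparing two ranking-algorithm variants on engagement.
from scipy import stats

# Hypothetical counts: users who engaged vs. users exposed, per variant.
engaged_a, exposed_a = 5_200, 50_000  # variant A
engaged_b, exposed_b = 5_450, 50_000  # variant B

# 2x2 contingency table: engaged vs. not engaged, for each variant.
table = [
    [engaged_a, exposed_a - engaged_a],
    [engaged_b, exposed_b - engaged_b],
]
chi2, p_value, dof, expected = stats.chi2_contingency(table)

print(f"Variant A engagement rate: {engaged_a / exposed_a:.2%}")
print(f"Variant B engagement rate: {engaged_b / exposed_b:.2%}")
print(f"p-value: {p_value:.4f}")

# The test answers exactly one question: did variant B outperform
# variant A beyond chance? It cannot say *why*: whether users found
# the ranking fairer, more relevant, or merely more attention-grabbing.
# That "why" is what the qualitative methods above are designed to surface.
```

The point is not the statistics; it is that even a clean, significant result leaves the qualitative questions about users' algorithmic experience untouched.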

Knowing this should discourage social media companies from relying on users' lack of awareness as a reason to delay the delivery of responsible products. That approach is neither ethical nor sustainable given the continued growth of awareness around algorithms and the continued "felt" impact they have on people's lives. Instead, companies, with the help of user researchers, should center users in the design and development of ML models, because users are the ones who stand to pay the highest price when AI goes wrong.

[1] Light, B., Burgess, J., & Duguay, S. (2018). The walkthrough method: An approach to the study of apps. New Media & Society, 20(3), 881–900. https://doi.org/10.1177/1461444816675438

[2] Shneiderman, B., & Plaisant, C. (2005). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Pearson Education.

[3] Shin, D. D. (2023). Algorithms, Humans, and Interactions: How Do Algorithms Interact with People? Designing Meaningful AI Experiences. CRC Press.

[4] Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2), 20563051211008828.


Yomna Elsayed, PhD.

User Experience Researcher in Responsible Machine Learning