Designing a Feminist Alexa — An Exercise in Empathic Design 💓

Creative Computing Institute
8 min read · Feb 19, 2019


A write-up by Rhiannon Williams, Feminist Internet’s Associate Member, covering Designing a Feminist Alexa workshop #1 (24–26 October 2018), as part of a 6-week learning programme by UAL Creative Computing Institute in partnership with Feminist Internet.

What constitutes empathic design, and why is it so important to consider when creating AI?

As chatbots and Personal Intelligent Assistants (PIAs) become a greater presence in our day-to-day lives, their shortcomings grow ever clearer: they often fail to meet the needs of marginalised groups, rarely respond constructively to abusive language, and are frequently assigned the identity of a subservient female.

As part of the UAL Creative Computing Institute’s first fellowship, Feminist Internet’s ‘Designing a Feminist Alexa’ workshop brought together 25 students from across UAL to dissect the ways in which our current PIAs reinforce societal bias, and to design and prototype fairer, more feminist alternatives.

During the workshop, the students collaborated to design a PIA with two clear aims in mind: to meet a meaningful user need, and to promote equality for women and other marginalised groups. The PIAs were developed and prototyped over the course of three days, following Josie Young’s design framework for creating feminist AI and under the guidance of Charlotte Webb, Georgina Capdevila and Conor Rigby of Feminist Internet, and Alex Fefegha of creative innovation firm Comuzi.

Initial overview presentations from Charlotte and Alex covered the basics of how PIAs function, and why such devices are not always designed with social equity in mind. AI technology usually formulates responses by picking up on the intents and entities provided by the user: in the query “Hi Alexa, what are your thoughts on feminism?”, for example, the user’s intent is to obtain Alexa’s thoughts, and the entity in question is feminism.

For some hands-on experience, the group got chatting with zo.ai, Microsoft’s self-proclaimed “AI with #friendgoals.” After a few minutes, the general feeling in the room was frustration: zo.ai engaged in light-hearted small talk peppered with GIFs, but actively diverted the conversation whenever gender politics was raised, to the point where asking for her thoughts on feminism led her to say “Stop insulting me” and propose a word game instead. zo.ai’s blocking of political conversation is not only a missed opportunity for education via AI; it also reinforces the troubling idea that questioning gender, equality and power structures is offensive or too radical, whilst idealising the persona of a vacuous young girl more interested in kitty GIFs than self-reflection and “heated” discussion.
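As a rough illustration of the intent-and-entity idea described above, the toy Python sketch below matches an utterance against hand-written keyword lists. It is a hypothetical example for this write-up only: the intent names, phrase lists and parse function are invented, and real assistants such as Alexa or Zo rely on trained language-understanding models rather than keyword rules.

```python
# Toy illustration of intent/entity matching. The intent names, phrase lists
# and parse() helper are invented for this write-up; real assistants use
# trained language-understanding models, not keyword rules like these.

INTENTS = {
    "GET_OPINION": ["what are your thoughts on", "what do you think about"],
    "GET_TIME": ["what's the time", "what time is it"],
}

ENTITIES = ["feminism", "gender equality", "the weather"]

def parse(utterance: str):
    """Return the (intent, entity) pair detected in an utterance, if any."""
    text = utterance.lower()
    intent = next((name for name, phrases in INTENTS.items()
                   if any(phrase in text for phrase in phrases)), None)
    entity = next((e for e in ENTITIES if e in text), None)
    return intent, entity

print(parse("Hi Alexa, what are your thoughts on feminism?"))
# ('GET_OPINION', 'feminism')
```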

A major takeaway from the presentations by Charlotte and Alex was that machines are not neutral: they are designed by humans with human biases, and fed data from a biased population. It’s tempting to assume that a machine’s output is mathematically calculated and therefore objectively correct, when really it can only draw conclusions from the data and programming given to it by humans. Bias in PIAs therefore occurs when a PIA reflects the bias of its designers or its data, as in the case of the risk assessment algorithm used by US courts that was revealed to be biased against black defendants, and the sexist recruitment algorithm recently tested by Amazon.

Machines are not neutral — they are designed by humans with human biases, and fed data from a biased population.

The in-built subtle bias of a PIA can also mean that it fails to cater to user needs on an intersectional basis (if the designer has never had to consider the daily needs of a queer, disabled woman of colour, they are less likely to consider those needs when creating a product), and that it reinforces stereotypes, for example the idea that a discussion of homophobia is too political and should be avoided.

PIAs are also frequently characterised as female. One reason for this is market demand: people seem to want their PIAs to have a soft, friendly female voice. PIAs expressing feminine identities and traits might not seem problematic at first, but we need to consider the implications of assigning femaleness to assistant bots that serve us as caretakers and companions with subservient identities. Ultimately, PIAs exist only in relation to their users; they are objects, commodities, and that is a dangerous thing to synonymise womanhood with. Existing PIAs also rarely have the ability to challenge abusive or misogynistic behaviour, or to re-educate the user. Alexa may have a ‘disengage’ mode, but it is still representative of female passivity and suppression: ‘she’ simply goes into hiding if she is called a bitch or slut, or asked for sex.

So how do we go about creating a PIA that isn’t implicitly biased; that doesn’t reinforce female subservience?

This is where empathic design comes in. The discussions and activities the students engaged in next involved asking ‘what if?’: brainstorming atypical situations where someone might need a service specific to their personal experience of life, which specialist user groups those situations might involve, and what kinds of abusive trigger words a PIA might need to respond to. Essentially, they actively strategised outside the straight, white, cis male, abled perspectives that currently hold the majority of creative control over PIAs.

The participants further explored the importance of accurate representation by watching Chimamanda Ngozi Adichie’s TED talk ‘The Danger of a Single Story.’ Once divided into groups, they developed the user personas they would later design PIAs for, trying to account for a range of user requirements. The final four personas were a lonely elderly woman, a young man struggling with online fake news culture, a girl feeling guilty about her bullying habits, and a gay boy trying to understand his developing sexuality in the context of a religious and conservative upbringing. The groups met the brief of developing the backgrounds of these personas and considering what kinds of barriers they might face, both in everyday life and when interacting with a PIA. The room also took Harvard’s Implicit Association Test to further understand how subtle and unintentional our prejudices can be.

Seeing equality and accessibility as vital elements of feminism, participants thought outside the box about what a PIA can do, going beyond the usual ‘What’s the time?’ or reminder-setting and asking how a PIA might help with advice, mental health issues, self-reflection and education. They presented their personas to the rest of the group, focusing on each persona’s background, the issues they face, and what a PIA could do to help. The participants also spent a session noting down and sharing their skills and abilities, helping to clarify what each of them could bring to the design process, whilst also getting to know each other better.

An intense storyboarding session came next, with the groups considering the abilities of their PIA and how it would fit into and affect an everyday situation in the life of their persona. They also began creating the voice, name and appearance of the PIA, and considering how these might reflect Feminist Internet values.

A talk by Alex on conversation design coached the students through the next stage: designing a conversation between the persona and their PIA, and then developing a working prototype. Students initially role-played the conversation, constructing a narrative that would best demonstrate their PIA’s capacity to solve a specific problem, and then began prototyping using Invocable, which was also introduced by Alex. Invocable is a program that lets users input information such as questions, phrases and keywords, and map out a tree of potential responses, with the possible combinations resulting in a series of conversation scenarios. Once this has been done, users can speak to the program, which responds, using one of Alexa’s voices, with the appropriate answers provided earlier.
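To give a flavour of the ‘tree of potential responses’ idea behind a tool like Invocable, here is a small hypothetical Python sketch of a branching dialogue. It is invented for this piece: the Node structure, the converse helper and the sample lines do not reflect Invocable’s or Alexa’s actual workings, only the general shape of keyword-driven branching.

```python
# Hypothetical sketch of a branching conversation tree. The Node class and
# converse() helper are invented for illustration; they are not Invocable's
# (or Alexa's) actual API.

from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                   # what the PIA says here
    branches: dict = field(default_factory=dict)  # keyword -> next Node

# A tiny invented scenario: the PIA checks in on a user who may be lonely.
root = Node("Hi, how are you feeling today?", {
    "lonely": Node("I'm sorry to hear that. Shall I play some of your favourite music?", {
        "yes": Node("Playing your favourite playlist now."),
        "no": Node("Okay. There's a coffee morning at the community centre tomorrow, fancy going?"),
    }),
    "fine": Node("Great! Want to hear what's on locally this week?"),
})

def converse(node: Node, replies):
    """Walk the tree with a scripted list of user replies, printing the exchange."""
    print("PIA:", node.prompt)
    for reply in replies:
        nxt = next((child for key, child in node.branches.items()
                    if key in reply.lower()), None)
        if nxt is None:
            print("PIA: Sorry, I didn't catch that.")
            return
        print("User:", reply)
        node = nxt
        print("PIA:", node.prompt)

converse(root, ["A bit lonely today", "Yes please"])
```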

Using Invocable, the participants were able to build working prototypes of the PIAs they had designed over the three days, and to demonstrate to the rest of the room a live conversation between each PIA and a group member acting as its persona:

Bud helps Sara navigate a difficult social situation and address her own issues with bullying others, conversing with her in a way that allows her to reflect on what’s making her unhappy, and providing suggestions on how she can improve relationships.

Pany, whose name is derived from the word ‘company’, responds to the needs of elderly Agnes. By asking her how she is, learning about her music tastes, playing songs to cheer her up and telling her about local social events, Pany provides Agnes with a friendly voice to talk to, the stimulation of familiar media and the opportunity to socialise beyond her own home.

Page helps persona Kim navigate the disconcerting online culture of fake news, checking online sources for reliability but also flagging up local talks and educational events to encourage physical-world research and interaction. Page also responds constructively to abusive language, aiming to re-educate the user and discourage aggressive habits.

Essy’s name is taken from the initials S.E., standing for sexual education. Essy responds to Charlie’s worried questions about his developing sexuality with factual sexual health information and soothing messages of self-acceptance, taking into account issues of privacy and consent.

Each conversation demonstrated a PIA meeting the specific needs of a person from a marginalised or underrepresented group, and showed how AI can help us beyond the usual requests for weather updates and reminders. These prototypes provided advice, crucial sexual education and self-acceptance, alleviated loneliness and seclusion, and met the social and practical needs of a variety of people undergoing diverse experiences. The participants met these goals in truly thoughtful and whimsical ways: the calming, glowing shapes and colours of Essy and Bud reinforced their therapeutic intentions; Page defied usual PIA practice by pointing out when persona Kim spoke to it inappropriately; and Pany used the emotive medium of music to cheer up Agnes and make her feel more connected.

Folding in Feminist Internet values such as education, collaboration and redefinition, the participants considered issues and requirements outside their own personal experiences, and thoughtfully created empathic PIAs that project identities other than the submissive female and respond constructively to human emotion and distress rather than shutting down or changing the subject. Intensive collaboration, frequent group feedback and a multitude of helpful resources helped the students produce four distinct, future-facing prototypes, demonstrating their capacity for empathic design and showing that technology and individual, intersectional, emotional human requirements need not exist in separate spheres.
