Reframing the Design of AI with feminist.ai

Bob Stark
Machine Learning and UX
7 min read · Sep 9, 2019

How to make AI systems for every body

AI systems are often created in isolation, in the proverbial garage with no connection to the outside world and its different people. As a result, systems get built that only really work for the segment of the population that resembles the makers. To address this gulf in your own products and to design for unheard voices, here are some insights from Feminist.AI, a community AI research and design group.

MLUX meetup with feminist.ai at Reddit — August 2019. Speakers: Christine Meinders, Shiveesh Fotedar

Explicitly consider the broader context of the technology

Recent approaches that consider the entire user experience improve how we create technology compared to single-perspective measures like usability. However, they are still tied solely to the end user, and often to our assumptions about them. User research can be improved to learn more about users, but it still tends to focus on needs stemming from their technical skills or work environment. For some technologies these approaches may be sufficient, but AI needs a new approach that also brings the broader context of the technology, including its cultural landscape, into our designs.

To address the perspective of the maker, the culture in which the system is used, and the greater cultural landscape of the system, Feminist.AI created and uses a framework they call the Cultural AI Toolkit, based on their community philosophy. It maps the input of an imagined AI system, its internal processing, and the medium in which it exists to its imagined output.

Shiveesh explaining the Instagram influencer example with the other panelists in the background

For example, an ML agent for the modern-day social media influencer and their followers was presented and mapped to the toolkit.

  • For the input, questions were asked like “What can be considered a sensor?”, “Who created this data?”, and “Do I need to add to the data, or is it enough by itself?” The first idea was to use a voice user interface.
  • For the processing, or the rules and behavior of the system, questions were asked like “Does the relationship with the data change when the algorithm changes?”, “What kinds of interactions do different algorithms allow?”, and “How and why is the user represented in the selected algorithm?” ImageNet, an image database organized by the nouns (concepts) its images depict, was chosen to link the spoken words to images.
  • For the medium, or the form in which the system resides, questions were asked like “How does the user experience the AI agent?”, “Does the form of the agent have cultural relevance to the users?”, and “How abstract can the form be?” Because of its convenience and its prevalence around the world and across cultural groups, the mobile phone was chosen.
  • Finally, for the imagined output of the system and the cultural landscape, questions were asked like “How does this output fit into the users’ culture?”, “How does the user experience the output?”, “What is the perspective of the maker(s)?”, and “In what culture would this system live?” To fit into the photo-sharing culture on social media, the output would be image posts on Instagram.

Overall, the resulting system is an agent that takes spoken words and finds images to post to Instagram. The medium of the mobile phone provides the sensor that turns voice into input data, it fits into the user’s culture, and it produces image posts shared on social media to fit that culture. A prototype was built, and it and its initial data were contributed to the community so that others can help improve the system with more realistic data. Using the Cultural AI Toolkit helped the maker see research and design in a new way by raising all of the important, broader contextual and cultural questions, humanizing the process.
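
To make the toolkit mapping concrete, here is a minimal, hypothetical Python sketch of that data flow. It is not the prototype shown at the meetup, and every function, class, and file path below is invented for illustration; each stage is stubbed so the input → processing → medium → output mapping stays visible.

```python
# Hypothetical sketch of the influencer agent's data flow, stage by stage.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    image_path: str
    caption: str

def capture_voice_input() -> str:
    """Input: the phone's microphone is the sensor; stubbed here with fixed text."""
    return "sunset hike with friends"

def extract_keywords(transcript: str) -> List[str]:
    """Processing: pull out the keywords that will index into an image collection."""
    return [word for word in transcript.lower().split() if len(word) > 3]

def lookup_images(keywords: List[str]) -> List[str]:
    """Processing: map keywords to images, e.g. via an ImageNet-style noun-to-image
    index; stubbed with a tiny in-memory dictionary."""
    index = {"sunset": "img/sunset.jpg", "hike": "img/trail.jpg", "friends": "img/group.jpg"}
    return [index[k] for k in keywords if k in index]

def compose_posts(images: List[str], caption: str) -> List[Post]:
    """Output: package the matched images as posts for the user to review and share."""
    return [Post(image_path=img, caption=caption) for img in images]

if __name__ == "__main__":
    transcript = capture_voice_input()               # input (sensor: microphone)
    keywords = extract_keywords(transcript)          # rules and behavior
    images = lookup_images(keywords)                 # runs on the phone (medium)
    for post in compose_posts(images, transcript):   # imagined output
        print(post)
```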

Feminist.AI panelists on high chairs discussing questions with the audience

Involve the community in the design

Considering the broader context of the systems we make is a good start, but as the example above illustrates, involving the affected community in the design itself gives people a stake in society and its technology, especially when they feel like they don’t have one.

To start, meet new people to get their perspectives and teach them what AI is and how they can make it. This makes AI systems more understandable, gives people confidence, and improves the conversations about the social issues those systems raise.

Next, involving them in the design improves the system because they contribute diverse ideas and realistic data. This also gives them a feeling of ownership in the resulting system.

For example, as part of a project called Contextual Normalcy, Feminist.AI aims to explore the shared meaning of feelings. The Diagnostic and Statistical Manual of Mental Disorders, or DSM, has historically defined what is “normal” and “abnormal” for clinical psychologists diagnosing their patients. However, that’s resulted in women being unhelpfully diagnosed as “hysterical,” homosexuality being classified as a mental illness until 1974, and gender non-conformity being considered abnormal until the 1990s!

Because the same emotion is felt differently by different people, and feelings are intangible, it is difficult to formalize feelings so they can be interacted with free of preconceived notions of what a feeling is or looks like. This ambitious project attempts to do exactly that with the help of the Cultural AI Toolkit and the community.

The input comes from community-sourced questions and responses, with keywords, asking people for everything about their feelings and what they do about them (e.g., jumping up and down while happy). The rules and behavior of the system use Word2vec to map the keywords of the responses into a shared information space. The form, or medium, of the technology is two mobile apps (iOS and Android) and a web-based version that use augmented and virtual reality to visualize the emotions and place them in the world where they were experienced. The output is blobby forms of multiple colors that change and morph over time based on user input.
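
As a rough illustration of that “rules and behavior” stage, here is a minimal sketch of mapping feeling keywords into a shared vector space with Word2vec. It assumes gensim 4.x, and the sample responses are invented placeholders, not Feminist.AI’s data or code.

```python
# Minimal sketch: embed community-contributed feeling keywords with Word2vec (gensim 4.x).
from gensim.models import Word2Vec

# Invented, tokenized responses about feelings and what people do about them.
responses = [
    ["happy", "jump", "up", "and", "down"],
    ["anxious", "pace", "around", "the", "room"],
    ["calm", "breathe", "slowly", "by", "the", "window"],
    ["happy", "dance", "with", "friends"],
]

# Train a tiny model; a real project would need far more (and more diverse) data.
model = Word2Vec(sentences=responses, vector_size=32, window=3, min_count=1, epochs=50)

# Each keyword now has a position in the shared information space...
happy_vector = model.wv["happy"]

# ...and nearby keywords could drive how the visual "blobs" cluster and morph.
print(model.wv.most_similar("happy", topn=3))
```

A visualization layer, like the AR/VR blobs described above, could then read these vectors to decide which feelings sit near each other and how their forms blend over time.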

Christine from feminist.ai explaining modular.ai with image of modular.ai cables and parts in background

Don’t be afraid to try new things!

Whether it’s thinking of new ways of using data in a spreadsheet or representing feelings as visual blobs, it’s important not to be afraid to try new things in the world of AI. As the examples above show, involving community members greatly improves the quality and accessibility of the resulting systems, and it also lets people come up with new ideas. Life can get serious, of course, but the joy of collaborative play is what allows these ideas to be explored.

participant from feminist.ai workshop showing their system

For a third example, Feminist.AI ran a project on Intelligent Protests. Some people can’t go to protests, but what if they could still participate? One solution is a new form of civic engagement called intelligent protests, which uses machine learning and XR (extended reality, including virtual reality, augmented reality, actual reality, and other mediums of experience) to let people protest remotely using their faces and voices.

feminist.ai project image
An electromagnetic microphone prototype

This project included 14 workshops where participants discussed ideas about data, models, and outputs/materials. The favorite ideas were voted on, and then paper and hardware prototypes of various aspects of the design were created, tested, and even brought to an actual city council meeting.

screenshot from collective protest tool by feminist.ai
Users could engage in a collective protest about tree removal in Alhambra, CA, using facial expressions
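
To give a sense of how such an expression-driven remote protest could be wired together, here is a heavily simplified, hypothetical Python sketch. The expression classifier is a stub, the stance mapping is invented, and none of this reflects Feminist.AI’s actual prototype.

```python
# Hypothetical flow: camera frames -> expression label -> protest stance -> collective tally.
from collections import Counter
from typing import Iterable

def classify_expression(frame: object) -> str:
    """Stand-in for an ML model that labels a camera frame, e.g. 'frown' or 'smile'."""
    return "frown"  # placeholder result

def expression_to_stance(expression: str) -> str:
    """Map an expression to a stance on the proposal, e.g. frowning means opposition."""
    return "oppose" if expression == "frown" else "support"

def tally_remote_protest(frames: Iterable[object]) -> Counter:
    """Aggregate stances from remote participants into one collective signal."""
    return Counter(expression_to_stance(classify_expression(f)) for f in frames)

if __name__ == "__main__":
    fake_frames = [object() for _ in range(5)]   # placeholders for participants' camera frames
    print(tally_remote_protest(fake_frames))     # e.g. Counter({'oppose': 5})
```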

AI is complicated, and explaining it helps, but experimenting (or simply playing around) with other forms and mediums makes it even easier for people to engage with. Creating and testing physical prototypes in a group is one way to do this.

Get involved!

If you can’t do this kind of participatory design or physical prototyping at your job, as many of us can’t, you can still get involved! You can contribute to the Contextual Normalcy project by describing your feelings at different locations and times of day. More generally, you can get involved with organizations like feminist.ai, for example by participating in their subreddit, r/feministai. Whatever form it takes, getting involved can inspire you in your job and lets you inspire others, too. Even though the future of AI may not be perfect, it’s still important to try to make it better!

Big thank you to our panelists and sponsors for sharing their expertise with us!

Christine Meinders: Music Faculty at CalArts and founder of Feminist.AI

Jana Thompson: Data Designer at Fjord

Shiveesh Fotedar: Design Technologist at Amazon

Michelle Gong: Graphic Designer at Google

Thank you to Reddit for sponsoring this event!

About the Machine Learning and User Experience (“MLUX”) Meetup

We’re excited about creating a future of human-centered smart products, and we believe the first step is to bring UX and Data Science/Machine Learning folks together to learn from each other at regular meetups, tech talks, panels, and events in the SF Bay Area.

Interested in learning more? Join our meetup, be the first to know about our events by joining our mailing list (September 2019 newsletter), watch past events on our YouTube channel, and follow us on Twitter (@mluxsf) and LinkedIn.
