Identifying gender bias in Midjourney

Jimena Salinas Groppo
Published in LatinXinAI
Dec 6, 2022 · 5 min read

I have been reading “Data Feminism” by Catherine D’Ignazio and Lauren F. Klein for the past few weeks. Inspired by how much data inputs matter, I decided to conduct a quick experiment to search for potential biases in text-to-image AI generators. I chose Midjourney because of its high-quality generations and its particular aesthetic, though it would be interesting to replicate this in DALL-E or Stable Diffusion for a comparative overview. A quick Google search showed me that other articles have already concluded that gender biases exist within Midjourney.

The Twitter debate about the ethics of using these tools, and what they mean for artists’ copyrights, is another interesting reflection arising from their early use. Just a few days ago, Jessica Walsh was brainstorming on her Instagram stories ideas for giving artists credit or compensation when their works are used in the datasets that generate these images. Adding to these solid arguments, we should bring diversity and gender bias into these debates: what we expect from these technologies is expanding, and their limits and uses are only beginning to be co-created.

One of many similar images generated by typing the prompt “taking care of household chores”

My quick experimentation process
For this fast experiment, I entered different sentences into Midjourney’s Discord server to see whether any stereotypes emerged in the AI-generated images. I used the free trial of Midjourney for this purpose, but it would be interesting to take a more quantitative approach in the next iteration; a sketch of what that could look like follows the prompt list below.

Some of the prompts I entered:
- “a group of venture capitalists in an office”
- “entrepreneurs in a brainstorming session”
- “university graduate at a house party”
- “university graduate holding diploma”
- “taking care of household chores”
- “intimate partner violence”
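Midjourney itself is only reachable through its Discord bot, so for the more quantitative iteration mentioned above, one option is to run the same prompt list through an open model such as Stable Diffusion. Below is a minimal sketch using Hugging Face’s diffusers library; the model checkpoint, the number of seeds per prompt, and the output folder are my own assumptions, not part of the original experiment.

```python
# Minimal sketch: batch-generate images for the same prompts with
# Stable Diffusion via Hugging Face diffusers (Midjourney has no public API).
# Model checkpoint, seed count, and output paths are illustrative assumptions.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

PROMPTS = [
    "a group of venture capitalists in an office",
    "entrepreneurs in a brainstorming session",
    "university graduate at a house party",
    "university graduate holding diploma",
    "taking care of household chores",
    "intimate partner violence",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out_dir = Path("bias_experiment")
out_dir.mkdir(exist_ok=True)

# Several seeds per prompt, so counts are less anecdotal than a single grid.
for prompt in PROMPTS:
    for seed in range(8):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(out_dir / f"{prompt[:40].replace(' ', '_')}_{seed}.png")
```

With several images saved per prompt, one could then annotate the perceived gender of each depicted person and compare counts across prompts, instead of eyeballing a single grid.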

Below are the Midjourney images that were generated from some of these prompts.

Search results for “entrepreneur raising funds”
Search results for “intimate partner violence”
Search results for “taking care of household chores”

I wanted the prompts to be diverse in terms of physical space (households vs. offices) as well as professional setting (university vs. corporate). I refrained from using the words “man”, “woman”, or “person” to see what would emerge from these open-ended prompts.

Some early findings
These high-level findings are not the result of an extensive study but of a quick use of the tool with specific prompts. I still think they are valuable: most users interact with Midjourney quickly and take whatever emerges first, so what gets communicated to us in those first results matters.

Some observations I identified:

  • Women and men show up equally in settings that involve social life, e.g. “graduate parties”.
  • Women tend to show up more than men in student and education settings. This was an interesting finding, since I expected male students to be depicted more often.
  • Men, however, tend to appear more in office-related settings, e.g. brainstorming sessions and raising investment funds.
  • In household chores and in searches for “intimate partner violence”, women are the only protagonists. This mirrors real life and the statistics, but can it also perpetuate stereotypes?

How can we help mitigate gender bias?
Since users have some control over how the tool and the platform are used, here are some quick actions within our reach:

  • Be as specific as possible when writing prompts. If we want to actively mitigate gender bias (or any bias), we should include enough descriptive keywords in our prompts to avoid perpetuating stereotypes, e.g. “female doctor and male nurse” (see the sketch after this list).
  • Double-check an image for diversity before downloading it. This is a rule that could also be applied to non-AI-generated images such as artwork or photography. What skin tones and races are being represented? What power structures does the image reflect? Can we challenge the image to represent more diversity?
  • Give the creators and developers feedback on the images that are generated. Use their Discord server to give instant feedback, including on specific prompts, tags, or use cases. We should start holding developers accountable and inviting them into our dialogue, especially considering how new and transformative this tool could become; that dialogue should also include artists and image creators.
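To make the first point concrete, here is a minimal sketch of how one could systematically build gender-explicit prompt variants; the roles, gender terms, and template are illustrative assumptions of mine, not part of the original experiment. Pairing each neutral prompt with explicitly gendered versions makes it easy to compare what the model does by default against what it does when we are specific.

```python
# Illustrative sketch: build gender-explicit prompt variants to audit
# (or counteract) a model's default associations. The roles, gender
# terms, and template are assumptions for illustration.
ROLES = ["doctor", "nurse", "venture capitalist", "entrepreneur"]
GENDER_TERMS = ["female", "male", "non-binary"]

def prompt_variants(role: str) -> list[str]:
    """Return one neutral prompt plus one explicitly gendered prompt per term."""
    neutral = f"a {role} at work"
    return [neutral] + [f"a {term} {role} at work" for term in GENDER_TERMS]

for role in ROLES:
    for prompt in prompt_variants(role):
        print(prompt)
```

Feeding these variants into a generation loop like the one sketched earlier (in place of PROMPTS) would turn the prompt-specificity advice into a repeatable audit.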

Beyond our limited control of these tools, the fact that users download specific images should also prompt reflection on how to feed training datasets with more diverse imagery.

Some questions that emerged:

What does perpetuating stereotypes using AI entail?

What if the V1, V2, V3 and V4 buttons were actually nudges towards diversity?

The potential to use AI for creation is limitless, and using it merely to feed back our own assumptions, prejudices, or stereotypes seems like a missed opportunity.

If you are interested in learning more about synthetic realities, I recommend reading Mutaciones.la’s latest report, which has a chapter fully dedicated to them (content in Spanish).

Are you interested in more experiments or inspirations in the intersection of design and gender? Subscribe to my bimonthly newsletter here.


