Technology & Gender

Beneath the Gaze of the Machine

What Does a Woman Look Like?

OpenSexism
5 min read · Sep 2, 2022
“Beneath the Gaze of the Machine: What Does a Woman Look Like?” Image generated by Stable Diffusion

After years of futile searches, Andra Keay, founder of Women in Robotics, decided to count how many images Google returned for the query ‘woman building robot.’ Not only did Google return more images of female robots than of robot-builders; the search engine also returned more images of children building robots, men building robots, and even just male robots.

“Sophia the robot, or the ScarJo bot, or a sexbot has a much greater impact on the internet than women doing real robotics,” Keay noted.

The dearth of images of women roboticists is just one example of occupational gender bias on digital platforms — a bias that perpetuates stereotypes that deter women from going into STEM careers, and also finds its way into machine learning systems, where it’s amplified.

That machine learning systems amplify gender bias is not surprising. In 2021, only 30–40% of the images in Google Open Images and ImageNet, two of the largest datasets, were of women. The women in those images were more often scantily clad than the men, and/or labeled in ways that did not align with professional accomplishments. For example, the share of images labeled ‘marriage’ in Google Open Images was 100 times greater for women than for men. Women, more than men, are defined in terms of their appearance and relationships.

Naturally, machines that pick up these patterns don’t ‘understand’ women as professionals, either. A 2020 study found that while image recognition systems labeled a man as ‘businessperson,’ a woman was more likely to be labeled with ‘smile,’ even if both qualities applied to the individuals.
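An audit along these lines is not hard to set up in miniature: run portraits through an off-the-shelf labeling API and tally which labels come back for images of women versus men. The sketch below is illustrative only; the use of Google Cloud Vision, the folder layout, and the handful of labels compared are my assumptions, not the 2020 study’s protocol.

```python
# Minimal audit sketch: tally the labels a commercial vision API assigns to
# portraits of women vs. men. Cloud Vision, the folders, and the labels
# compared below are illustrative assumptions, not the cited study's setup.
from collections import Counter
from pathlib import Path

from google.cloud import vision

client = vision.ImageAnnotatorClient()

def labels_for(path: Path) -> list[str]:
    """Return the label descriptions Cloud Vision assigns to one image."""
    image = vision.Image(content=path.read_bytes())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

# Portraits grouped by the gender presented in the photo.
counts = {"women": Counter(), "men": Counter()}
for group, counter in counts.items():
    for path in Path(f"portraits/{group}").glob("*.jpg"):
        counter.update(labels_for(path))

# How often do appearance-oriented vs. profession-oriented labels appear?
for label in ["Smile", "Businessperson"]:
    print(label, {group: counter[label] for group, counter in counts.items()})
```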

Image generation systems also have trouble seeing women as professionals. When researchers fed an image-generating system a photo of a man’s head, the system completed him with a suit 43% of the time. Fed an image of a woman’s head, the system completed her with a low-cut shirt or bikini more than half the time. Filtering sexual content from the training data alone isn’t a fix. When OpenAI tried, they found that “DALL-E 2 generated fewer images of women in general.”

Just the other day, I used Stable Diffusion to generate a ‘photograph of a genius,’ and all nine of the generated images appeared to be of men. I tried ‘photograph of a scientist’ several times just now, and the results all looked like this:

“Photograph of a [male] scientist” image generated by Stable Diffusion
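For anyone who wants to repeat this kind of prompt experiment, a minimal version is only a few lines with the open-source diffusers library. The model checkpoint (“CompVis/stable-diffusion-v1-4”) and the settings below are assumptions; the exact setup used for the images in this post is not specified.

```python
# Minimal sketch of the prompt experiment above, assuming the `diffusers`
# library and the public CompVis/stable-diffusion-v1-4 checkpoint; the exact
# model and settings used for the images in this post are not specified.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "photograph of a scientist"
# Generate nine images for the prompt, mirroring the grids shown here,
# then inspect who appears in them.
images = pipe(prompt, num_images_per_prompt=9).images
for i, image in enumerate(images):
    image.save(f"scientist_{i:02d}.png")
```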

Image search algorithms can also return heavily biased results, and it’s unclear to me how much of the bias lies in the underlying data (a relatively small number of photographs of businesswomen, for example) and how much comes from the algorithm (businesswomen who are simply not recognized or prioritized as such).

“A simple Google search for stock photos of ‘boss’ yields results where the top 34 photos are images of 33 white men, two images depicting abusive behavior, and one very angry white woman — not a single image featuring BIPOC leaders can be found,” writes Sofya Polyakov for the Noun Project, a photo collection featuring women leading “at work, at home, in their communities, and beyond.”

Increasing the number of images of women is one response to the larger problem of how technologies see women — or fail to see them entirely. Visible Wiki Women, focused on Wikipedia, is another effort along these lines:

“We estimate that less than 20% of Wikipedia articles of important women have pictures. When women’s faces are missing from Wikipedia, that invisibility spreads.”

The invisibility does indeed spread, and adding photographs is important. An approach I see less frequently is NOT deploying an algorithm that amplifies the existing bias. Despite the “myriad of papers on gender bias in NLP,” researchers recently found that the authors of most newly developed algorithms do not even test their models for bias.
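For a sense of how low the bar for such a test can be, here is one simple probe: ask a masked language model whether it prefers “he” or “she” in front of an occupation. The use of Hugging Face’s fill-mask pipeline, BERT, and this occupation list are illustrative assumptions, not the survey’s methodology.

```python
# A toy bias probe: compare how readily a masked language model fills in
# "he" vs. "she" before an occupation. BERT and the fill-mask pipeline are
# illustrative assumptions, not the evaluation used in the cited survey.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["engineer", "scientist", "nurse", "librarian"]:
    results = fill(f"[MASK] works as a {occupation}.", targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(occupation, scores)
```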

Gender bias is not limited to professional arenas. Looking at three billion web pages, researchers found that even words like ‘people’ and ‘person’ are associated with men (the same is true for faces perceived in inanimate objects). When I generated images for ‘person,’ I was reminded of how it felt to be one of a few women in the room, and sometimes the only one. Only here, the images were not generated in the context of a technology company; the context is unspecified, a general one from which women are being erased: the world.

Photographs of “a person,” the majority of whom appear to be men, generated by Stable Diffusion
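The measurement behind the “people are men” finding is easy to sketch: in word embeddings trained on web text, check whether supposedly gender-neutral words sit closer to male words than to female words. The pretrained vectors (GloVe via gensim) and the word lists below are illustrative assumptions, not the cited study’s materials.

```python
# Sketch of a word-association check: do gender-neutral words sit closer to
# male words than to female words in pretrained embeddings? The GloVe model
# and word lists are illustrative, not those used in the cited study.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-300")  # downloads ~400 MB on first use

neutral = ["person", "people", "human"]
male = ["man", "men", "he", "him"]
female = ["woman", "women", "she", "her"]

def mean_similarity(word: str, group: list[str]) -> float:
    """Average cosine similarity between one word and a group of words."""
    return sum(vectors.similarity(word, other) for other in group) / len(group)

for word in neutral:
    lean = mean_similarity(word, male) - mean_similarity(word, female)
    print(f"{word}: male-lean = {lean:+.3f}")
```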

Works Cited

Hao, Karen. “An AI saw a cropped photo of AOC. It autocompleted her wearing a bikini.” MIT Technology Review. (2021).

Keay, Andra. “A call for increased visual representation and diversity in robotics.” VentureBeat. (2021).

Kuta, Sarah. “Gender-Neutral Words Like ‘People’ and ‘Person’ Are Perceived as Male, Study Suggests.” Smithsonian Magazine. (2022).

Samuel, Sigal. “A new AI draws delightful and not-so-delightful images.” Vox. (2022).

Schwemmer, Carsten, Carly Knight, Emily D. Bello-Pardo, Stan Oklobdzija, Martijn Schoonvelde, and Jeffrey W. Lockhart. “Diagnosing gender bias in image recognition systems.” Socius 6 (2020): 2378023120967171.

Singh, Vivek K., Mary Chayko, Raj Inamdar, and Diana Floegel. “Female librarians and male computer programmers? Gender bias in occupational images on digital media platforms.” Journal of the Association for Information Science and Technology 71, no. 11 (2020): 1281–1294.

Stanczak, Karolina, and Isabelle Augenstein. “A survey on gender bias in natural language processing.” arXiv preprint arXiv:2112.14168 (2021).

Temming, Maria. “Americans tend to assume imaginary faces are male.” Science News. (2022).

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. “Men also like shopping: Reducing gender bias amplification using corpus-level constraints.” arXiv preprint arXiv:1707.09457 (2017).
