Technology: Machine Learning

What Does Artificial Intelligence Look Like?

Hint: There’s More to It than a Floating Blue Brain

OpenSexism
3 min read · Aug 25, 2022
A banana, a plant and a flask on a monochrome surface, each one surrounded by a thin white frame with letters attached that spell the name of the objects
Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0

For the past few weeks, I’ve used DALL·E mini to generate images for the top of my Medium pieces. I enjoy using the tool, but if you look at the images below, generated from prompts that include the words ‘algorithmic’ and ‘machine learning’, you’ll see that they share common themes: blue palettes, floating brains, and illegible screen text.

The images DALL·E mini came up with are not unusual. A search for ‘algorithm’ or ‘artificial intelligence’ on Pixabay also returns disembodied brains and abstract blue textual compositions. What these representations of machine learning systems omit is us: the humans who build the technology and the ones who are affected by it. There is nothing otherworldly about the human decisions behind how data is collected and labeled, or about which problems machines are set to solve.

A woman and a man sitting in front of a computer screen, pointing at something on the screen and talking, with a colourful stencil design on the wall behind them.
Nacho Kamenov & Humans in the Loop / Better Images of AI / Data annotators discussing the correct labeling of a dataset / CC-BY 4.0

The images we see shape our perceptions. A recent study of children’s books, for example, found that stories featuring female characters solving math problems help break the stereotypes that drive women away from STEM (along with the less optimistic finding that stories pairing math with male characters significantly increased math-gender stereotypes). Last year, the New York Times reported on how women are stereotypically represented and underrepresented as leaders in marketing images. “Think about the collective impact that can have when the same things are being said over and over again,” Jane Cunningham says. Viewed through the same lens, the repeated use of disembodied digital imagery to represent ‘artificial intelligence’ perpetuates the false narrative that humans aren’t involved.

Citing research by Alberto Romele, who argues that AI ethics has neglected issues of science communication and how technology is visualized, among other studies and projects that speak directly to how technology is represented, the non-profit Better Images of AI has created a gallery of illustrations that are free to use and designed to be both informative and accessible. I use two of them in this piece.

For now, however, abstract blue representations of AI are so common that even our AIs have learned from us to generate images with these qualities.

Image-generating systems have gender biases too.

Works cited

Altman, Mara. “Yes, Marketing Is Still Sexist.” New York Times (2021).

Block, Katharina, Antonya Marie Gonzalez, Clement JX Choi, Zoey C. Wong, Toni Schmader, and Andrew Scott Baron. “Exposure to stereotype-relevant stories shapes children’s implicit gender stereotypes.” PLoS ONE 17, no. 8 (2022): e0271396.

Romele, Alberto. “Images of Artificial Intelligence: a Blind Spot in AI Ethics.” Philosophy & Technology 35, no. 1 (2022): 1–19.
