Is ChatGPT Ableist?

As an autistic person, I found this experience really alienating

Dr. Casey Lawrence
SYNERGY

--

Photo by KOBU Agency on Unsplash

Theoretically, artificial intelligence shouldn’t have prejudice. AI shouldn’t be racist, misogynistic, or homophobic. However, AI (or what we’re calling AI right now, when we actually mean “language generation models”) doesn’t “think” for itself. Rather, it regurgitates the human-made material it was trained on, and if that material contains biases, it can replicate those, too.

Twice a month, I put together a list of AI-generated fiction writing prompts for a series I run called “AI Made Me Do It.” My first set of prompts for June was based on generating “red flags” for character flaws. It took a few tries to get ChatGPT to understand what I wanted; I’ve generated all sorts of prompts before, and some take more work than others.

ChatGPT really struggled with this assignment. Many of the “red flags” it suggested throughout the process were actually symptoms of mental illness, autism, or OCD, and some were even phrased in genuinely hurtful ways. For example, ChatGPT began to spit out prompts like these:

The character frequently engages in intense, obsessive hobbies that border on the eccentric, such as meticulously cataloging and organizing thousands of bottle caps or spending hours deciphering complex codes and…
