No, large language models aren’t like disabled people (and it’s problematic to argue that they are)

Emily M. Bender
16 min read · Jan 21, 2022

--

tl;dr

There’s a tendency I’ve observed where people trying to argue that language models “understand” language draw analogies to the experience of disabled people (especially blind and Deafblind people). These analogies are both false and dehumanizing. A recent blog post by a Google VP provides a particularly elaborated version of this. In this post, I lay out why and how it is problematic with the goal of helping readers to spot & resist such harmful and flawed argumentation.

Introduction

In a blog post last December, Blaise Agüera y Arcas asks “Do large language models understand us?” The post as a whole reads as a puff piece for Google’s LaMDA system (also demoed at Google I/O 2021), a chatbot built on a very large language model (and a lot of training data), fine-tuned to provide “sensible” and “specific” responses in multi-turn conversations. To my knowledge, there are (as yet*) no academic publications on LaMDA, nor is the model available to people outside Google to inspect and test. From what little information is available, it appears to be fundamentally a language model in the sense that its primary training data consist only of text (with no signal as to the meaning of the text). Per Agüera y Arcas’s blog post, the secondary (fine-tuning) training data are ratings by humans of how “sensible” and “specific” system responses are.

Agüera y Arcas’s blog post interleaves gee-whiz examples of interactions with the system (cherry-picked? who knows) with philosophical musings about whether large language models (LLMs) can be said to share various properties with humans, whether such questions are even answerable, and whether they should instead be viewed in terms of how humans relate to LLMs. Agüera y Arcas asserts that “for many people, neural nets running on computers are likely to cross this threshold [from ‘it’ to ‘who’] in the very near future.”

He says “My goal here isn’t to try to defend an ultimate position with respect to these imponderables,” and though it’s not clear if he includes the question in the title among the “imponderables”, he proceeds to insist on the answer that LLMs do understand language, with argumentation that boils down to:

  • an unwarranted shifting of the burden of proof (e.g. “it’s unclear how we can meaningfully test for the ‘realness’ of feelings in another, especially in a being with a fundamentally different ‘neurophysiology’ from ours” and “Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an ‘it’ becomes a ‘who’” [emphasis added]);
  • presupposing (and never establishing) his conclusion (e.g. “Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from […]”);
  • unwarranted assumption of analogy between so-called “neural nets” and human brains; and
  • dehumanization of people who have experienced various forms of oppression, including especially enslaved people and disabled people.

There is a lot that can be critiqued in this post, but I won’t try to do it all here.

I definitely have a professional interest in the question in Agüera y Arcas’s title, and have published on it (with Alexander Koller) at ACL 2020 (“Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data”).

Regarding the unwarranted assumption of analogy, I refer the reader to “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor” by neuroscientists Alexis T. Baria and Keith Cross. Among other things, they point out that this metaphor “afford[s] the human mind less complexity than is owed, and the computer more wisdom than is due” (p.2).

Regarding the ways in which Agüera y Arcas’s argument is dehumanizing towards enslaved people, Meg Mitchell has articulated the main points wonderfully clearly in a thread on Twitter.

Here, I’m going to focus in particular on the problematic ways in which Agüera y Arcas adduces the lived experience of actual and hypothetical disabled people.

(Mitchell also touches on this in her tweet thread.)

Author positionality & disclaimers

I want to say upfront that I do not identify as a disabled person — to the extent that I have disabilities (e.g. myopia & hyperopia) I am provided with fully normalized accommodations and do not experience barriers. Furthermore, this is a blog post and not a fully researched piece of academic writing. I am not sufficiently familiar with the disability studies literature (though I know it exists) and thus am not drawing connections to that literature as should be done in a proper academic response here. I am also trying to avoid speculating about the lived experience of people with disabilities I do not share, but if I fail in that and get things wrong, I welcome corrections.

What Agüera y Arcas gets right

Agüera y Arcas makes some observations that I think are (likely) well supported by the literature and furthermore key to understanding how human cognition, human language learning, and human communication work. Among other things, he says:

“Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind.”

“mutual modeling is so central to dialog, and indeed to any kind of real relationship”

“Our trick might be that we learn from other people who are actively teaching us (hence, modeling us)”

“This socially learned aspect of perception is likely more powerful than many of us realize; shorn of language, our experiences of many sensory percepts would be far less rich and distinct.”

“the inherently social and relational nature of any kind of storytelling”

“It’s obvious that our species is primed to [ascribe personhood to machines] from the way so many children have freely projected personhood onto stuffies, or even favorite blankets, long before such artifacts were capable of talking back.”

Yes, humans are highly social. Yes, our experience and learning is not just embodied but also socially situated. For example, Clark (1996; Using Language) describes it as joint activity, where all participants are mutually aware of the activity, of each other’s roles in it, and of each other’s awareness. Baldwin (1995) (p.132) delightfully evokes “intersubjective awareness” as follows:

“And, of course, it is just this aspect of the joint attention experience — intersubjective awareness — that makes simultaneous engagement with some third party of such social value to us. It is because we are aware of simultaneous engagement that we can use it as a springboard for communicative exchange.”

Rather than rehashing here what the child language acquisition literature has to say about how this ties in to human language learning, especially first language learning, I’ll refer the reader to Section 6 of Bender & Koller 2020 and the references we cite there.

Once we’ve learned a language, we can use it, together with a kind of projected intersubjectivity, to communicate with (and make sense of communication from) other humans who are remote in space and/or time. When we encounter synthetic language, we use this same capability to create a model of mind even if there is no mind behind it. (That doesn’t mean that there’s a mind there — but I’m getting ahead of myself.)

In other words, Agüera y Arcas is also right that we are primed to ascribe emotions, motivations, and minds to objects we know to have none of these. The experimental work of Heider & Simmel (1944) shows that their participants were willing to attribute personality characteristics to shapes and construct a narrative based only on movements in a short animated film:

Alt text: This is a stop-motion animated video with no sound. There is a white background against which there are four shapes: a large open box, a solid circle and two solid triangles. The motions of the circle and two triangles appear to reflect emotions and intents, though they are obviously just shapes. Here is a description of the first 15 seconds of the video: At the start of the video, there is one triangle inside the box and part of the box is open. The triangle moves to that open spot and the box closes. Then two more shapes (a smaller triangle and a circle) arrive, moving in sync towards the box. The circle stops and the small triangle moves around some more. The big triangle exits the box (with the box opening in the same spot as before) and approaches the small triangle, coming close enough that one of its corners touches the small triangle. The small triangle moves so that its corner touches one of the sides of the larger triangle, and away, and then back again repeatedly, and then moves back. The large triangle then moves quickly towards the small triangle and the small triangle moves away once the large triangle touches it. The story continues like that for one minute and five seconds total.

As Meg Mitchell points out (e.g. in this TWiML episode): if we’ll do that much interpretation of just shapes, how much more do we do with language?

These facts — that relationships are central to human experience and that we are primed to imagine minds in inanimate objects even knowing they aren’t there — are important context for the discussion below. The (real) disabled people that Agüera y Arcas cites are (it should go without saying!) fully human and thus live in networks of relationships to other humans. The fact that we’re primed to imagine minds where there are none is one factor that can lead researchers off the rails when trying to argue for a lack of distinction between machines and (some?) humans.

[Edit 2/2/22: It has been pointed out to me that the phrase “theory of mind” has been used in ableist ways to dehumanize autistic people and that this section can be read as saying that forming relationships in the way that neurotypical people do or learning language in the way that neurotypical people do are necessary conditions for being human. I want to be clear that this is not the case. Autistic people are people; fully human people.]

Unexamined, unsupported analogies between neural nets and human neurophysiology

Before getting to my main point, I want to spend a little time with the ways in which Agüera y Arcas cavalierly assumes that neural nets share properties with human brains, because this anthropomorphization gives a veneer of plausibility to his harmful analogies to disabled people. Through rhetorical sleight of hand, he invites the reader to assume that neural nets are sorta kinda human, and so it is not so far-fetched that (some) humans might be (more) like them. Taking some quotes from his essay:

“it’s unclear how we can meaningfully test for the ‘realness’ of feelings in another, especially in a being with a fundamentally different ‘neurophysiology’ from ours”

The “beings” in question here are language models. To say that they have “a fundamentally different ‘neurophysiology’ from ours” is to presuppose the false claim that they have a neurophysiology in the first place (scare quotes or no).

“Fundamentally, concepts are patterns of correlation, association, and generalization. Suitably architected neural nets, whether biological or digital, are able to learn such patterns using any inputs available.”

Here, again, Agüera y Arcas is presupposing that there is a class of things (neural nets) which has two subclasses (biological and digital), which share key properties (being able to learn patterns, which, according to Agüera y Arcas, are concepts). He is not arguing for this position, nor even asserting it, but introducing it into the discourse as a presupposition, and counting on his readers to accept it in order to ‘keep up’ with the essay. The fact that digital ‘neural nets’ are called that isn’t a reflection of their inherent nature, but rather a metaphor — the scientists who designed neural nets took inspiration from mid-20th century understanding of neurons. 21st century scientists should know better than to assume that the name is indicative of actual equivalence.

“Neural activity is neural activity, whether it comes from eyes, fingertips, or web documents”

Here, Agüera y Arcas invites the reader to equate neural activity experienced by humans, using our senses of sight or touch or our ability to read, with something that happens in a language model when it processes web documents. But what is “neural activity” in a language model? What is the evidence that it bears any connection at all to neural activity in humans?

“Knowing what we now know, it would be hard to claim that a biological brain can encode or manipulate these patterns in ways that a digital neural net inherently cannot.”

“Knowing what we now know” is apparently a call out to some kind of scientific literature, but no citations are provided. This sentence seems to assert that we have hard evidence against the claim that brains are different in this way from digital neural nets. (I rephrased from Agüera y Arcas’s “biological brain” to just “brain”, because “biological brain” suggests that there are other kinds, another unwarranted presupposition.) But what is it that we know that supports this assertion? It seems that the burden of proof lies squarely with those who would like to claim that their artificial constructs are equivalent in some way to actual brains.

Spurious analogies to disabled people

In his effort to convince the reader that large language models (LLMs) can understand language (and possibly could even soon have internal lives and experiences), Agüera y Arcas calls on both the imagined experiences of a fictitious person with a “constellation of disabilities and superpowers” and the writings of actual individuals who are/were blind (Daniel Kish) or Deafblind (Helen Keller).

Regarding the fictitious person he writes:

“Though it’s a stretch, we can imagine a human being with a very odd but perhaps not inconceivable constellation of disabilities and superpowers in a similar situation. Although extremely well-read, such a person would be deaf and blind, have no sense of touch, taste, or smell, be totally dissociated from their body, be unable to experience visceral responses, and have total amnesia (inability to either form or recall episodic memories about their own life, living in what has poetically been called a ‘permanent present tense’).”

How, exactly, is this person supposed to read (“well-read”)? How are they, with no way of sensing the world or other people in it, supposed to form the relationships that would let them learn language and learn from other people? If the idea is that the reading etc all happened before the catastrophic event that left the person “totally dissociated from their body”, then the analogy breaks down. LaMDA never had such experiences.

Under the heading “modality chauvinism” (one place among many where he appropriates the language of people pushing back against oppressors to stand up for … language models?), Agüera y Arcas calls on the experiences and writing of Daniel Kish and Helen Keller, to argue that no one sense (i.e. sensory system) is required for humans to develop concepts. This is a strawman argument: when I (and others) argue that LLMs aren’t understanding language, we’re not saying that they aren’t understanding because they lack specific senses. We’re saying that they aren’t understanding because nothing in their training regimen provides a way to learn the relationship between linguistic form and meaning. Humans gain access to meaning (conventional, linguistic meaning and communicative intent), as they are learning linguistic systems, through intersubjective awareness, which in turn relies on our senses, but isn’t specific to any one of them. (Again, for more details, see Bender & Koller 2020.)

So, the experience of disabled people is irrelevant to the argument and invoking these people’s experience as if it were is problematic. The problem here is the same as the one Mitchell pointed out regarding Agüera y Arcas’s argument based on “personhood” (in the eyes of those in power) being extended to enslaved people:

Agüera y Arcas is making the analogy “LLMs are like Deafblind people”, ostensibly to show how LLMs are more like people. But he hasn’t shown (and, I’d argue, can’t) that LLMs are like people, with internal lives, relationships, and full personhood. So the analogy ends up dehumanizing Deafblind people, by saying they are like something that is patently not human, and saying so specifically because of their disability. And even if you believe that LLMs might be somewhat human-like, the analogy is still dehumanizing, in saying that Deafblind people are closer to these (ostensibly) partially human-like objects than other people. And that, in turn, suggests that non-Deafblind people are more fully human.

Beyond this fundamental issue, the way in which Agüera y Arcas invokes (and apparently misinterprets) specific individuals’ experiences is also problematic.

Agüera y Arcas quotes Daniel Kish, a blind man who uses human sonar, as saying:

“We know from other studies that those who use human sonar as a principal means of navigation are activating their visual brain. It’s the visual system that processes all of this, so vision is, in that sense, occurring in the brain.

“It’s flashes. You do get a continuous sort of vision, the way you might if you used flashes to light up a darkened scene. It comes into clarity and focus with every flash, a kind of three-dimensional fuzzy geometry. It is in 3D, it has a 3D perspective, and it is a sense of space and spatial relationships. You have a depth of structure, and you have position and dimension. You also have a pretty strong sense of density and texture, that are sort of like the color, if you will, of flash sonar.”

… and then asks: “So, neither eyes nor light are required for vision; the brain can learn to use other inputs. How far can one take this?” But: Kish was referencing studies on the brain’s visual system, showing that the visual system can be activated by other senses (which then produces a sense of vision). Agüera y Arcas provides no evidence that LLMs have anything analogous to a visual system.

He also quotes Helen Keller, first inaccurately describing her as “born both blind and deaf”. According to Keller’s own autobiography, she became blind and deaf as the result of an illness at 19 months old. It seems incredibly disrespectful towards Keller to make such assumptions about her experience and (apparently) not even bother to verify them. He provides two quotes from Keller. First:

“People often express surprise that I, a deaf and blind woman, can find my greatest enjoyment in the out-of-doors. It seems to them that most of the wonders of nature are completely beyond the reach of my sealed senses. But God has put much of his work in raised print […]”

Agüera y Arcas seems to willfully misinterpret this quote, writing “This last rather beautiful turn of phrase refers both to the tactile nature of the world, and to Braille specifically — that is, the central role of text in Keller’s universe.” and thus inviting the reader to imagine that Keller’s experience of the outdoors is mediated by reading language (written in Braille). The implied analogy here is to language models “appreciating” the great outdoors by “reading” what humans have written about it. But Keller is describing her own, direct, experience of being outdoors and how she personally, directly, enjoys it — through her senses, including her sense of touch. (And while sighted people like me might not notice so keenly, the sense of touch isn’t limited to our fingertips: we also experience the world through what it feels like to set our feet on different surfaces, not to mention the feel of wind or sunshine on our skin, and so on.)

The second quote concerns color:

“[…] for me, too, there is exquisite color. I have a color scheme that is my own. I will try to explain what I mean: Pink makes me think of a baby’s cheek, or a gentle southern breeze. Lilac, which is my teacher’s favorite color, makes me think of faces I have loved and kissed. There are two kinds of red for me. One is the red of warm blood in a healthy body; the other is the red of hell and hate. I like the first red because of its vitality. In the same way, there are two kinds of brown. One is alive — the rich, friendly brown of earth mold; the other is a deep brown, like the trunks of old trees with wormholes in them, or like withered hands. Orange gives me a happy, cheerful feeling, partly because it is bright and partly because it is friendly to so many other colors. Yellow signifies abundance to me. I think of the yellow sun streaming down, it means life and is rich in promise. Green means exuberance. The warm sun brings out odors that make me think of red; coolness brings out odors that make me think of green.”

In response, Agüera y Arcas writes: “This passage should give pause to anyone claiming that LaMDA couldn’t possibly understand ‘redness’.” But this reduces Keller’s experience, which, like that of any other human, includes relationships and embodied experience of the world, to that of a language model with access only to linguistic form. Even if the model were understanding the linguistic forms it is fed (as Agüera y Arcas persistently assumes), this comparison says that Keller (and other Deafblind people) don’t have experiences of their own of things like earth mold being friendly or feeling cheerful, but only know these things second-hand, from the words of others.

Why ask these questions?

In deciding what it means to say that a machine “understands” or “has experiences” or “has feelings”, it’s worth asking why we’re asking that question. What would a “yes” mean and what would follow from it?

If the point is to build effective and trustworthy technology, then I think these are the wrong questions to ask. In that scenario, we should be asking questions like: What do we want this system to do? How do we verify that it can carry out those tasks reliably? How can we make its affordances transparent to the humans that interact with it, so that they can appropriately contextualize its behavior? What are the system’s failure modes, who might be harmed by them and how? When the system is working as intended, who might be harmed and how?

If the point is to learn something about human cognition, then we might ask questions like: What makes this system analogous to humans, in what way, and what are the limits of the analogy — and how can it support the reasoning about human cognition we are interested in?

Agüera y Arcas’s essay, however, promoting as it does a Google product, seems more focused on asking these questions because a “yes” answer (or a “you can’t prove it to be no” answer) makes the technology look impressive. But whose interests does that serve?

Takeaways

I think the one valuable thing we can learn from this essay is to use it as an object lesson in how not to draw on the experiences of disabled people, imaginary or real. To be clear: the lived experiences of disabled people are highly relevant to the development of technology, including technology developed specifically for disabled people and technology that is designed to be general purpose. Currently, disabled people are underrepresented in technology research and development, which means the field is lacking critical perspectives. Researchers without the lived experience of disability should absolutely be looking to the expertise of disabled people, but that is not what Agüera y Arcas did.

Agüera y Arcas isn’t the only one to try to draw such problematic analogies (I frequently see it on Twitter when people want to argue that LLMs are or might be “understanding”), but it is particularly elaborated in his essay, including not only hypothetical people but two actual individuals. Agüera y Arcas even quotes their own descriptions of their lived experience and then misinterprets them to suit his purpose of bolstering the idea that LLMs might just be deserving of the personhood he seems so eager to ascribe to them — and to encourage others to ascribe as well. It is my hope that in highlighting this aspect of the essay, and attempting to describe where it goes off the rails, I have made it easier for readers to identify this type of problematic analogy when they spot it (or feel the urge to make it themselves).

Acknowledgments

With thanks to Leon Derczynski, Alex Hanna, Cat Hicks, Sébastian Hinderer, Timnit Gebru, Haben Girma, and Alexander Koller who gave me valuable feedback on earlier drafts of this blog post. Thanks also to Meg Mitchell for both valuable feedback and permission to quote her tweets.

*) Just as I was finishing this post, I saw that an arXiv publication (Thoppilan et al. 2022) about LaMDA has gone up.

Note on Jan 23, 2022: Since publishing this, I have updated the writing to reflect identity first language (“disabled people”) throughout. As originally written, it had a combination of identity first language (e.g. “Deafblind people”) and person first language (“people with disabilities”). From what I have read (e.g. this guidance), there is variation in how people prefer to be described, but especially among blind and Deafblind people, identity first language appears to be more frequently preferred. My thanks to Liz Jackson for pointing out the importance of this.


Emily M. Bender

Professor, Linguistics, University of Washington · Faculty Director, Professional MS Program in Computational Linguistics (CLMS) · faculty.washington.edu/ebender