Does AI Really Understand?

Kentaro Toyama
Published in AI Heresy
Nov 17, 2023 · 7 min read


Image generated by DALL-E 3 in 2023: a cyber-techno figure shaped like a human head with a giant question mark on top, against a background of miscellaneous circuitry. (The author believes in the non-copyrightability doctrine of AI-generated images, so he doesn’t claim copyright over this image, and doesn’t believe anyone else should, either!)

Commentary about today’s AI often touches on the question, “Does it really understand?” Those who attempt a response tend to fall into two categories: Some say, “It’s as good as understanding, if the system can perform information processing tasks as well as (or better than) a human expert.” Others say, “No, the computer doesn’t really understand anything, and therefore, it’s not as good as a human.” I tend to agree with the first group, but in equating understanding with performance, I think they’re ignoring subtle shades of the question. Meanwhile, the latter group seems desperate to hold on to some human exceptionalism that, day by day, is being disproven by AI advances. In any case, both groups fail to acknowledge a critical ambiguity in the question. Resolving that ambiguity is essential for coming to any meaningful conclusion about whether AI can understand.

In the context of modern AI, the question of understanding goes back at least to 1980, when the philosopher John Searle published a seminal article titled, “Minds, Brains, and Programs.” In it, he concluded that a computer could never truly understand anything, or as he put it, “No program by itself is sufficient for thinking.” [i] The paper described a thought experiment widely known as the “Chinese Room,” in which Searle, a non-Chinese speaker, is passed written messages in Chinese. He is also provided with detailed manuals in English (the language he speaks) that provide instructions for processing the messages and writing Chinese responses. The instructions are, of course, very complicated, but following them results in credible Chinese-language responses, which are then returned to the sender. Searle’s claim was that a Chinese speaker interacting with the room would assume there was a person inside who understood Chinese, but in fact, Searle wouldn’t have a clue what the messages meant — all he would be doing is manipulating meaningless symbols. Searle’s conclusion, at least as it’s often represented, was that the Chinese Room was both feasible and tantamount to any AI system — all processing, but no understanding — and therefore, no computer system could ever truly understand.
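
To make the structure of the thought experiment concrete, here is a minimal, purely hypothetical sketch in Python of a "room" that produces replies by symbol lookup alone. The rule table and messages are invented for illustration; Searle's imagined manuals would be vastly more complicated than a dictionary, but the point is the same: rules in, symbols out, no grasp of meaning anywhere.

```python
# A toy illustration of the Chinese Room's structure: responses are produced
# purely by matching symbols against rules, with no model of what they mean.
# The rule table and messages below are invented for illustration only.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Return a reply by pure symbol lookup; no understanding is involved."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # To a Chinese speaker outside the room, this reply looks perfectly fluent.
    print(chinese_room("你好吗？"))
```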

Searle’s article is a cornerstone in the philosophy of AI, and many thinkers responded. Some agreed with Searle. Others said that “understanding” was in the system (the combination of Searle, the manuals, and the room), and that the system’s understanding was different from Searle’s understanding. Still others disputed the very possibility of the Chinese instruction manual. But what surprisingly few of the responses did was engage with a critical underlying question: What exactly is “understanding”? In the paper, Searle himself brushed this question aside, and I think in doing so, he failed to grapple with the crux of the Chinese Room.

Normally, we interpret “understanding” to mean the deep apprehension of the truth of a thing. When someone says, “I understand addition,” we take that to mean that they know what it means to add, that they know how to add numbers (even multi-digit numbers that require a carry), and even that they could explain it to someone who doesn’t know. Someone who doesn’t understand cannot do those things. Therefore, “understanding” is considered equivalent to an accurate grasping of the concept.

But, this interpretation is flawed. Most of us have had the experience of believing that we understand something, only to discover when quizzed about it in detail that we can’t demonstrate the understanding — which suggests that we didn’t quite understand it after all.[ii] That is, the feeling of understanding is not the same as the reality of comprehension. People can have the feeling without the comprehension. Based on personal experience, I believe people can also have the comprehension without the feeling, though that is rarer.[iii] Now, that in turn raises the question of “What is comprehension?” which deserves further consideration, but for now, let’s say that comprehension is the ability to process information related to a given topic or concept sufficiently well that one can practically apply it toward relevant tasks. So again, if you comprehend addition, you can perform addition with arbitrary numbers.
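
Since the argument leans on the addition example, here is a minimal sketch of what comprehension-as-information-processing might look like for addition: a procedure that adds arbitrary multi-digit numbers, carries included, while presumably feeling nothing at all. The function name and the string-of-digits representation are my own illustrative choices, not anything from Searle or from any particular AI system.

```python
# A minimal sketch of "understanding" addition as information processing:
# add two arbitrarily long decimal numbers digit by digit, handling carries.

def add_digit_strings(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal length
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digit_strings("478", "95"))  # -> "573", with carries along the way
```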

Distinguishing between understanding-as-a-feeling and understanding-as-ability-to-process-information is critically important when discussing understanding.

When “understanding” is used by people to describe themselves or other people, I believe they are mostly referring to the feeling. My guess is that the frontal cortex sends a signal to the conscious self that the piece of knowledge in question has crossed a threshold in the brain, a threshold that is related to the degree to which the new knowledge has been integrated with previous knowledge.[iv] When you receive the signal, or seek it by asking yourself if you understand, you experience a feeling — the sense of understanding. And, while that feeling is not a guaranteed sign of whether we can do the associated information processing, the two often seem to correlate, so in vernacular speech we tend to conflate the feeling with true insight.

When “understanding” is used to describe machines, though, we have to de-conflate. Whether an AI system has the feeling of understanding, or any feelings at all for that matter, is a philosophical and scientific question to which the honest answer for now is we just don’t know. But, whether a system can do information processing is something I believe we can test and measure, just as teachers regularly assess their students. So, we should stop asking, “Does it really understand?” and instead ask two separate questions: “Does it have the feeling of understanding?” and “Is it able to process information well with respect to a topic or task?”
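
And since the claim is that information-processing ability can be tested and measured, here is a hedged sketch of what such a behavioral test might look like for the addition example above: quiz the candidate procedure on random problems and score it, without asking anything about its inner life. The quiz design and the name quiz_addition are illustrative assumptions, not a real benchmark.

```python
# A sketch of assessing understanding-as-information-processing behaviorally:
# quiz the system and score its answers, with no claims about inner feelings.

import random

def quiz_addition(candidate, trials: int = 100) -> float:
    """Return the fraction of random addition problems the candidate gets right."""
    correct = 0
    for _ in range(trials):
        x, y = random.randint(0, 10**6), random.randint(0, 10**6)
        if candidate(str(x), str(y)) == str(x + y):
            correct += 1
    return correct / trials

# e.g., quiz_addition(add_digit_strings) -> 1.0 for a correct implementation
```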

Separate it into those two questions, and then we can have real answers. Take computer chess. “Does my smartphone’s chess app have the feeling of understanding?” I very much doubt it.[v] “Is it able to process information about chess well enough to beat the best human players?” Definitely. How about ChatGPT? “Does it have the feeling of understanding?” Again, I doubt it.[vi] “Is it able to process information about Shakespearean sonnets well enough to rival average English speakers?” It seems so!

Going back to Searle, he dismissed his colleagues’ attempts to deconstruct what he meant by “understanding,” but by failing to think it through, I think he came to a confusing conclusion. He seems to mix up the two definitions of “understanding.” (And, he applies the mix-up to a range of phrases: “thinking,” “mind,” “cognitive state,” etc.) The generous reading would be that he was saying understanding-as-a-feeling was beyond computers, but that understanding-as-information-processing was theoretically possible.[vii] In fact, the very construction of the Chinese Room seems to imply a belief that understanding-as-information-processing is mechanically possible. But, then, Searle also wrote, “The programmed computer does not do information processing.” Huh?[viii] Whatever the case, his conclusion would have been much easier to understand if he had just said, “No computer can have understanding-as-a-feeling, but some can perform understanding-as-information-processing.” Map that conclusion to AI, and it becomes a claim that I wholly buy into: “AI isn’t sentient or conscious, but it will one day match and exceed human intelligence.”

Notes

[i] Actually, he hedged his claim: Computers would not be able to understand unless they had “internal causal powers equivalent to those of brains.”

[ii] As a teacher, I will say that I see this all the time in my students. They will nod vigorously and say, “Yeah, I get it!” but then when asked to demonstrate the knowledge, say, on a test, they get basic things wrong, sometimes spectacularly. And, it’s not just a question of having forgotten some piece of information; it’s that they didn’t seem to have grokked the core logic in the first place. In my own experience, mistaken understanding occurs frequently when listening to someone speaking a language I only partially understand. I have a strong sense that I understand what they’re saying, perhaps because I recognize a couple of words here and there, only to see upon closer introspection that I really have no idea at all.

[iii] A handful of times in my life, I’ve had the feeling that I didn’t understand something completely, only to realize at some point later that actually, I knew all there was to it. Those moments were very few in number, but they have stayed with me because the moment of epiphany was both sudden and pleasant.

[iv] It seems to be the case that people have different thresholds — some people are very clear when they understand or don’t understand; others seem to make frequent mistakes about their own understanding. It also seems possible that even with a single individual, there are different thresholds based on the context. A professional soccer player, for example, might have a finely tuned sense for whether she understands different team strategies, but have a poor sense for whether she understands different aspects of calculus.

[v] But, I can’t say for sure. I also can’t say for sure that you, even if you’re a human reader, actually feel or experience anything.

[vi] And, I doubt it, even when it spits out text that says it does!

[vii] David Chalmers, the philosopher of consciousness, appears to have come to a similar conclusion about Searle’s piece: “It is fairly clear that consciousness is at the root of the matter.” By consciousness, Chalmers means, “the subjective quality of experience.” In this article, I’m assuming that consciousness is a requirement for feeling.

[viii] To be fair to Searle, he did seem to understand and consider the objection, and in doing so, referred to the fact that computers don’t understand meaning. Meaning! Meaning, I believe, is deeply tied to understanding, and therefore, also related to feelings.


Kentaro Toyama

W. K. Kellogg Professor, Univ. of Michigan School of Information; author, Geek Heresy; fellow, Dalai Lama Center for Ethics & Transformative Values, MIT.