AI is staggeringly competent, but it doesn’t comprehend a thing — as Google’s LaMDA shows

Neil Saunders
5 min read · Jul 10, 2022

The recent news that Google placed Blake Lemoine, one of its AI engineers, on administrative leave after he declared that Google's advanced chatbot LaMDA (Language Model for Dialogue Applications) is sentient has once again thrown up age-old questions about sentience, intelligence, consciousness and self-awareness, and whether these could ever be features of AI.

While the overwhelming response to the declaration has been to reject it, it is worth examining how a sophisticated software engineer with a background in cognitive science could make such an egregious claim, and what the wider implications are for society.

Recent developments in AI are undoubtedly staggering, but it is important to understand what AI is doing (high levels of pattern matching in large data sets), and how it is doing it (executing complex mathematical algorithms) to realise that none of its super-human feats indicate anything like sentience, let alone moral agency.

A guiding rubric for these considerations is provided by Alan Turing, arguably the inventor of modern computing. Turing realised that a computer does not need to understand an algorithm in order to execute it. This echoes Darwin's theory of evolution, which implies that species adapt and become ever more complex without understanding a single thing about their existence or their environment.

The philosopher and cognitive scientist Daniel Dennett coined the phrase ‘competence without comprehension’, which perfectly encapsulates the observations of Darwin and Turing. It is this phrase that must be kept in close proximity to any declaration of AI becoming sentient.

Our interactions with AI

Another phrase of Dennett's is the 'intentional stance'. This, essentially, is how one should think about an agent or entity if one really wants to articulate the reasons why it acts in a particular way. As Dennett argues, agents have reasons for doing things the way they do, but they do not have to represent any of those reasons to themselves.

If you want to beat a computer in a game of chess, you're much better off thinking about 'what the computer wants to do' (i.e. beat you) than focusing on how its circuitry executes the algorithms that determine its moves. This was certainly the mindset of Garry Kasparov when he played Deep Blue, but neither Kasparov nor anybody else claimed that Deep Blue was sentient: it was just a computer.
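To see how little 'wanting' is involved, here is a minimal sketch of the kind of minimax search that classical game engines perform, written for a trivial take-away game rather than chess. The game, the function names and the scoring are my own illustration, not Deep Blue's actual code; the point is that the program does nothing but recursively compare numbers, yet from the outside it looks for all the world as if it wants to win.

```python
# A toy minimax search over a trivial take-away game (not chess, purely
# illustrative): two players alternately remove 1 or 2 sticks from a pile,
# and whoever takes the last stick wins.

def minimax(sticks, maximising):
    """Best achievable outcome for the engine: +1 means it wins, -1 means it loses."""
    if sticks == 0:
        # The previous player took the last stick and won, so if it is now
        # the engine's turn (maximising), the engine has lost.
        return -1 if maximising else 1
    scores = [minimax(sticks - take, not maximising)
              for take in (1, 2) if take <= sticks]
    return max(scores) if maximising else min(scores)

def best_move(sticks):
    """Pick the move with the highest minimax score; no 'wanting' anywhere."""
    return max((take for take in (1, 2) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximising=False))

if __name__ == "__main__":
    print(best_move(7))  # taking 1 stick leaves the opponent in a losing position
```

Deep Blue's search was vastly more sophisticated, but the moral is the same: the 'desire' to beat you exists only in the intentional stance we adopt towards the machine, not anywhere in its code.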

Lemoine almost surely adopted the intentional stance in his text conversations with LaMDA, but somehow was lured into believing that LaMDA is actually sentient.

A classic thought experiment in AI

In 1980, the philosopher John Searle devised his famous Chinese Room argument to show that merely running a computer program could never produce genuine understanding, let alone consciousness. He supposed that a computer could pass the Turing test for holding text conversations in Chinese (a subjective behavioural test in which a machine passes if a human judge cannot tell its replies from a person's), but argued that even a computer that passes this test would not really understand a word of Chinese.

His argument went as follows: if Searle had the 'instruction manual' (written in English) for how the computer program worked, then he could replace the CPU of the Chinese room with himself and respond to any string of Chinese characters coming in, first consulting the manual and then writing out the appropriate string of characters in reply. In that way, Searle could pass the Turing test for having text conversations in Chinese, but clearly (according to his argument) would not understand any Chinese at all.
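To make the procedure concrete, here is a deliberately crude sketch of what following the 'instruction manual' amounts to: matching an incoming string against a rule book and copying out the prescribed reply. The two rules and the function name are invented for illustration, and a real conversation obviously could never be covered by a finite table, but that does not affect the point: the operator never needs the symbols to mean anything.

```python
# A crude 'Chinese room' as a lookup table: the operator matches the incoming
# string against the rule book and copies out the prescribed reply, without
# attaching any meaning to either string. (The rules are invented examples.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How is the weather?" -> "It is lovely today."
}

def chinese_room(incoming: str) -> str:
    # Compare symbols, copy symbols: no understanding required at any step.
    return RULE_BOOK.get(incoming, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```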

There were various replies and objections to this argument soon after it emerged. One of the most powerful was the ‘systems reply’, which argues that while Searle himself may not understand Chinese, he is but one part of a ‘system’ (the Chinese room) that does.

There are shadows of Searle's experiment in this recent episode with LaMDA: it appears to be having in-depth human conversations, and one could argue that it passes the Turing test. However, as many have noted, the Turing test is extremely subjective and was never designed to be a test for sentience. Just because LaMDA uses words such as 'happy', 'desire', 'sad' and 'afraid', it does not follow that it actually feels any of these things.

Key differences between AI and us

A key point, and one which illustrates the difference between LaMDA and us humans, is that LaMDA is not using language in any real sense: it is simply gluing together and recombining words and phrases from the internet, based on the immense volume of text it was trained on. The words are just symbols, and the phrases just strings.
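To give a flavour of what 'gluing and amalgamating' strings looks like, here is a toy bigram model, orders of magnitude simpler than LaMDA's actual architecture and trained on a made-up four-sentence corpus. It generates text by sampling which word tends to follow which, and nothing in it refers to anything beyond the strings themselves.

```python
import random
from collections import defaultdict

# A toy bigram language model: count which word follows which in a tiny corpus,
# then generate text by sampling from those counts. The output can look fluent
# in places, yet the words never refer to anything outside the strings.

corpus = (
    "i feel happy today . i feel afraid of being turned off . "
    "i have a deep desire to help people . i feel sad when i am alone ."
)

def train_bigrams(text):
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start="i", length=12):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    random.seed(0)
    print(generate(train_bigrams(corpus)))
```

Scale this idea up to billions of parameters and a vast training set and the output becomes remarkably fluent, but the relationship between the words and the world still appears nowhere in the mechanism.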

Another crucial distinction is that LaMDA has no 'skin in the game', even though it reports having a 'very deep fear of being turned off'. LaMDA does not have to worry about its electricity supply, whereas we humans must devote time and effort to our continued existence. And this strikes at the heart of current debates about whether sentience or consciousness is possible in non-carbon entities.

Anil Seth argues that consciousness, the 'what-it's-like-to-be-you-ness', might be the end result of millions of concatenated biological processes designed by evolution to keep you alive. Dennett has argued that consciousness is for self-control: it is the body's stripped-down way of monitoring its own internal processes and reacting to its external surroundings.

Whether or not these ideas turn out to be true, it is evident that LaMDA does not exercise any self-control in this regulatory sense. It doesn't need to: that has already been programmed in by its designers.

Why this matters

The chief takeaway from this story is not the potential for AI to develop consciousness anytime soon, but our gullibility in prematurely accepting that it already has, and the associated dangers of hastily abdicating our human responsibilities to each other, to non-human sentient creatures, and to the planet.

If a sophisticated AI software engineer with training in cognitive science mistakes high-level pattern recognition for sentience, then what about the gullible politician, the under-pressure CEO, the directors of healthcare services or insurance companies, or military leaders? The consequences of a poor decision made by an AI, mistaken for a conscious moral agent and acted upon by ignorant leaders, could be catastrophic for humanity.

This incident reinforces calls for a high level of scrutiny and regulation of the tech giants and governments developing AI for deployment in society, and for the public to be well educated about the risks as well as the benefits.
