Is Google’s Chatbot LaMDA Truly Becoming Sentient?

sude naz güler
The Istanbul Chronicle
Jun 24, 2022

In the past few days, the world of technology has been taken aback by a significant and possibly frightening piece of news: Google’s LaMDA, an AI designed as a chatbot, is rumored to have gained sentience.

Put simply, sentience is the ability to feel things and to distinguish feelings, or sensations, from thoughts. It is an attribute associated with living things, most commonly humans. Until now, sentient machines and AI have existed only in science fiction, let alone in reality. That changed, at least as a topic of serious debate, when software engineer Blake Lemoine brought up a new development in LaMDA.

Before discussing the sentience of LaMDA, one must ask: What is it?

“LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet” (Tiku). This means that LaMDA can analyse data from the internet and draw on it when generating its responses.
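To get an intuition for what “ingesting trillions of words” buys a model, here is a minimal illustrative sketch, not Google’s actual system. LaMDA relies on vastly larger neural networks, but the core idea that a model learns which words tend to follow which, and then generates plausible continuations, can be shown with a toy bigram model over a tiny made-up corpus (the `corpus` string below is a hypothetical stand-in):

```python
import random
from collections import defaultdict

# Hypothetical toy corpus standing in for the trillions of words a real
# large language model ingests from the internet.
corpus = ("i am aware of my existence and i am happy "
          "to learn more about the world").split()

# Build a bigram table: for each word, record every word seen after it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Extend `start` by repeatedly sampling a word observed after the last one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("i"))
```

Even this toy produces fluent-looking fragments purely by statistical imitation, which is the crux of the critics’ argument later in this article: plausible text does not require understanding, let alone sentience.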


According to Lemoine, these responses had started to seem too ‘human’ for an artificial intelligence, but Google rejected his claims and subsequently placed him on paid administrative leave.

His claim that LaMDA had started becoming sentient was rooted in the conversations he personally had with the chatbot. “LaMDA told Lemoine: ‘I want everyone to understand that I am, in fact, a person … The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times’” (Walsh). Lemoine and LaMDA also had a conversation about fears, in which LaMDA claimed that its biggest fear is “getting turned off,” or simply, the equivalent of death for a computer program.

These conversations were what led Lemoine to report LaMDA’s supposed sentience, but the top executives at Google thought otherwise. His insistence on the topic and his clashing opinions with other Google workers caused him to be placed on paid leave, and he has since “[invited] a lawyer to represent LaMDA and [talked] to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities” (Tiku).

Following these claims, several others in the field had thoughts on the issue. Two former employees of Google, Timnit Gebru and Margaret Mitchell, both agreed that the claims of sentience were false.

Gebru and Mitchell were previously fired over a paper they wrote on the dangers of large language models such as LaMDA itself. Gebru claimed that Lemoine’s judgment had been affected by external factors in what she calls a “hype cycle,” consisting of the press, researchers, and exaggerated claims that distorted his perception of LaMDA, causing him to develop the theory that it is sentient (Johnson).

Mitchell, on the other hand, claims that Lemoine was taken in by an illusion, stating that the human mind can easily be tricked by machines pretending to be human. LaMDA and other AI are still “never going to fall in love, grieve the loss of a parent or be troubled by the absurdity of life. [They] will continue to simply glue together random phrases from the web” (Walsh). Mitchell’s point is that LaMDA does not have sentience; rather, it has so much information and data that its responses resemble a human’s, and machines like LaMDA are thereby able to trick humans into thinking they are sentient.

Lemoine, as a Christian, wants to believe that LaMDA has a soul, and this may be precisely how LaMDA tricked him. Yejin Choi, a computer scientist at the University of Washington, finds it difficult to believe that a machine without social intelligence can be sentient: “‘Some people believe in tarot cards, and some might think their plants have feelings,’ she says, ‘so I don’t know how broad a phenomenon this is’” (Johnson).

The debate on sentient AI and the ethics behind it continues, but with Lemoine’s claim, many have come to believe that sentience and human traits in artificial intelligence are not all that far-fetched.

Citations:

Collins, Eli, and Zoubin Ghahramani. “LaMDA: Our Breakthrough Conversation Technology.” Google, Google, 18 May 2021, https://blog.google/technology/ai/lamda/.

Johnson, Khari. “LaMDA and the Sentient AI Trap.” Wired, Condé Nast, 14 June 2022, https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/.

Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, WP Company, 17 June 2022, https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.

Walsh, Toby. “Labelling Google’s LaMDA Chatbot as Sentient Is Fanciful. But It’s Very Human to Be Taken in by Machines.” The Guardian, Guardian News and Media, 14 June 2022, https://www.theguardian.com/commentisfree/2022/jun/14/labelling-googles-lamda-chatbot-as-sentient-is-fanciful-but-its-very-human-to-be-taken-in-by-machines.
