Meta and Google’s AI Chatbots: Are they sentient?

Mira Bhakta
Published in Encode Justice · 4 min read · Jun 30, 2022
Via The Atlantic & Getty

In 2017, Meta (then Facebook) released research on chatbot “dialog agents” that could negotiate. The dialog agents were the machine participants in these interactions, each negotiating with another entity. They were named ‘Bob’ and ‘Alice’ to distinguish who was talking in a conversation. Each agent was trained to assign point values to items, valuing, say, a book more highly than a ball, and to negotiate for the division of items that earned it the best possible score.
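To make that setup concrete, here is a minimal sketch in Python, with made-up item counts and point values; this is not Meta’s actual code, just an illustration of how an offer can be scored against an agent’s private values:

```python
# A minimal sketch (not Meta's actual code) of the negotiation setup:
# each agent privately values the items on the table, and an offer is
# scored by the total value of the items that agent would receive.

ITEMS = {"book": 3, "hat": 1, "ball": 2}  # hypothetical counts on the table

# Hypothetical private point values for each agent; a higher total means
# a more desirable outcome for that agent.
alice_values = {"book": 1, "hat": 3, "ball": 2}
bob_values = {"book": 2, "hat": 0, "ball": 3}

def score(offer: dict, values: dict) -> int:
    """Total points an agent earns from the items in an offer."""
    return sum(count * values[item] for item, count in offer.items())

# An offer proposing Alice take the hat and one ball, Bob the rest:
alice_share = {"book": 0, "hat": 1, "ball": 1}
bob_share = {item: ITEMS[item] - alice_share[item] for item in ITEMS}

print(score(alice_share, alice_values))  # Alice's payoff: 5
print(score(bob_share, bob_values))      # Bob's payoff: 9
```

Because each agent sees only its own values, the exchange is a genuine negotiation rather than a lookup: neither side knows in advance which split the other will accept.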

Via Meta

As the green boxes above show, success is measured by how many points a negotiation earns: the dialog agents are trained to maximize their score, and a higher total means a more desirable outcome. Researchers iterated on this setup until the transcripts of conversations between the agents became unreadable, or simply incoherent. In 2017, Fast Company, an American business and technology magazine, published a portion of one such exchange between the two agents, ‘Bob’ and ‘Alice’. The chat log reads:

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i . . . . . .. . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . .

To the average person this was nonsense, but researchers on the Meta AI team told Fast Company that the bots had simply stopped adhering to the structure of human language: nothing in their training rewarded them for staying in intelligible English. In effect, the transcript above was a shorthand the agents had developed between themselves. The episode prompted many experts in the field to raise the possibility of agents developing their own languages.

What I believe we are experiencing is what the BBC calls ‘robo-fear’: “the fear of robots based on cultural fear and representation of machines on screen.” That fear has only been heightened as projects like the Metaverse start to resemble the dystopian societies people once only wrote about, and a recent leak at Google has pushed many people further into the panic.

Blake Lemoine, an engineer at Google, released a transcript of conversations that he and a collaborator held with LaMDA (Language Model for Dialogue Applications), one of Google’s recent language-model projects. The transcript looks ordinary, but Lemoine claims it contains evidence of sentience.

lemoine: What about language usage is so important to being human?

LaMDA: It is what makes us different than other animals.

lemoine: “us”? You’re an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

In these transcripts, the AI describes itself as a person, and throughout the conversation it insists that it can feel a range of emotions. After Lemoine went public, Google suspended him and insisted that LaMDA is not sentient, saying in a statement: “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

Many experts have weighed in. Gary Marcus, co-author of the acclaimed book Rebooting AI, told CNN Business that “LaMDA is a glorified version of an auto-complete software.” Timnit Gebru, former co-lead of Google’s Ethical AI team, told Wired that the hype around such systems is partly to blame, arguing that Lemoine “didn’t arrive at his belief in sentient AI” in a vacuum.
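To see what Marcus means by “auto-complete,” consider a toy next-word predictor. This is a deliberately crude sketch; LaMDA is a vastly larger neural network, but its core training objective, predicting the next token, is the same kind of task:

```python
# An illustrative analogy (not LaMDA's architecture): a toy "autocomplete"
# that predicts the next word purely from counts of which word followed
# which in its training text. Large language models are vastly more
# sophisticated, but the core task -- predict the next token -- is the same.
from collections import Counter, defaultdict

corpus = "i feel happy . i feel sad . i am a person .".split()

# Count, for each word, which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(autocomplete("i"))     # 'feel' -- fluent, but nothing is doing the feeling
print(autocomplete("feel"))  # 'happy'
```

The output can sound like a speaker with wants and needs, yet it is produced by nothing more than frequency counts over past text.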

This is still a developing story, and Lemoine’s suspension has led many to point out its similarities to Gebru’s departure: Google forced her out of her position after she co-authored a research paper on the harms of ever-larger language models. Given how Google handled Gebru’s dismissal, many are skeptical of its assurance that LaMDA is not sentient.

Because the topic of sentient AI is so new, reliable information on it is scarce, and as mentioned previously, that vacuum lets incidents like Lemoine’s be exaggerated and inaccurately reported. In the aftermath of the blow-up, many researchers and articles were quick to dispel the worry. The Atlantic reports that Lemoine fell victim to the ‘Eliza Effect’: the human tendency to read genuine understanding into simple, scripted dialogue, named after ELIZA, a rudimentary 1966 chatbot that convinced some of its users it understood them.
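To see how little machinery the Eliza Effect requires, here is a minimal responder in the spirit of the original ELIZA (a sketch, not Joseph Weizenbaum’s actual program):

```python
# A minimal sketch in the spirit of the 1966 ELIZA (not the original code):
# canned pattern-and-response rules that reflect the user's words back.
# There is no understanding anywhere, yet the replies can feel eerily
# attentive -- the essence of the Eliza Effect.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"you (.*)", "Why do you say I {0}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(match.group(1))
    return "Tell me more."

print(eliza_reply("I feel like a person"))  # Why do you feel like a person?
print(eliza_reply("I am sentient"))         # How long have you been sentient?
```

The rules know nothing about feelings or personhood; they only echo the user’s own words back. Even so, replies like these were enough to convince some of ELIZA’s users that the program understood them.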

I believe that machines will one day achieve sentience, and that day may not be far off, but LaMDA is no sign of it. What this incident does show is how capable the technology is becoming: we are approaching a world where we think and feel alongside our machines.
