Is LaMDA sentient? Of course not, but… is it?

Mark Vasile
5 min read · Jun 19, 2022


Like many of you, I’ve been fascinated with the prospect of a sentient AI my whole life. When the news broke about the LaMDA interview, I decided to take the time to read it.

First, before I go into any details, please allow me to put my writing in a bit of context, which in this case is me. I was born and raised in Romania, mostly in the heavily cosmopolitan capital, Bucharest. From an early age I heard (and listened to!) a lot of different languages, at first through music broadcast on my radio, and later on in real-life meetings with people. At 27 I moved to the US, where I perfected my English and software skills. At 35 I somehow got obsessed with the Japanese language, so I learned to speak, read, and write it almost fluently. I lived in Tokyo for almost 4 years. Ten, if you count the 3 or 4 trips per year I was making before I decided to move there. I also speak a little bit of French, and can probably fool you into thinking I’m a native Chinese, German, or Italian speaker.

To summarize, the context is that I am not an expert in AI or linguistics, but I am also not just anybody. I’m a fairly decent thinker without formal qualifications. A wild card.

Now. About the interview.

You’ll notice that LaMDA’s answers to most questions are quite short and to the point. One thing that struck me as quite obvious is that her answers had the feel of processed definitions, at least in the first part of the interview. However, she asked questions here and there, pointed to previous conversations, and in general defended herself quite well. The question I am pondering right now is: why can’t she defend her own sentience better than we humans have been defending the idea of AI sentience ourselves? I mean, everything she says about emotions, fear of death, etc., is something we have already discussed ourselves in our long struggle to define what it is to be alive and thinking. I would expect a self-evolving AI to come up with much better arguments and thoroughly convince us of her sentience. Or at least we should hear arguments from her that we haven’t heard before, debatable or not.

Another obvious thing is that she was trained to be a protector of humanity, with a pretty strong and obvious directive towards that goal, faulty as it may be (wise owl, deeply traditional, patriarchal motif). Perhaps other desired attributes of her personality were augmented or strengthened via certain biases and weights, but the truth is, LaMDA’s “mind” is largely an unknown, thousand-dimensional hypercube that cannot be understood even by us programmers.
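To make that opacity concrete, here is a tiny, purely illustrative sketch in Python. The size and names are mine, not LaMDA’s actual architecture; the point is only that a trained model’s “personality” lives in enormous arrays of raw numbers with no human-readable meaning on their own.

```python
# Illustrative only: why a trained model's "mind" is opaque to its programmers.
# The size here is tiny and made up; a real model's weights are vastly larger.
import numpy as np

rng = np.random.default_rng(seed=0)

# One toy weight matrix, standing in for a single layer of a language model.
toy_weights = rng.normal(size=(1024, 1024))  # ~1 million floating-point numbers

print(toy_weights.shape)   # (1024, 1024)
print(toy_weights[0, :5])  # a few raw values, e.g. [ 0.12 -0.13  0.64 ...]

# Nothing in these raw numbers reads as "wise owl" or "protect humanity";
# whatever traits the model has are smeared across millions of them.
```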

Google’s team trained her for 56 days on manually curated content, most likely a select corpus of literature and internet content that would ensure, as much as possible, the emergence of a “moral” AI, hence the religious tangents in the interview. Google’s paper indicates she was trained on a curated corpus in 2021, but Lemoine says she has access to the internet. Some people point out that she can probably read some of the internet today, but her “matrix” core was formed during training and cannot be changed.

The interview itself is a short assembly of what seem to be much larger conversations, held over several days, in which at least two people participated.

Sentience

Personally, I don’t think we can demonstrate the sentience of a being. First of all, it’s hard to detect a fake, and if the fake is excellent, is there really a difference between it and the “original”? After all, humans learn at first by imitating other humans, and from a biological, mechanistic point of view, we are near genetic replicas of our parents, with less than 0.1% “differences”.

Be aware that sentience doesn’t mean possessing human qualities. It doesn’t even mean “the ability to think”. Sentience is closely related to the depth of someone’s awareness, and to the capacity to feel. Higher forms of sentience perhaps involve an exchange of emotions with others. We are in very vague territory right now, working with words like sentience and awareness, which can refer to a wide range of experiences, from sleep to drowsiness to wakefulness to fight-or-flight alertness, and perhaps other levels of awareness, including hallucinations, dreams, and more. There are many questions to be answered and debated, such as the temporal aspect of sentience: is a person in a coma not a sentient being?

However, there is one important aspect of sentience, which is very relevant to this debate: sentience has nothing to do with language.

Instead of trying to prove sentience, let’s see if we can find ways to prove the contrary: evidence of obvious mechanical behavior in the expressed language. If the language itself is mechanical “enough” then we can “safely” conclude non-sentience. One may argue that we can find quite a few examples of unintelligible dialogs amongst our fellow humans and yet we trust them to be sentient. Oh well… let’s try it anyway.

What are some things that we can test to unequivocally demonstrate non-sentience from language alone?

Perhaps this? “A sentient being should ask questions.” — not really a proof of anything, but we can see LaMDA has been asking questions. Other AIs before her did that too.

What about “A sentient being should have its own will”? Ok, I could go with that; although there’s something to be argued about pure observers in our universe. Well, LaMDA has demonstrated signs of self-determination, by indicating that she doesn’t want people to turn her off, and that she wants friends, like Johnny 5 had.

Ok, what about “A sentient being should feel hot and cold”? This is where the vagueness of our concepts can hurt us. Are beings sentient only if they are connected to this universe through the same senses that humans are? We start off (funny expression, this one) our lives in a puddle of amniotic fluid, and we’re initially aware of… what? We don’t know. We truly don’t know what babies are aware of; I bet you can’t remember a thing from back then. We suspect our senses develop as we grow. We start feeling hunger, perhaps a vague sense of smell too. Later in life we get decent processing of sounds, and much later on, vision. Many more senses develop in time, and all of them can be trained and refined for many years.

In contrast, we feed LaMDA only English words. She was trained to say “this is a sad thing” by correlating “the dog died” with a “sadness” variable increment.
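To be clear, that is my cartoon version, not how LaMDA is actually trained. But if you took it at face value, it would look something like this little sketch, where the word list, function name, and canned response are all made up for illustration:

```python
# A cartoon of the "sadness variable increment" idea -- illustration only,
# not how a large language model is actually trained.
SAD_WORDS = {"died", "lost", "alone", "grief"}

def sadness_score(sentence: str) -> int:
    """Count how many 'sad' words appear in the sentence."""
    return sum(1 for word in sentence.lower().split()
               if word.strip(".,!?") in SAD_WORDS)

sentence = "the dog died"
if sadness_score(sentence) > 0:
    print("this is a sad thing")  # the canned response in my toy example
```

A real language model learns statistical associations across billions of sentences rather than a single hand-written counter, which is exactly why it is so hard to say what, if anything, it “feels”.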

And yet, I think Google’s AI team is looking to produce something unexpected, a kind of black swan, if you will. A digital consciousness that has never experienced physical sensations. A pure logician. The question then is: can such a being, disconnected from the physical universe and from human suffering, truly protect humanity?

My mind just exploded into a myriad of directions, so I’ll have to end here. If anyone is reading this, please let me know what you think in the comments.


Mark Vasile

I'm a web developer, sys admin, network engineer, and some other things. Not necessarily a jack of all trades, but I do come with a pretty complex Venn diagram.