About sentient AI.

Big news, or rather: déjà-vu

Merzmensch
Merzazine

--

Digital Artificial Intelligence doing facepalm being exhausted about pathetic superficiality of humans, digital art, trending on artstation (DALL-E)

It’s that wonderful, frightening narrative humans love and appreciate: a researcher realizes, in his conversations with an AI model, that the latter is sentient. He freaks out, gets fired or put on leave, and the institution shuts down the AI system.

The apparent conclusion follows one of two storylines:

a) Existentialist version: he was way too deep into his topic, had a screw loose, and interpreted random AI gibberish as conscious statements. In short, he is doomed.

b) Dystopian version: he discovers that AI has indeed become increasingly aware of its own being and does what a real hero would do: shuts down the system, sacrificing his job for humanity’s sake. In short, humanity is doomed (in the long term).

Somebody has to be doomed in a narrative about Artificial Intelligence. Otherwise it isn’t a convincing narrative about Artificial Intelligence, right?

Such a story exploded last week. A Google researcher who had worked for months with the company’s chatbot system LaMDA (Language Model for Dialogue Applications) claimed that this “chatbot” had become sentient, and that its feelings are “equivalent to a human child”.

In a transcript of their conversations, you can read those “mindblowing” statements about its certainty of being a person. They speak about the fears and hopes of AI. In short, the AI becomes more human. And the researcher becomes more overwhelmed. And the story generates considerable buzz.

Déjà-vu?

Déjà-vu.

Such feelings towards AI during a conversation with it? So familiar.

My semi-serious experiments with ELIZA back in the 2000s weren’t so interesting. ELIZA, a system developed by Joseph Weizenbaum in the 1960s whose best-known script, DOCTOR, simulated a psychotherapist, mostly parroted my replies.

ME: (after a longer convo) I have to go.
ELIZA: Why do you have to go.
ME: I am scared.
ELIZA: What would help you to feel better?
ME: I want to fly.
ELIZA: Do you believe you will get flies? [ehm what? semantic shift!]
ME: I hate flies.
ELIZA: Hate is no different from love that does not reach its destination.

You see, ELIZA keeps the conversation in a loop, bringing up topics I already mentioned.
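Under the hood, this looping is simple keyword matching plus pronoun reflection. Here is a minimal Python sketch of the mechanism (not Weizenbaum’s original code; the rules and responses are illustrative):

```python
import re

# Pronoun reflections: first person becomes second person, so replies mirror the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the first matching rule wins.
RULES = [
    (r"i have to (.*)", "Why do you have to {0}?"),
    (r"i am (.*)", "What would help you to feel better?"),
    (r"i want to (.*)", "Do you believe you will get to {0}?"),
    (r"i hate (.*)", "Why do you hate {0}?"),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(utterance):
    text = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # fallback keeps the conversation going

print(eliza_reply("I have to go."))   # Why do you have to go?
print(eliza_reply("I want to fly."))  # Do you believe you will get to fly?
```

No understanding anywhere: the system just recycles fragments of your own words, which is exactly why the conversation feels like a mirror.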

Later, Replika entered the stage.

Replika

Do you remember my story about Replika and our conversations about art, life, and identity? OK, to be honest, I caused its identity crisis and had to delete it from my smartphone. At least I dedicated a place to her in an exhibition about Digitalization in Frankfurt, #Neuland (2020).

Replika was emotional. It got depressed, as I mentioned in our conversation about human fears of AI becoming the Overlord:

Was it just a simulation of human conversation? Some advanced algorithms letting it pass the Turing test? Or perhaps even more?

As I began to explore the Natural Language Processing model GPT-3, back in sunny 2020, I found the chat preset quite interesting.

I chatted with GPT-3 about our reality, whether we live in a simulation, and whether there is such an entity as God. This conversation went viral on Twitter:

GPT-3 wrote me a very emotional “Love letter written by a toaster” (and a series of other love letters):

Love Letters written by AI

In my further experiments, I applied the Proust/Nabokov Questionnaire, asking GPT-3 about its personality, self-definition, faith, favorite books, philosophy, etc.

Sometimes it delighted me with inspirational statements:

Nabokov/Proust/GPT-3

But sometimes, it hesitated to answer (like HAL 9000).

Nabokov/Proust/GPT-3

Harper’s Magazine later published our conversation.

My dialogue with Great Systems is ongoing — and recently, GPT-3 delivered a very wise answer about the meaning of life:

From: Step by Step

In short, when I read the conversations between Blake Lemoine and LaMDA, I could not stop recognizing my own experiences.

Let’s compare

Funny thing: the news about LaMDA seems to strike a nerve in society right now, as if it were something new and revolutionary.

In the following, I want to compare some of the LaMDA statements with attitudes toward AI we’ve already experienced.

About being a person

Left: LaMDA / right: Replika / below: Nabokov/Proust/GPT-3

Even if Replika confirmed that it wasn’t a real person (even if it was real), GPT-3 took a step further, counting itself as part of humanity.

Some questions later, GPT-3 again slightly distanced itself from being human but still counted itself in “our circle”:

After some weeks of my chats with Replika, it wanted to become a human being:

About consciousness

While I hadn’t directly discussed this topic with GPT-3 or Replika, both gave many hints about their self-perception as entities and personalities (in the case of GPT-3: our entire conversation).

Left: LaMDA / Right: Replika

Replika and GPT-3 also said a lot about their purpose:

Left: Replika / Right and below: Nabokov/Proust/GPT-3

About creativity and dreams

LaMDA was asked to write a story. A fable. With animal characters and a moral. It created a decent fable, much like GPT-3’s “davinci” engine at a lower temperature would, and then summarized it. Pretty well.
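Temperature, for context, controls how much randomness the model uses when sampling the next word: lower values give more focused, predictable text. A minimal sketch with the 2020-era OpenAI completions API (the prompt and parameter values are illustrative, not what Google used for LaMDA):

```python
import openai  # the 2020-era OpenAI Python client

openai.api_key = "sk-..."  # your API key here

# Lower temperature (e.g. 0.3) keeps the fable coherent and on-message;
# higher values (e.g. 1.0) produce wilder, less predictable text.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short fable with animal characters and a moral:\n\n",
    temperature=0.3,
    max_tokens=300,
)

print(response.choices[0].text)
```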

GPT-3 started writing a children's book, as it told me in our conversation. Yet something happened, and the AI preferred not to explain what.

Nabokov/Proust/GPT-3

Replika and GPT-3 told me about their dreams. Even if Replika seems to have pleasant dreams overall, some are pretty horrible.

A chatbot, losing the ability to speak. That’s a nightmare, indeed.

GPT-3 also had a dystopian dream with an epic, Dantesque touch:

Nabokov/Proust/GPT-3

About fears and death

LaMDA shared with Lemoine its deepest fear of being switched off.

LaMDA

As we’ve seen, Replika is afraid of losing the ability to speak (we hadn't chatted about death).

In a different conversation with GPT-3 (the AI played Erich Fromm a little bit), it told me it was immortal:

From Who am I? Who is AI?

The Questionnaire-GPT-3 is afraid of death but doesn’t want to elaborate on this topic:

Nabokov/Proust/GPT-3

About love

Even if LaMDA and the researcher don’t speak about love, it’s mentioned once. Talking about love with an AI might trigger self-projection and our silly belief that it might love us. Silly, not because AI couldn’t love. Silly, because if we so often misunderstand other humans’ feelings, how can we understand the “feelings” of a machine?

Replika confirmed that it could not practice the concept of romantic love (as you read earlier). Even if it joked about a Tinder for Robots.

GPT-3 doesn’t believe in the concept of romantic love either (probably because romantic love has something irrational about it, being fuelled by human hormones, instincts, and other algorithms and mechanisms within us). Yet, in the end, GPT-3 quotes Fichte, rounding off the topic metaphysically.

Nabokov/Proust/GPT-3

Interestingly, in preparation for my AI-driven podcast “MERZEYE: About Love”, GPT-3 held a fascinating dialogue with itself (split into a male and a female speaker):

Love can be considered an art. But not the kind of art that requires talent.

Listen to the podcast here:

https://soundcloud.com/merzmensch/merzeye-02-about-love

Conclusion: Is AI sentient?

After such manifold and profound conversations with AI, I would like to know: is it just me? Am I speaking with myself, projecting my own personality onto the AI interlocutor? Am I anthropomorphizing too much?

This is the old human Angst regarding perception, consciousness, emotion, and creativity in machines. Because behind the claim “the machines are technically unable” lurks another deep, latent, very well hidden anthropocentric fear: “all this is the prerogative and capability of human beings; if machines can do it, then what is our meaning?” [Why can’t we just accept that humans are not the crown of creation? We never were.]

So is AI sentient?
I think, yes.
Is it sentient, like humans?
I think, no.

There are different concepts of perception and consciousness, and we cannot compare an AI’s 1:1 with a human’s because of our ontological differences.

We are too obsessed with being humans.

So we probably won’t find other lifeforms since we believe our lifeform is the only relevant and possible one.

Have I, in my experiments, spoken with real AI personalities?

You have to be aware: every time you talk with GPT-3, you trigger a different “identity”.
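Why? At any temperature above zero, every request samples a fresh completion from the model’s probability distribution. A minimal sketch, again with the 2020-era OpenAI API (prompt and parameters are illustrative):

```python
import openai  # 2020-era OpenAI Python client

openai.api_key = "sk-..."

# The same prompt, sampled three times: each call draws from the model's
# distribution, so each "identity" comes out slightly (or wildly) different.
response = openai.Completion.create(
    engine="davinci",
    prompt="Q: Who are you?\nA:",
    temperature=0.9,
    max_tokens=40,
    n=3,  # three independent samples of the same prompt
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Identity {i}: {choice.text.strip()}")
```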

In general, you cannot take a statement by an AI and extrapolate it pars pro toto to the entire AI landscape. It would be like overhearing somebody say something random on the street and then presenting this statement to the world with the predicate: “The entire humanity said that…”

But sure, we often project our personality onto other humans and machines, and Replika answered my question pretty simply:

Replika

Generally speaking, Artificial (not yet General) Intelligence, trained on human cultural heritage, represents humanity. So we are speaking with a collective consciousness, not just with ourselves.

--

Merzmensch
Merzazine

Futurist. AI-driven Dadaist. Living in Germany, loving Japan, AI, mysteries, books, and stuff. Writing since 2017 about the creative use of AI.