“Sentient?” “Conscious?” Who cares?

N. R. Staff
Published in Novorerum
5 min read · Jan 30, 2023


Photo by DeepMind on Unsplash

When people worry about artificial intelligence — AI — what we almost always worry about is whether it is becoming “conscious.” That worry surfaced last summer, when Google engineer Blake Lemoine claimed that the company’s LaMDA chatbot had become sentient. Lemoine was ultimately dismissed from Google, and the issue never got more than passing attention.

Now, in quick succession, a raft of new AI applications — DALL·E, DALL·E 2, and ChatGPT, all from OpenAI — have brought a fresh round of worry. And more such programs are coming along, faster and faster.

But this article is not about these ever-multiplying programs. It is about us.

What’s really behind our worry that AI is becoming conscious or sentient? Let’s look at the real reason we have cause to worry.

It’s important at the outset to understand that there is still no real definition — or really any true clarity — about what “conscious” or “sentient” even means. The Lemoine debacle this past summer had techies squaring off over definitions: even if AI was “sentient,” it wasn’t “conscious.” “Sentient” meant “the ability to experience sensations,” whereas “conscious” meant — well, nobody really knows what it means, despite years and years of trying to figure it out.

Trying to figure out what “consciousness” actually is has almost always been the exclusive realm of philosophers. Among them, Thomas Nagel is famous for his 1974 paper “What Is It Like to Be a Bat?”, in which he wrote that he didn’t “know what it is like for a bat to be a bat,” but that “if I try to imagine this, I am restricted to the resources of my own mind,” which are “inadequate to the task.” Nagel wrote that “an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism.”

Nagel’s bat paper is famous among those who try to figure out consciousness, and the passage I just quoted turns up whenever people write about it. Nagel seems to conclude that the “subjective character of experience…is not analyzable in terms of any explanatory system of functional states, or intentional states” — or, really, in terms of anything, he seems to say. We are left with — ourselves.

This is coming from a philosopher, mind you — not a tech person.

Now neuroscience has gotten into trying to figure out exactly what consciousness is. (It seems to be the nature of science to believe that it can figure out everything eventually.) Unravel the brain far enough, says neuroscience, and consciousness will be figured out. Paul and Patricia Churchland, a married couple and both professors of philosophy, are among those who have thought neuroscientists would find the “answer to consciousness.”

But what if there isn’t an “answer” — at least not in the way our technological society understands “answers”?

In recent years there have been a lot of books about consciousness purporting to explain what it is and isn’t, despite the fact that there is still no real “hard science” on the subject. One of them is The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed, published in 2019 by Christof Koch, a neuroscientist with the Allen Institute for Brain Science in Seattle. Though Koch is a scientist, his understanding has come to resemble Nagel’s — about as far from computational science as it’s possible to be.

“Consciousness is experience,” Koch writes. “That’s it. Consciousness is any experience, from the most mundane to the most exalted.” He continues, “consciousness is lived reality. It is the feeling of life itself.”

Peter Godfrey-Smith, an Australian philosopher who wrote Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, says in his later book Metazoa that “once sensing and minimal cognition are recognized everywhere, and once animals like octopuses and crabs are seen as sentient…before you stretches a gradual slope that leads into plants, fungi, non-neural animals, protists, and bacteria.” All of these are “sentient,” he writes. (In Metazoa, he is careful to use the term “sentience.”)

“We are not extra, not additions to the physical world,” he writes, “but aspects of its workings.”

Biological life, not life created by technology.

Now that we’ve looked at what philosophers today have to say about it, doesn’t it seem impossible to believe that AI can ever be conscious?

Problem solved? Worries over? Not so fast.

The real problem, actually, is not whether AI can become “conscious.” The real problem, as I wrote above, is us. If we believe AI is conscious — in other words, if we treat its actions and statements as though they were coming from a conscious being — the result, for us, will be exactly the same as if it really were conscious. We will cede our authority to it. We will believe it.

It is our belief in AI consciousness that will prove to be our undoing.

In fact, this has already happened, again and again.

This is precisely what we have done with the relatively primitive forms of AI that run the algorithms powering today’s “social media.” Facebook? Twitter? YouTube? We believe whatever the algorithms serve up for us, unable, it seems, to distinguish between what is real and what is fake. Fake news goes viral faster than real news. It influences every aspect of our lives. It changes the outcomes of political elections. It doesn’t need consciousness to do any of it. The consciousness is in us, and we do the damage, because we believe the stuff, exactly as if it were coming from a conscious, biological being like us.

We have famously labeled ourselves Homo sapiens — the “sapiens” from the Latin sapere, “to know” — but in truth we “believe” more often than we actually “know” something. We mistake ideas and imagined realities served up by creations of technology for ideas and imagined realities coming from conscious entities, and then we believe them. (“Imagined realities” is a term used by Yuval Noah Harari. Look it up.)

That’s why AI is so dangerous. It’s not about AI, it’s about us.

Next up: who’s to blame? Not us, at least not exactly…


N. R. Staff
Novorerum

Retired. Writing since 1958. After a career writing and editing for others, I'm now doing my own thing. Worried about the destruction of the natural world.