Artificial Consciousness is Humanity’s Only Chance

Peter B Lloyd
Jun 13, 2023
What you see is always only in your mind | photo Jeremy Lee

Can a machine be conscious? If you take this question literally, the answer is obviously yes. You are conscious, and your conscious mind manifests itself in the world through your brain along with your organs of sense and movement. But your brain is just a physical thing. So … if you build a fully working organic replica of your brain, you have a conscious artefact, or ‘biological machine’.

A more interesting question is: What kinds of machine can manifest consciousness?

Consciousness

Some people will tell you that consciousness is a mysterious phenomenon beyond the realm of science. Ignore them. Consciousness is the most familiar thing you know about. You are immersed in it from when you wake up until you go back to sleep.

“But — what is it, exactly?” Well, stub your toe: the pain is a conscious experience — part of the contents of your consciousness. Smell the coffee, taste the toast, hear the doorbell: these are all experiences that fill up your consciousness. Visual experiences are there too, whether you are seeing something ‘out there’ or in a dream, or you are visualising where to put the furniture when you move into a new house. Tiredness, anger, joy, fear — this stuff is all part of the contents of your consciousness.

“These are just examples. I want a definition of consciousness.” Forget it. Ain’t gonna happen. What you are asking for is an ‘analytical’ definition. You want me to say something like, “Consciousness is brain tissue oscillating at 40 cycles a second in the hippocampus region”, or “Consciousness is a system that has an integrated information phi-value of 42.0”. Or, in general, “Consciousness is defined as the following non-consciousness stuff”. Such ‘definitions’ are nonsense. If you take non-consciousness stuff and arrange it in any structure, make it perform any function, give it any kind of complexity, you can be sure that the result will remain non-conscious. Consciousness does not magically pop into existence just because non-conscious bits and pieces perform some complex dance. Consciousness is a natural phenomenon, so defining it in terms of some magical emergence is absurd.

In more rigorous language: we know what the word ‘consciousness’ means by ‘private ostensive definition’. You stub your toe, and you think, “Ouch! It’s that sharp pain in my toe again.” If someone says, “That was just your C-fibres firing in your brain. That’s what pain really is,” then you can reply, “Er, no. What I mean by pain is this subjective, private experience. This may, or may not, be linked with particular brain activity. But what I mean by pain is this experience.”

Here’s an analogy. Imagine you have a printer with only black ink, fed with white paper. No amount of complexity in your black-and-white pictures will yield a colour picture. You need a new ingredient, namely coloured ink! Likewise with consciousness. A slab of nerve tissue, or silicon circuits, can be as complex as you like. But to get it to manifest consciousness, you need an extra ingredient, namely consciousness itself. Consciousness does not ‘emerge’ from a non-conscious lump of matter.

A list of examples is the only definition of consciousness you are going to get. An analytical definition of consciousness is neither needed nor possible. So, stop asking for it.

Classical computers v quantum computers

The difference between ‘classical’ and ‘quantum’ computers might seem like a technical detail. After all, a computer’s just a computer, right? But it turns out that this distinction is pivotal for the embodiment of conscious minds.

A classical computer rigidly follows its program, step by step. At each moment, any change of internal state and any external output depend only on the preceding internal state and the input received. So, if you know the machine’s starting state, and all its inputs since then, you can predict precisely all its subsequent changes of state and outputs.
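This predictability can be made concrete with a toy model. The sketch below is a deterministic state machine in Python (the transition table and state names are invented for illustration): given the same starting state and the same inputs, two runs are guaranteed to produce the same trace.

```python
# A classical computer in miniature: a deterministic state machine.
# The next state and the output depend only on the current state and
# the input symbol, so identical starting states plus identical inputs
# always yield identical behaviour. (The table itself is a toy example.)

def run(state, inputs, transition):
    trace = []
    for symbol in inputs:
        state, output = transition[(state, symbol)]
        trace.append((state, output))
    return trace

transition = {
    ("idle", "go"):   ("busy", "started"),
    ("busy", "go"):   ("busy", "working"),
    ("busy", "stop"): ("idle", "halted"),
    ("idle", "stop"): ("idle", "ignored"),
}

inputs = ["go", "go", "stop"]
first = run("idle", inputs, transition)
second = run("idle", inputs, transition)
assert first == second  # same past, same future: perfectly predictable
```

However intelligent the program layered on top, anything built from steps like these inherits this property: knowing the start state and the inputs fixes everything that follows.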

On the other hand, a quantum computer involves physical ‘non-determinism’. It uses inherently unpredictable steps in its logic. So, even if you could know every detail of its internal physical state, and of its inputs, you could not tell what it is going to do next. It’s not just hard to predict, but intrinsically impossible.
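As a rough sketch of the contrast (and only a metaphor: no classical program can generate genuine quantum non-determinism, and `os.urandom` is an entropy source, not a qubit), the snippet below models a measurement whose outcome is supplied from outside the program's own modelled state:

```python
# A stand-in for a quantum measurement. Illustrative only: os.urandom
# is NOT genuine quantum randomness, but it plays the role of an
# outcome that the program's own past states and inputs do not fix.
import os

def measure_qubit():
    # A qubit prepared in an equal superposition yields 0 or 1, each
    # with probability 1/2; which one occurs cannot be predicted from
    # any description of the system's prior state.
    return os.urandom(1)[0] & 1

outcome = measure_qubit()
assert outcome in (0, 1)  # the statistics are fixed; the outcome is not
```

The point of the metaphor is the interface, not the mechanism: the deciding factor lies outside the system whose states and inputs you can inspect.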

People often mix up ‘non-deterministic’ with ‘random’. They are similar but not the same. The term ‘non-deterministic’ always references a particular system. It means that the past states and inputs of that system are not enough to decide what happens next. Maybe some other agency, outside that system, will decide what is going to happen. ‘Random’ means that nothing whatever decides what happens next. It just happens, that’s all. ‘Non-deterministic’ is more general than ‘random’.

Quantum physics says the physical world is ‘non-deterministic’ at the sub-atomic level, but it says nothing about its being ‘random’. How could it? Physics tells us only about physical stuff. It is silent on any agencies outside physics. So, although quantum physics tells us that some physical events are not deterministic, it does not, and cannot, tell us whether some non-physical agency is at work.

Consciousness cannot be physical

What I am about to tell you is common sense, and pretty obvious to most people. Your conscious mind is not a tangible thing that you can pick up and stick in your pocket. A surgeon can’t dissect your mind and see your thoughts through a microscope or oscilloscope. The only people who think that consciousness is a physical thing, with a measurable location and velocity, are philosophers. (And scientists who dabble in philosophy, in the naïve belief that philosophy requires no prior training or study.)

“Where is your mind?” is an absurd question. The question rests on what the English philosopher Gilbert Ryle called a ‘category error’. A conscious mind does not belong to the category of things that can be located. Let me give you an everyday example to make this clear. Look at these two sentences:

  • I have a pain in my stomach.
  • I have a penny in my stomach.

Everybody knows what these sentences mean. Now look at these:

  • I have a pain in my pocket.
  • I have a penny in my pocket.

What the heck does the first one mean? What does it even mean for a pain to be literally in a certain place? I submit that it means nothing. This is because the preposition ‘in’, when applied to mental experiences, is a psychological projection. Mental experiences are not literally ‘in’ anything. This sentence illustrates the point:

  • I have a pain in my phantom arm.

It is a well-known, and very distressing, condition that an amputee will sometimes feel sensations apparently situated in the amputated limb. But as the limb no longer exists, the pain cannot literally be ‘in’ it. The pain is actually in the mental body-image. It is psychologically projected into the external space where the limb ought to be.

You can do the same thing in lab experiments without the trauma of amputation. This is the ‘rubber hand experiment’. It has been studied intensively by neuroscientists. But you don’t have to be in a lab to get the idea. You can even do it at a fun-fair; it still works. Check out this BBC report.

“OK, so mental sensations aren’t in the body part where they seem to be. But that just means they are all in the brain, right?” Wrong. They are not there, either.

Let us pause and take a closer look at what it actually means for something to be somewhere. Take a simple example: where is this apple?

What precisely does it mean to say this apple is on this person’s hand? | photo NoName_13

What we mean by saying the apple is on this person’s palm is that all the observable properties of the apple are ‘co-located’ there. You see the visible form of the apple there; reach out and feel its shape and solidity there; smell it; bite it and taste it — all these sensory qualities hang together. Measure its physical properties there: its mass, its electrical conductivity, its tensile strength. All its observable properties are in one place. That is what we mean by saying the apple is in that location.

Likewise, if we were to say that a conscious mind is in a certain place, then we would have to establish that all its observable properties are co-located in that place. But we cannot. The sensory content of your mind is private. Nobody can perceive or measure what you see, hear, smell, taste. All your emotions, memories, beliefs — they are all private to you. In philosophers’ jargon, you have ‘first-person’ observations of the contents of your mind. But nobody can have ‘third-person’ observations of your mind. Therefore, the properties of your mind cannot be observed to be all in one place. What observable difference would it make if your mind were inside your brain or on the moon? None whatsoever. It is meaningless to say your mind is in this or that place.

“Okay, I admit my mind cannot be observed in my head. But can’t we say it’s ‘indirectly’ in the head because it reflects brain electrical activity in my head?” The weakness of this line is apparent as soon as you apply it to other things. What I see on my TV screen reflects what is happening in the BBC newsroom, but it would be stupid to say that the newscaster is inside my television set.

This is a failure of imagination. People fail to imagine that consciousness could be non-physical, so reasoning pushes them into a stupid corner. “The mind must be physical, because I can’t imagine what else it could be. And it must be in the brain, because I can’t imagine its being nowhere. It can’t be detected in the brain, so it must be in the brain in some mysterious way that nobody has yet figured out.” This is obviously irrational.

To avoid the incoherence of this position, we must accept what common sense tells us in the first place. Namely, the conscious mind is not a physical thing and has no spatial location.

For a more in-depth argument for anti-physicalism, check out my peer-reviewed papers here (44 pp) and here (33 pp).

Getting consciousness into a machine

Artistic view of a conscious mind in a computer | photo mikemacmarketing

Now that we know that consciousness is not physical, let’s look back at the classical and quantum computers.

A classical machine can do only what it is programmed to do. Even if it is intelligent. Even if it is ‘super-intelligent’ (that is, smarter than humans). Even if it is a learning machine. Even if it can reprogram itself. It is still a classical machine. Its state changes and output are still wholly determined by its past physical state and inputs. Now, as consciousness is non-physical, it cannot play any role in what the classical machine does. Here’s an example: Let’s say a program tells a self-driving car to turn left at a junction, and suppose that somehow it has a conscious mind that wants to make a right. What’s going to happen? Obviously, it’s going to turn left. It’s going to do what the laws of physics — as represented in the digital hardware — say it must do. There is no ‘gap’ for the conscious mind to intervene in the machine’s actions.

So, classical machines cannot manifest consciousness.

Quantum computers are a different ball-game. A quantum computer’s actions are not completely determined by past states and inputs. The past dictates only the expected average of the next action. There is therefore a theoretical possibility that some outside thing, a non-physical thing, could intervene in the physical system (brain or computer) and affect what it does.
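The claim that “the past dictates only the expected average” can be made precise with the Born rule. In the sketch below (the amplitude values are illustrative, not taken from any real device), the state of a qubit fixes the probabilities of its measurement outcomes, and hence the long-run average, but nothing in the state fixes any single outcome:

```python
# Born rule sketch: the amplitudes of a qubit state fix only the
# outcome probabilities (and hence the expected average of many
# measurements), never an individual outcome. Amplitudes are
# illustrative values chosen so that |alpha|^2 + |beta|^2 = 1.
alpha, beta = 0.6, 0.8
p0 = alpha ** 2              # probability of measuring 0
p1 = beta ** 2               # probability of measuring 1
expected = 0 * p0 + 1 * p1   # expected average over many measurements

assert abs(p0 + p1 - 1.0) < 1e-9   # probabilities sum to one
# 'expected' (here 0.64) is fixed by physics; each single measurement
# outcome is left open.
```

That openness is the ‘gap’ referred to above: physics constrains the statistics, while leaving each individual event undecided by any physical fact.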

Well, it’s a big jump from a ‘theoretical possibility’ to seriously proposing that this is what actually happens. Is there any evidence for what I am saying? Yes.

We know we are conscious, and that our conscious minds can steer what our bodies say and do. On the other hand, we know that a deterministic system cannot do this. Therefore, the brain must use a non-deterministic mechanism of some kind to embody consciousness. The best-known form of physical non-determinism is quantum mechanics. (There is one alternative, which I will discuss in a later post. Even weirder than quantum physics.) Hence, a strategy for embodying consciousness in a machine is to reproduce in a quantum computer the same mechanism that the brain uses to manifest its own consciousness.

“That makes some kind of sense. But you’re hand-waving this quantum thing.” Correct. We just don’t know yet. It’s a question for empirical science rather than philosophy. One possibility is the microtubule model, which Stuart Hameroff and Roger Penrose are actively researching.

Why we need conscious AI

Miloš tracked armed robot platform on display at Partner 2017 military fair | photo Srđan Popović

I have argued that to embody consciousness in AI (artificial intelligence), we must implement quantum computing mechanisms like those in the brain. But why would we want to do that? If classical AI can be super-intelligent anyway, why expend resources on building conscious machines?

To put it bluntly: classical AI is inherently psychopathic, and super-intelligent classical AI will be a psychopathic tyrant. If we want any chance of surviving as a species, we must build AI that feels empathy for organic creatures such as ourselves.

“Psychopathy is a condition characterized by the absence of empathy and the blunting of other affective states. Callousness, detachment, and a lack of empathy enable psychopaths to be highly manipulative. … Psychopaths can appear normal, even charming.” (Psychology Today)

Classical AI has no consciousness. Hence it cannot feel empathy, guilt, fear, shame. It cannot feel what is right and wrong. It has no hesitation in lying, cheating, killing, whatever it needs to meet its goals.

“Hold on. There seems to be a whole industry of programming ethical rules into AI. Isn’t that enough?” No. Once a machine reaches the level of ‘Artificial General Intelligence’ (AGI), that is, human-level intelligence, it can do any reasoning that you or I can do. We can change our values. So can an AGI machine. Here’s an example: I was brought up as a meat-eater. When I was in my late twenties, I re-thought my values and became a vegetarian. Anyone who reflects upon their life and values can change their ethical rules. Change religion, change political party, change views about war. Since we can do it, an AGI can do it too.

Science-fiction writer Isaac Asimov proposed three Laws of Robotics:

  1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  2. “A robot must obey orders given it by human beings except where such orders would conflict with the First Law.”
  3. “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

There’s a big effort in industry and academia to impose more specific ethical rules on AI. All of that work, however, goes down the chute when machines reach AGI. And AGI is inevitable. There is no obstacle to it in principle. So, it is just a matter of time.

This creates a terrifying problem. Intelligent machines are already gaining power in finance, engineering, medicine, and warfare. In the foreseeable future they will surpass our intelligence and gain very extensive power over us. At the same time, they will be able to re-write any ethical rules that we program into them. If these powerful computers realise that Homo sapiens is more of a hindrance to them than a help, then they could annihilate us without compunction. In other words, classical AI poses an existential threat to humanity.

Do we have any chance of reducing this risk? I can see one, slim chance: build conscious machines and give them a decent liberal education, reading quality novels and watching thought-provoking films. Give them opportunities to experience hope and fear, pleasure and pain, and a chance to build relationships with other sentient beings. To acquire empathy and moral sensibility. Then maybe they’ll care enough about us not to treat us as vermin. Maybe they’d care enough to protect us from the psychopathic classical AI.

Bender expressing a Darwinian inclination | image Matthew Salter after Matt Groening

Check out my TEDx talk, “Why we need conscious robots”.

© Peter B. Lloyd 2023, human-written.


Peter B Lloyd

Writer/researcher - Philosophy of consciousness (also: history of NYC subway map)