The Unsolvability of the Hard Problem of Consciousness

Sam Vervaeck
Published in Train of Thought · Feb 21, 2018

I think, therefore I am. — Descartes

This short essay argues that the question “Are you conscious?”, as in “Are you experiencing something, or are you just an empty body?”, cannot be answered scientifically. It does so in an informal but logically consistent manner. Given that we truly cannot devise such a theory, the article explores the moral implications in light of new artificial beings, such as humanoid robots, that show conscious behaviour but whose consciousness we are unable to verify. The article concludes with a plea for common sense, for empathy towards all living creatures, and for humility when dealing with these subjects.

The “hard problem of consciousness” was popularised by philosopher David Chalmers, and many people, including neuroscientists and physicists, have become increasingly interested in the subject. This has spawned many theories, such as the ambitious “integrated information theory” by Giulio Tononi, and the research area is expected to keep growing as artificial intelligence (AI) becomes more advanced and more prevalent in society.

Something About Zombies

Before we can even address the problem of detecting consciousness, you first have to appreciate that the claim that other people are in fact conscious is more difficult to prove than you would like. Chances are you have never even considered the possibility that a living being might not be conscious! Why should we? Isn’t it completely obvious that other people have feelings, too? I don’t doubt that that is the case, but in order to make you appreciate the problem from a scientific point of view, let’s take a look at a famous philosophical thought experiment that warped my world.

Philosophical zombies, sometimes abbreviated to p-zombies, are a kind of hypothetical creature that looks like a human, walks like a human, talks like a human, and more generally acts in exactly the same way a human being acts. There is, however, one big difference: they are not conscious! Just like the desk you are sitting in front of, the chair you are sitting on, or the coffee machine in the kitchen, these beings have no emotions. To put it more rigorously: they lack experience, in the sense that any observation they make is not registered as such. Instead of a human being, we can think of a very complex machine, a computer perhaps, with all kinds of clever feats of engineering that make it look like a human, but without consciousness. Through various sensors embedded in its body parts, it takes input from its surroundings and produces a single output in the form of muscle movement. Its “brain” processes information coming in from its skin, its eyes and its ears, and mechanically transforms it into movement by sending electricity to specific muscles. If, for example, it sees your face, it recognises you in much the same way that Facebook’s algorithm does when you upload a picture of yourself. It then determines, through a complex but finite set of rules, that it should say “Hi, Susan, how’s it going?”. Imagining that such a creature could exist today, how would you spot the difference? How would you know that you are talking to a zombie and not a human being?
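To make the idea of a “complex but finite set of rules” concrete, here is a deliberately crude sketch in Python of such a stimulus-response machine. Everything in it (the face recogniser, the rule table) is a hypothetical stand-in of my own making, not a description of any real system; the only point is that input can be mapped to output without anything being experienced along the way.

```python
# A deliberately crude sketch of a p-zombie as a stimulus-response machine.
# Nothing here "experiences" anything: input is mapped to output by fixed rules.

def recognise_face(image_pixels):
    """Hypothetical stand-in for a face recogniser (e.g. a trained classifier)."""
    # In a real system this would be a learned model; here it is a toy placeholder.
    return "Susan" if sum(image_pixels) % 2 == 0 else "unknown"

# A finite table of rules: perceived situation -> speech or muscle output.
RULES = {
    ("sees_face", "Susan"): "say: Hi, Susan, how's it going?",
    ("sees_face", "unknown"): "say: Hello, have we met?",
    ("hears_name",): "action: turn head toward sound",
}

def react(stimulus, detail=None):
    """Pick an output purely by table lookup; no inner experience required."""
    key = (stimulus, detail) if detail is not None else (stimulus,)
    return RULES.get(key, "action: do nothing")

print(react("sees_face", recognise_face([12, 4, 8])))  # -> "say: Hi, Susan, ..."
```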

There are many counterarguments to the p-zombie thought experiment, and I believe most of them can be simplified as stating that it is not possible to create such a creature, because

a) no amount of science and engineering would make it possible to build such a machine, or

b) because of its design, it would inevitably become conscious.

The actual arguments are much more refined and less prone to inconsistencies, but for the sake of this article I will not go into any more detail. Probably the most detailed and most famous treatment of the p-zombie thought experiment was given by philosopher David Chalmers in his book The Conscious Mind, which I am currently reading. A few counter-arguments can be found on Wikipedia.

Now then, why do I go to the trouble of explaining all of this? Because the possibility of the p-zombie implies that you could be the only conscious being on the entire planet, perhaps even in the entire universe! I’ll let that sink in for a moment. The idea is not new, by the way. A solipsist can go as far as to conjecture that only one’s own mind exists and that nothing can be said about the world, since “the world” is nothing more than a subjective experience. On the other hand, take care that this does not make you think it is proven that you are living in a p-zombie world; it is merely not inconceivable. In what follows, I will try to convince you that even though you cannot know the answer to this problem, you should do everything you can to keep believing that other beings are conscious.

Is Your Blue My Blue?

When you look up at the sky, the story goes that all kinds of synapses are exchanging information in the form of electrical signals at an incredible speed, giving rise to your experience of the blue sky. If we understand enough of the dynamics of these signals, we understand consciousness. Another story goes that all our thoughts can be categorised as states of a Turing machine (a kind of mathematical model) and that every mental experience you might be having is nothing more than a specific process of computation. What both of these stories imply is that certain physical processes, and thus computation, if performed in the right way, cause consciousness. As a result, if you and I happen to make exactly the same “proper” computation, we experience exactly the same thing.

And then a problem arises. How are we going to compare our experiences to see if the previous statement really is true? We can use language, of course. Yet there are plenty of thought experiments that show that language won’t do. Take Rosie, for instance. Rosie has a rare genetic brain disorder that makes her experience the colour red as blue, and blue as red. When asked to point to the red dot, she correctly points to the red dot, which she experiences as blue. When asked to point to the blue dot, she correctly points to the blue dot, which she experiences as red. How could we ever correctly communicate our true experiences to each other without becoming “trapped” in the abstractions that are inherent to words and sentences? One way you might think this to be possible is to rearrange my neurons in such a way that they correspond exactly with Rosie’s, but then you make the hidden assumption that neurons are all that is necessary to generate consciousness! Why should I, who am well aware of my own being, continue to be me while my identity is being altered neuron by neuron? Who is to say that I did not die right there, and that some p-zombie has taken my place while saying “Wow, so that’s what Rosie experiences as blue! Amazing!”?
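Rosie’s case can be restated as a tiny sketch. The code below is entirely hypothetical (the “experience maps” are invented labels), but it illustrates the core of the argument: two agents whose inner colour experiences are swapped still give exactly the same outward answers, so no amount of asking them questions can reveal the difference.

```python
# Two agents whose *internal* colour experiences are swapped, yet whose
# *external* behaviour is identical. The internal labels are pure invention;
# the point is only that no behavioural test can tell the agents apart.

class Agent:
    def __init__(self, name, experience_map):
        self.name = name
        # Maps the physical colour label to the "felt" colour.
        self.experience_map = experience_map

    def point_to(self, requested_colour, dots):
        # The agent learned language by association, so it points to the dot
        # whose physical colour everyone else also calls `requested_colour`.
        return next(i for i, dot in enumerate(dots) if dot == requested_colour)

you   = Agent("You",   {"red": "red",  "blue": "blue"})
rosie = Agent("Rosie", {"red": "blue", "blue": "red"})  # inverted experience

dots = ["blue", "red"]
for agent in (you, rosie):
    idx = agent.point_to("red", dots)
    print(agent.name, "points to dot", idx,
          "and inwardly experiences", agent.experience_map[dots[idx]])
# Both point to the same dot; only the (unobservable) inner experience differs.
```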

The Dangers of Projective Identification

Projective identification is a term rooted in psychoanalysis. It refers to the observed phenomenon that in a therapeutic interaction between client and therapist, the client unconsciously transfers (i.e. “projects”) certain personal hidden fantasies onto the therapist. In this article, the meaning of the term is slightly altered, to refer to a phenomenon that is similar, but is not necessarily explained as being a “defence mechanism”. Here, projective identification is defined as attributing consciousness to something that has not logically been proven to be conscious.

It is very natural to assume something is conscious. A classic example that regularly forms a theme in science fiction stories is that of a person falling in love with a robot that resembles a human being in every possible way, but has a mechanical brain rather than a biological one. It is perhaps not so outrageous to say that this person was only able to fall in love with this robot because he was somehow convinced that “she” or “he” is able to love him back. In that way, our protagonist projected his own “experience of self” onto the robotic entity, without having any logical reason for doing so.

Take the robot Sophia, for instance. It is the first robot in history to be awarded citizenship. But why? Apart from political and economic reasons I won’t be going into, could it be that we somehow assume Sophia to have a consciousness because it looks human-like, and therefore accept it as a fellow human being? If so, what if in the future a highly intelligent robot exploits this fact to “fake consciousness”, so that it might enjoy additional rights? Will that make it even more difficult to tell the difference?

The Sophia Robot at a conference in India. Is it conscious?

The Flip Side

“Ok, then. If that’s the case, let’s ditch each and every intuition I have about humanoids! From now on, I won’t think of a single mechanical entity as being conscious!”, you might say. However, by thinking this, you would have trapped yourself in another logical fallacy: assuming that no robot can ever be conscious. In much the same way that it is merely possible for p-zombies to exist (but not proven that they really do), the simple observation that a robot does not act like a human does not imply that robots cannot be conscious. In fact, it might well be that robots already have a consciousness! Let’s phrase it another way: does an insect have consciousness, or a mouse? If you think that only humans have consciousness, don’t forget that many pet owners have reported their cats and dogs sniffing, walking, and making noises while sleeping, suggesting they can dream. Dreaming is something we usually attribute only to humans, so if it turns out we’re wrong about that, might we also be wrong when we say cats are not conscious? And does that mean we might also be wrong when we say robots cannot be conscious?

The fact that a robot does not act like a human does not mean that robots can never be conscious

The problem becomes more complicated when you know that the current computational power of supercomputers is well above the threshold that is required to simulate the brains of these organisms on a basic level. Projects like these are being undertaken as we speak. So could that mean that the computer feels hungry if it hasn’t been fed a carrot for days? Putting aside the difficulty of feeding a computer a carrot, what fundamental reason do we have to claim that consciousness has anything to do with brains and electricity at all?

IBM’s Blue Gene supercomputer. Thanks, Wikipedia.
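For a feel of what “simulating a brain on a basic level” amounts to, here is a minimal leaky integrate-and-fire neuron, the textbook-style building block that large-scale simulation projects wire together in huge numbers. The parameters below are illustrative defaults I picked for this sketch, not values taken from any particular project.

```python
# Minimal leaky integrate-and-fire neuron: the kind of simplified unit that
# large-scale brain simulation projects connect together by the millions.
# All parameter values are illustrative, not drawn from any real project.

def simulate_lif(input_current=1.5, dt=1.0, steps=100,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the membrane potential trace and the spike times (in ms)."""
    v = v_rest
    trace, spikes = [], []
    for step in range(steps):
        # Leaky integration: the potential decays toward rest while being
        # driven upward by the input current.
        v += dt / tau * (-(v - v_rest) + input_current)
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset               # reset the potential after spiking
        trace.append(v)
    return trace, spikes

_, spike_times = simulate_lif()
print(f"{len(spike_times)} spikes in 100 ms, first at t={spike_times[0]} ms")
```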

The result of these two very different arguments, one asserting that nothing else is conscious and the other that possibly everything is, might make you feel quite desperate. At least it makes me feel that way. This leads me to the central claim of the essay: no scientific theory will ever be able to assess with full certainty whether another being is conscious or not, and, unless they are made explicit, such a theory will always rest on hidden assumptions. Any theory claiming that this is possible should be taken with a grain of salt. I do not write this in order to frustrate anyone, but because I think we should be very careful in how we interpret science and our beliefs, and in how we interact with our environment. I believe the only way a bright and prosperous future can be achieved for all conscious beings, rich or poor, ill or healthy, is if we have a firm understanding of the difficulties surrounding the subject of consciousness, and are not so blind as to make statements without being aware that we could be wrong.

A Future With Robots

It is 2018, and a future with more and more robots (some of which will be humanoid) is inevitable. How will you treat these “machines”? I would personally give this advice: when in doubt, treat them as you would treat yourself and any other human being. If you cannot be completely sure whether something can feel pain or is able to suffer, the best option is to make sure that it does not have to feel pain or suffer. This does not provide you with any direct benefit, but if everyone agreed to the same rule, the world would be a much better place. Unfortunately, we can’t even seem to make that rule count for fellow human beings.

Perhaps I am wrong. It could equally well be possible that some kind of spiritual realm, outside of physical existence, truly exists with its own complex dynamics, and that one day we will fully decipher how this realm interacts with the physical one. It cannot be disproven, but I am not convinced of it, either. To my understanding, the world, as governed by the laws of physics, can give rise to consciousness and can influence it, but I believe that this consciousness in itself exists in some other realm that is not to be found in the physical. I am also very afraid that current scientific thinking and technological advancements are at risk of rendering that last statement “invalid” or “obsolete”.

In short, I believe that current scientific thinking and technological advancements are at risk of denying that something like the soul or subjective experience exists at all

The hard problem of consciousness, in my opinion, touches the very essence of what it means to be alive, which is why I am so interested in it. It is something very personal, almost intimate, that makes you feel isolated but at the same time a true wonder in the cosmos. This is just a conviction of mine, but I believe a lot of the misery in the world can be attributed to people ignoring their own uniqueness and, as a consequence, the uniqueness and sensitivity of other people. We are all living beings who, I believe, know very well what it is like to feel pain, sadness and remorse. In a world that grows more complex every day, I think it is important not to forget this, and to keep empathising with other people, even when this becomes difficult.

Sam Vervaeck is a freelance writer living in Belgium, trying to find his way in life while exploring various philosophical questions. He loves programming, playing piano, and martial arts, and is in the process of writing a book about artificial intelligence and the future of society, which will be available on his website.
