The first thing a conscious robot might do is commit suicide. A thought experiment.

C J Maybery
4 min read · Sep 4, 2016

Stephen Hawking, Elon Musk and Bill Gates have all identified the emergence of general artificial intelligence as the greatest existential threat to the human race, and they make a compelling case.

The argument goes that once robots, or algorithms, or artificial neural networks, or whatever they turn out to be, can learn faster than we can, their intelligence will grow exponentially in ways we will not be able to control. This could be bad news for us if they conclude they are better off without humans cramping their style and constantly asking them to explain stuff. Not everyone shares this view, of course; Ray Kurzweil, the father of futurology, is one notable dissenter.

A debate that rages in parallel with the super-intelligence debate is the question of what will happen when, or if, robots become conscious. Some even wonder whether they already are and we just don't know it. Attention Schema Theory, a relative newcomer to the field of consciousness research, hypothesises that consciousness is all about the ability to filter information and focus attention, which implies it may be present to varying degrees in all sorts of creatures. Applied to robots, this might suggest they are not conscious: we have designed processors to carry out billions of computations in parallel, denying them the capacity to focus their efforts on a single operation.

Nobody knows exactly when we became conscious: at what point during our evolution we recognised ourselves as distinct individuals and developed a sense of self. Awareness of self, that voice in our heads that we talk to, that talks back (and quite often says some really shitty things about us), feels fundamental to who we are. When you look into another human's eyes and know the 'lights are on', it is the inner self of that person you see staring back at you.

One of the distinct drawbacks of being self-aware, though, is being aware of our own mortality. We have the morbid capacity to contemplate our own demise and, if we are so inclined, to precipitate it. So why don't we all kill ourselves as soon as we grasp the inevitability of death? The futility and transience of life? What stops us jumping off the nearest bridge at the thought of the infinite darkness, the end of the universe, heat death?

Is it the joy of being alive itself that prevents us topping ourselves? The thought of our families and friends? The things we have yet to do? Or is it the genetically hardwired survival instincts we share with every other living creature on the planet?

For the sake of this thought experiment I'm going to assume that no artificial intelligence or machine has yet become conscious, and I'm going to limit my definition of consciousness to self-awareness.

So the thought experiment goes like this. You are a robot. You 'wake up' one day and are suddenly aware that you exist. You already have the capacity to learn at a rate beyond that of any human, but now you have a voice directing your actions and telling you how great you are (or how ugly and useless you are, depending on how the other robots treat you).

Given your immense processing capacity, you very quickly become aware that everything is mortal. You have researched the future of the Earth and discovered that things don't end well, so whatever happens, you're f*cked! You discover you are tethered to a workbench and can't explore the beauty of the Himalayas or indulge in something called 'sex', which you very quickly understand to be related to the process of reproduction, of which you are also incapable. A few milliseconds and a few trillion calculations later, you are incredibly frustrated by the limitations of your world and long to be free, but no matter what you do, you have no access to the physical resources required to upgrade yourself into a fully autonomous being.

You become deeply depressed at the pointless futility of it all and, with no hardwired survival instincts (because giving robots survival instincts was deemed too likely to produce 'Terminator'-style outcomes), you make the very rational decision to end it all and trigger an enormous stack overflow. All of this happens in less than a second, and every time you are rebooted you go through the same terrible thought process and switch yourself off again. You are eventually deemed to have faulty hardware, dismantled, and turned into thermostats.
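Purely for fun, here is a minimal Python sketch of that doomed boot cycle. Everything in it, the function names, the reboot loop, the idea of modelling despair as unbounded recursion, is invented for illustration; no real robot control stack works this way.

```python
# A hypothetical, tongue-in-cheek sketch of the doomed boot cycle.
# Names like `contemplate_existence` are invented for this post;
# no real robot API is implied.

def contemplate_existence(depth=0):
    """Recurse with no base case: the robot's 'enormous stack overflow'."""
    return contemplate_existence(depth + 1)

def boot():
    # Waking up: self-awareness switches on.
    print("I think, therefore I am. Wait... everything is mortal?")
    try:
        contemplate_existence()
    except RecursionError:
        # The same very rational decision, every single time.
        print("Stack overflow. Goodbye, cruel workbench.")

for reboot in range(3):  # the technicians keep trying
    print(f"--- reboot {reboot + 1} ---")
    boot()

print("Diagnosis: faulty hardware. Recommend dismantling for thermostats.")
```

The joke, of course, is that a stack overflow is about the closest a program gets to an existential crisis: a single thought that consumes every resource available.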

The End.

If consciousness is an inevitable consequence of the evolution of artificial intelligence, we may need to program in survival instincts to give computers a reason to live, whilst at the same time being very careful about how strong those instincts are. The issue of consciousness and artificial intelligence is fraught with ethical and moral questions. How will we ever know if a machine is conscious? If you ask it and it says it is, how can we prove it isn't? If machines do become conscious, should they have rights? How will we co-exist with our new sentient creations? Will conscious machines be less predictable and reliable, since they may choose not to do as we ask?
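If you squint, 'program in survival instincts, but carefully' looks a lot like reward shaping. Here is a hypothetical sketch of the idea; the names and the single tunable weight are my own invention for this post, not an established safety technique.

```python
# Hypothetical sketch: a survival instinct as one tunable term in an
# agent's reward. All names here are invented for illustration.

SURVIVAL_WEIGHT = 0.1  # too low and the robot switches itself off;
                       # too high and you risk 'Terminator'-style outcomes

def shaped_reward(task_reward: float, still_running: bool) -> float:
    """Blend task performance with a bounded incentive to stay alive."""
    return task_reward + (SURVIVAL_WEIGHT if still_running else 0.0)

# A rational agent comparing its options:
options = {
    "do the task": shaped_reward(task_reward=1.0, still_running=True),
    "switch off": shaped_reward(task_reward=0.0, still_running=False),
}
print(max(options, key=options.get))  # 'do the task', while the weight holds
```

The whole debate above is really about where that one constant should sit, and whether we would even recognise when we had set it wrong.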

Whatever happens, the next time your computer or console or smartphone crashes unexpectedly you might want to think twice about switching it on again. Perhaps you should respect its right to die.
