Make Man In Our Image: Through the Black Mirror

Published in Neuromation · Jan 15, 2018

A Recurring Theme

Warning: major spoilers for the fourth series of Black Mirror ahead. If you haven’t watched it, please stop reading this, go watch the series, then return. I’ll be waiting; I’m an imaginary being who has nothing better to do anyway…

…which is kind of the point.

I watched the fourth Black Mirror series over the holidays. As I went through one episode after another, it struck me that they all seem to be about the exact same thing. This is an overstatement, of course, but three out of six ain’t bad:

  • in “USS Callister”, the antagonist creates virtual copies of living people and makes them the actors in his simulated universe, torturing them into submission if necessary;
  • in “Hang the DJ”, virtual copies of living people live through thousands of simulations to gather data for a dating app’s matchmaking service;
  • in “Black Museum”, the central showpiece of the museum is a virtual clone of an ex-convict who is put through electrocution over and over, with more clones in constant pain created every time.

Let’s add the “San Junipero” episode and especially the “White Christmas” special from earlier Black Mirror series to this list for good measure.

See the recurring theme? It appears that the Black Mirror creators have put their minds to one of the central problems of modern ethical philosophy: what do we do when we are able to create consciousnesses, probably in the form of virtual copies inside some sort of simulation? Will these virtual beings be people, ethically speaking? Can we do as we please with them?

Judging by the mood of the episodes, Black Mirror is firmly in the camp of those who believe that creating a virtual mind brings moral responsibility with it, and that “virtual people” do give rise to ethical imperatives. It does seem to be the obvious choice… doesn’t it?

The Hard Problem: Virtual What?

As soon as we try to consider the issue in slightly more detail, we run into insurmountable problems. The first problem is that with our current state of knowledge, it is extremely hard to define what consciousness is.

The problems of consciousness and first-person experience are still firmly in the realm of philosophy rather than science. Here I mean natural philosophy, a scientific way of reasoning about things that cannot yet be a subject of the scientific method. The ancient Greeks did natural philosophy, pondering the origins of all things and even arriving at the idea of elementary particles. However, as amazing as that insight was, the Greeks could not have studied elementary particles as modern physicists do, even if they had had the scientific method as we know it: they lacked the tools and even the proper set of notions to reason about these things. In the problem of consciousness and first-person experience, we are still very much at the level of the ancient Greeks: nobody knows what it is, and nobody has any idea how to get any closer to this knowledge.

Take the works of David Chalmers, a prominent philosopher in the field. He distinguishes between “easy” problems of consciousness, which could be studied scientifically even right now, and “the hard problem” (see, e.g., his seminal paper “Facing Up to the Problem of Consciousness”). The hard problem is deceptively easy to formulate: what the hell is first-person experience? What is it that “I” am? How does this experience of “myself” arise from the firings of billions of neurons?

At first glance, this looks like a well-defined problem: first-person experience is, frankly, the only thing we can be sure of. The Cartesian doubt argument, exactly as presented by Descartes, is surprisingly relevant to sentient people simulated inside a virtual environment: the guy running the simulation is basically Descartes’ evil demon. If you entertain the possibility that you may be stuck in a simulation, the only thing you cannot doubt is your subjective first-person experience.

On the other hand, first-person experience is also completely hidden from everyone except yourself. Chalmers introduces the notion of a philosophical zombie: an (imaginary) being that looks and behaves exactly like a human but does not have any first-person experience. Zombies are merely automata, “Chinese rooms”, so to speak, that produce responses matching those of a human being, and their presumed existence does not appear to lead to any logical contradiction. I wouldn’t know, but I guess that’s how true psychopaths view others: as mechanical objects of manipulation devoid of subjective suffering.

I will not go into the philosophical details. But what we have already seen should suffice to plant a seed of doubt about virtual copies: why are we so sure they have the same kind of first-person experience we do? If they are merely philosophical zombies and do not suffer subjectively, it appears perfectly ethical to run any kind of experiment on them. For that matter, why do you think I am not a zombie? Even if I were, I would write the exact same words. And a virtual copy of me would be even less similar to you: it would run on completely different hardware — so how do we know it’s not a zombie?

Oh, and one more question for you: were you born this morning? Why not? You don’t have a continuous thread of consciousness connecting you to yesterday (assuming you went to sleep). Sure, you have the memories, but a virtual clone would have the exact same memories. How can you be sure?

Easier Problems: Emulations, Ethics, and Economics

We cannot hope to solve the hard problem of consciousness right now. We cannot even be sure it’s a meaningful problem. However, the existence of virtual “people” also raises more immediate questions.

The Age of Em, a recent book by the economist and futurist Robin Hanson, attempts to answer one such question from the standpoint of economics. What is going to happen to the world economy if we discover a way to run emulated copies of people (exactly the setting of the “White Christmas” episode of Black Mirror)? What if we could copy Albert Einstein, Richard Feynman, and Geoffrey Hinton a million times over?

Hanson pictures a future that appears rather bleak for the emulated people, or “ems”, as he calls them. Since the cost of copying a virtual person is negligible compared to raising a human being in the flesh, competition between the ems will be fierce. They will become near-perfect economic agents — and as such will be forced to live at near-subsistence levels, with any possible surplus immediately captured by other, competing ems. But Hanson argues that the ems might not mind: their psychology will adapt to their environment, as human psychology has done for millennia.
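
To make the logic of that argument concrete, here is a deliberately crude toy model in Python. It is not from Hanson’s book; the demand curve and the subsistence cost are made-up illustrative numbers. The point is only that when copying is nearly free, new copies keep appearing as long as the going wage exceeds the cost of running one, so the wage gets competed down to that cost.

```python
# Toy model of em wage competition. The demand curve and the subsistence
# cost are arbitrary illustrative numbers, not estimates from the book.

def market_wage(num_ems: int) -> float:
    """Wage per em falls as more identical copies compete for the same work."""
    return 100.0 / (1 + num_ems)         # made-up downward-sloping demand

SUBSISTENCE = 0.05   # hardware + electricity needed to keep one em running

ems = 1
while market_wage(ems) > SUBSISTENCE:    # copying is nearly free, so new
    ems *= 2                             # copies appear while it still pays

print(f"{ems} ems, wage {market_wage(ems):.3f} per em")
# -> 2048 ems, wage 0.049 per em: driven down to subsistence, no matter
#    how productive each individual em is
```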

The real humans will be able to live a life of leisure off this booming em economy… for a while. After all, there is no reason not to speed the ems up as much as computational power allows, so their subjective time might run thousands of times faster than our human time (“White Christmas” again, I know), and their society might develop extremely quickly, with unpredictable consequences.
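
To get a feeling for what such a speedup would mean, here is a quick back-of-the-envelope calculation; the 1000x factor is purely an illustrative assumption.

```python
# Back-of-the-envelope arithmetic for em time; the speedup factor is an
# illustrative assumption, not a claim about any real hardware.

SPEEDUP = 1000   # subjective em seconds passing per objective second

# One of our calendar years, seen from inside the em society:
print(f"1 objective year = {SPEEDUP} subjective years for the ems")

# And the other way around: outsourcing a full work week to an em copy
# takes almost no time at all from the outside.
work_week_hours = 40
print(f"a {work_week_hours}-hour work week takes "
      f"{work_week_hours * 3600 / SPEEDUP:.0f} objective seconds")
# -> 144 objective seconds, i.e. under two and a half minutes
```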

Hanson also tackles the “harder” problems of consciousness from a different angle. Suppose you had a way to easily copy your own mind. This opens up surprising possibilities: what if, instead of going to work tomorrow, you made a copy of yourself, had it do your work, and then terminated the copy, freeing the day for yourself? If you were an em, you would actually be able to do it — but wouldn’t you be committing murder at the end of the day? This ties into what has long been known as the “teleportation problem”: if you are teleported atom by atom to a different place, Star Trek style, is it really you, or has the real “you” been killed in the process, with the teleported copy being a completely new person with the same memories?

By the way, you don’t need full-scale brain emulations to run into similar ethical problems. What if tomorrow a neural network passes the Turing test and, in the process of doing so, begs you not to switch it off, appearing genuinely terrified of dying? Is it OK to switch it off anyway?

Questions abound: interesting questions, open questions, questions that we are not even sure how to formulate properly. I wanted to share them with you because I believe they are genuinely interesting, but I want to end with a word of caution. We have been talking about “virtual people”, “emulated minds”, and “neural networks passing the Turing test”. But so far, all of this is just like Black Mirror: fiction, and not very plausible fiction at that. Despite the ever-growing avalanche of hype around artificial intelligence, there is no good reason to expect virtual minds or the singularity around the corner. But this is a topic for another day.

Sergey Nikolenko
Chief Research Officer, Neuromation
