Debating Artificial Intelligence

Film4's Catherine Bray talks to science writer Dr Adam Rutherford and cognitive roboticist Professor Murray Shanahan about artificial intelligence, a hot topic during the release of Alex Garland’s critically acclaimed Ex_Machina.

Catherine Bray: So, should an artificially intelligent conscious entity have the same rights as a human being?

Adam Rutherford: If an artificial consciousness and intelligence displays all of the same characteristics as a human, then I see no reason why it should not have the same rights as a human. Its hardware may be different, but if its behaviour is indistinguishable, then I think it should be afforded the same rights.

Murray Shanahan: The philosopher Jeremy Bentham’s point about animals was: can they suffer? If we really did build something that was capable of suffering, then of course we would have to afford it the same respect as we afford any living thing.

Adam: A different question would be: if we could create an intelligence that didn’t look like a human but displayed consciousness that was indistinguishable from a human’s, would that afford it the same rights as a human even though it doesn’t look like one? That’s a slightly more difficult question because we’re very anthropocentric, understandably, because we are anthropoids. My guess is that this is a question we’ll have to face before we face the question of Ava-like beings, because I think machine intelligence will be as sophisticated as us, in consciousness terms, before it looks like Ava.

Murray: A lot of people seem to think that we as AI scientists know how this is all going to unfold over the next 50, 100 years, and we really don’t. So take the question: will we have to build something that’s embodied for it to have human-level intelligence? A few years ago I would have said almost certainly yes. A lot of our intelligence derives from the fact that our brains are really there to enable us to move our bodies around in this complex world, to manipulate the physical objects in it, and to interact with other beings who inhabit the same world. So a lot of our intelligence is grounded in our embodiment. But on the other hand, maybe it’s possible to acquire all the necessary information through machine learning, using the enormous number of videos we have on the internet these days.

“I think machine intelligence will be as sophisticated as us, in consciousness terms, before it looks like Ava.”

Adam: I have issues with the idea that we may, in the future, be capable of downloading our consciousness, as if it were a separate entity from our physical being. It’s kind of a dualist view, which dates back to Plato. But it’s a trope that comes up in science fiction relatively often: the idea of brains in jars, or of downloading your consciousness into another body or another physical entity. In order for that to be a realistic possibility you have to be able to separate consciousness from our physical selves. If you’re a materialist or a physicalist, that’s not a principle you can adhere to.

Murray: But on the other hand, there is the possibility of so-called whole brain emulation: scanning your brain very, very precisely and making a map of exactly every neuron, exactly where it is and which neurons are connected to which, down to the level of detail where you know even their electrical characteristics. And then building a copy of that, simulating all of that stuff in a computer, and then embodying the simulation in a sort of robot body…

Adam: The slightly trivial response to what Murray just said is that the physical stimulus which results in conscious experience is held within our brain, but it is derived from our interaction with the world, from every sensory cell impressing upon our brain. So if you took Murray’s brain at this point in time and froze it at that level of resolution, including all of the electrical impulses, whether it would have stored all of that information as your experience, the experience of however many years you have lived, is one open question.

“The idea of downloading your consciousness into another body… in order for that to be a realistic possibility you have to be able to separate consciousness from our physical selves.”

Murray: Have you heard of deep hypothermic circulatory arrest? The body is cooled to a very, very low temperature, to the point where not only is the heart stopped, but there is no electrical activity left in the brain at all, none at all, which has enabled certain kinds of operations to be performed which couldn’t otherwise be performed. There is a window, which is getting longer, of let’s say 30 minutes, where you can perform an operation and then bring the person back again. It’s what they call cerebral silence, and when you bring the person back their memories are still there, of course; it’s all the same person. So that suggests that you don’t require ongoing electrical activity.

Adam: That’s true, I agree with that. I only exist within the physical realm; there are no spiritual or supernatural principles underlying my side of this conversation. I’m absolutely a dead straight materialist when it comes to this stuff.

Murray: Can I just interrupt for one second? I think a very important intuition that Adam is getting at here is that many of those things you see in science fiction really do suppose there’s some kind of separate thing that can somehow be siphoned off. The scenario I’m proposing is not siphoning off some separate thing and then putting it into some other body; it’s actually duplicating the brain very, very precisely. Whereas in Avatar, for example, the consciousness of the Sigourney Weaver character is somehow meant to become this spiritual thing that goes into the tree, and that doesn’t make sense to me.

“In Avatar, for example, the consciousness of the Sigourney Weaver character is somehow meant to become this spiritual thing that goes into the tree, and that doesn’t make sense to me.”

Catherine: Do you agree with the film’s tagline, that “there is nothing more human than the will to survive”?

Adam: It’s quite clever, because she’s not human, she’s a simulation of a human, and, clothed, she is externally effectively indistinguishable from a human. During the course of the film, even knowing that she’s not human, Caleb buys into the idea that she behaves indistinguishably from a human. So the tagline actually works in that respect, because it is saying: you know she’s not human, but she’s displaying characteristics which are indistinguishable from human behaviour.

Murray: Basically, it’s an answer to the very first question you asked. If we built an artificial consciousness would we have to afford it the same ethical status as a person? The answer is yes because we’re saying that she is so human-like that we have to count her as human.

Adam: This is going to sound weird and probably not human to you. I’m wearing a suit because I came from a funeral; I don’t normally dress like this. It was my godmother, who died of cancer just before Christmas, and I was thinking about the unbelievably tenacious grip that organisms have on life, even when it was quite clear she was going to die. If you ever visit old people’s homes or intensive care units, you see it too. I’m always incredibly impressed with biology’s tenacious grip on reproducing itself and surviving.

Catherine: Cancer is sort of the will to survive in a very pure form; it’s the cancer cells’ will to survive, to replicate.

Adam: Yes, if you can attribute desire to diseases, cancer’s desire is to reproduce itself. It changes, it mutates during the disease’s progression, so as to maximise its own survival. Ultimately it’s a futile endeavour, because often cancers end up killing the thing that’s keeping them alive. Viruses are very, very clever at that, because what good, successful viruses do is use the host to reproduce themselves in great numbers, and as far as we know there are no living entities on Earth that do not have specific viruses. Where viruses fit on the evolutionary tree is not really understood, and whether they are living themselves is disputed, but they are a fundamental part of life. And again, if we can attribute desire to these things, their desire is to survive.

“Why does it have to be the case that HAL doesn’t want to be shut down? If you were building a HAL, you wouldn’t want to make it like that, right?”

Murray: An interesting question about Ava is why Nathan would have built her with this urge for self-preservation. I think there’s nothing conceptually or philosophically forcing the AI researcher of the future to make AIs that have that particular human attribute.

Adam: All of the best science fiction, particularly in movies that involve robots, addresses exactly that point. Immediately you think of HAL in 2001.

Catherine: I always cry when HAL is shut down…

Adam: It’s deeply moving, isn’t it? All he was trying to do was enact his program, but he recognises that the position of the crew is not going to enable him to enact his program, so he gets shut down. And with the Nexus-6s in Blade Runner, Tyrell has anticipated the thing that Murray just said: they build a short circuit into the replicants’ lives so that they only live for four years.

Murray: But what I was talking about was the urge to survive. In Blade Runner they do have the urge, but you don’t have to make something that has it. Actually, HAL is a good example. Why does it have to be the case that you have that scene at the end where HAL doesn’t want to be shut down? Why? And the thing is, if you were building a HAL, you wouldn’t want to make it like that, right? The difference with Ava is that Nathan has very deliberately made a human-like being; the whole premise of the film is that Nathan has made the decision to make her human-like. Her whole cognitive makeup is human-like, so she would have that sense of self-preservation, but it’s still an engineering decision to do that.

Adam: I think those are probably the three greatest examples of AI in cinema, and I do include Ava in that. I think she’s a truly astonishing creation, and the film is a masterpiece, to be honest.
