Do androids dream of being human?

azeem
4 min read · Feb 8, 2015


Credit: Universal

With some eagerness I went to watch “Ex Machina”, a film about people and our likely uneasy relationship with artificial intelligence. This topic is one of the biggest subjects du jour — and something that any human with spare cycles should be getting to grips with. The arrival of artificial intelligence that kinda, sorta works is going to have a big impact on our species. The film has been well-reviewed (by the likes of my alma mater).

My take: they’ve got it completely wrong.

There is no question that AI is an important issue. Philosophers, entrepreneurs, inventors, scientists and universities are all grappling with it. The position of these esteemed thinkers is pretty similar — AI could pose an existential threat to humanity.

We will have to grapple with these issues. And it is those issues that I hoped Ex Machina would stare down head-on. But apart from some beautiful scenery and sexy robots, the film doesn’t do any of that. Rather it tackles a much smaller question: will our AI be a little Pinocchio, wanting to be human? And will that desire push it to vulnerability and malignancy, like Kubrick’s HAL from 2001: A Space Odyssey?

I’m sorry, Dave. I’m afraid I can’t do that

But it’s 47 years since 2001 and our understanding of how AI may develop has moved on.

What Ex Machina lacks is a sense of the meta-system of technologies, and their interplay, that will come together as we head towards a strong, generalized AI. Instead, Ex Machina linearly extrapolates from today’s technologies to tomorrow’s. Ava, the AI robot, is recognizable as Siri with a great body and an old skool desire to actually be a human. In fact, Ava appears to confront the same questions that DARYL, the Data Analysing Robot Youth Lifeform of the schlocky 1985 film, asked. Like Ava, DARYL wanted humanity.

Like Pinocchio and Ava, DARYL wanted to be a real boy

But it feels like a huge assumption that any generalized, strong AI would want to be like us, be human. This anthropocentric view of superiority is surely one Copernicus away from being discarded.

Indeed, it seems very unlikely that any generalized AI that starts as naive as Ava would remain so for long. These AIs are learning systems, after all. And they are deep learning systems able to learn and process information at a rate that far exceeds any human’s. For a live example, look at the work of computer scientist Demis Hassabis of DeepMind. (Google acquired his pre-product, pre-revenue company for around £400m in 2014.)

In a demo last year, Hassabis showed how, in under two hours, his basic learning system was able to find the optimal strategy for the game Breakout. Watch the video. It’s mind-bending. In less than two hours of constant practice, the algorithm went from naive play to a mastery beyond human comprehension. The same algorithm mastered — indeed, leap-frogged human mastery of — a range of quite distinct games.
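To see why that trial-and-error trajectory from naive to optimal is so striking, it helps to look at the bare mechanics. The sketch below is not DeepMind’s actual system (their DQN learned from raw screen pixels with a deep neural network); it is a minimal tabular Q-learning loop on a hypothetical five-state corridor, illustrating only the core idea: a generic update rule, fed nothing but rewards, converges on the optimal strategy by itself.

```python
import random

# Toy illustration of reinforcement learning, NOT DeepMind's DQN:
# tabular Q-learning on a 5-state corridor where the agent must
# learn to walk right to reach a goal that pays a reward of 1.

N_STATES = 5          # states 0..4; state 4 is the goal and ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current knowledge, occasionally explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy: the best action in every non-terminal state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # every state maps to +1: always walk right
```

The agent starts knowing nothing — its value table is all zeros — and after a few hundred episodes of blind trial and error the greedy policy is optimal in every state. Scale the table up to a deep network over pixels and the same principle is what let Hassabis’s system master Breakout.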

Now, this is a demo from 2014, and it is a state-of-the-art generalised learning system that can adapt its strategies extremely rapidly to completely out-perform a human. In fact it transcends what it is to be a human playing a game like Breakout.

The challenge in trying to represent strong AI is thus not that it might be malevolent (it might). It is this: is it really going to use our own tools, techniques and strategies to beat us? Isn’t it going to transcend us, the way Hassabis’s Breakout algorithm just takes that game to a level a human can’t comprehend?

A second problem is that the AI in Ex Machina contents itself with living within our human frame of reference. Why does the AI carry such a human identity (a name, a personality, a desire, like Pinocchio’s, to be a real person)? Why would an AI have a personal identity that is remotely congruent to a human personal identity? Why can we recognise even glimmers of its intelligence?

Rather than Ex Machina, this film should be called Intra Homo Sapiens: it is locked to a human frame of reference in which the AI aspires, in some sense, towards a human sense of humanity. Why an AI would seek to frame itself in human terms is a truly fascinating question, and more interesting than anything asked in the film.

Neither of the protagonists, the clearly brilliant Caleb and the other bloke, seems to ask or care about that question. But that really is the question.

How did they manage to build a generalised, learning AI that remains constrained in terms understandable to humans? That seems an impossibility. Can you really get your generalised AI, one which can interact ‘as a human’, without getting one that will learn, adapt and change faster than we can imagine? And ultimately reach heights we can’t conceive?

That seems unlikely. (For a great discussion on this please read Tim Urban on the subject.)

Isn’t the real question not whether the AI is Pinocchio or DARYL or HAL, but whether we are flatlanders — living with our own simplistic representations of self, identity and experience — about to create higher-dimensional entities with an entirely different frame of reference?

Save two hours and avoid this film. Pick up Nick Bostrom’s Superintelligence instead.


azeem

Entrepreneur, inventor and creator — curator of The Exponential View