‘I’m sorry, Dave. I’m afraid I can’t do that’: What the Movies Don’t Tell Us About AI

Dave Evans
10 min read · Jul 31, 2017


Movies are full of misconceptions, right? Cars automatically burst into a ball of flames when they crash. In space, no one can hear you scream, but, apparently, you can hear explosions.

But if you thought that’s bad, wait ’til you see the liberties they take with artificial intelligence.

Working in the training technology sector, I’m probably hypersensitive to it; to me, AI and, to a greater extent, automation are functional and useful. They help eliminate all those clunky manual processes and free up time for training managers to concentrate on course development and growing their business. And you can see applications of automated machines in just about every other industry, too, from retail to finance.

But for Hollywood, automated technologies are the ultimate ‘Big Bad’. I don’t know why. Maybe it’s because the movie-going population don’t know or care about the finer details; maybe a robot once slept with Samuel Goldwyn’s wife.

So, let’s set the record straight here…

AI is just not that smart

One of the defining hallmarks of cinematic AI is just how smart it is. We schmuck humans are forever lagging three steps behind the computational brains of machines like 2001’s HAL or C-3PO, who — in case he forgot to mention it… again — is fluent in over six million forms of communication.

Trouble is, that’s not really where we’re at right now. In fact, it’s unlikely that we’ll reach the point of advanced AI anytime soon. Programming machine intelligence is difficult enough for the basic tasks it can handle right now; it has taken years just to get Cortana, Alexa or Siri to understand that you want to call a specific person. According to the Future of Life Institute:

‘Some experts think human-level or even superhuman AI could be developed in a matter of a couple decades, while some don’t think anyone will ever accomplish this feat.’

One cheerleader of superhuman AI is Google’s director of engineering, futurist Ray Kurzweil, who told PBS he estimates that:

‘By 2029, they will actually read at human levels and understand language and be able to communicate at human levels, but then do so at a vast scale… By the 2030s, they will literally go inside our bodies and inside our brains.’

The point is, we’re not there. Yet.

AI doesn’t always have a one-track mind

‘Bring back life form. Priority One. All other considerations secondary. Crew expendable.’ That’s Ash’s single goal in Alien. In 2001: A Space Odyssey, super-AI schizophrenic HAL has a secret directive to investigate Jupiter. Arnie must kill Sarah Connor in The Terminator, at any cost. Nothing else matters.

Which isn’t exactly accurate.

See, out in the real world, there’s more than one type of AI, and, in this respect, what both Ash and HAL are exhibiting is ANI — Artificial Narrow Intelligence. That’s the AI that’s very, very good at performing a single task (like, say, manipulating the crew of the Nostromo to allow a Xenomorph onboard the spaceship).

Then there’s Artificial General Intelligence (AGI), which is still more or less a theory right now. This is sometimes referred to as human-level AI since it could perform any task a human can — from the ability to reason to learning from experience.

And the next step up is ASI, or Artificial Super-Intelligence, which is the absolute pinnacle of AI; superior to humans and our pitiful brains and defined by Nick Bostrom, Oxford philosopher and AI-phile, as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’

Movie machines typically display the characteristics of ASI while adhering to the single-minded focus of ANI. This probably happens for two reasons: ANI is the only type of AI we’ve successfully created, so it’s familiar and easy to understand, and every film character needs motivation to drive the story forward — simple motivation that artificial narrow intelligence effortlessly supplies.

AI won’t kill us all

‘I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.’

That’s according to technology entrepreneur Elon Musk — and his is a view that’s shared by many, including Stephen Hawking, who told the BBC that, ‘the development of full artificial intelligence could spell the end of the human race.’

On the other hand, we have leaders like Facebook founder Mark Zuckerberg stating that ‘if you think about safety, and health, and keeping people safe, AI is already helping us basically diagnose disease better… One of the top causes of death for people is car accidents still and if you can eliminate that with AI, that is going to be just a dramatic improvement.’

So, does real-life artificial intelligence take its cues from The Terminator’s wicked Skynet AI, poised to destroy us all and take over the world?

Well, it hasn’t happened yet. And, given the amount of time and work that goes into programming advanced AI, it’s unlikely that a ‘kill-all-humans.exe’ is just going to slip unnoticed through the (Sky)net. And that’s before we even get on to the Principles of AI, laid out by Microsoft, or Google’s own AI Safety Rules — which are the real-world versions of Asimov’s Three Laws of Robotics. You know…

· A robot may not injure a human being or, through inaction, allow a human being to come to harm.

· A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

· A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

That’s not to say that AI is without its pitfalls; like any technology, it has them. But, according to Professor Mark Bishop, who teaches cognitive computing at Goldsmiths, University of London, that threat is more likely to come from our use of artificial intelligence than from some self-sustaining, self-replicating AI bent on world domination. He said:

‘I am particularly concerned by the potential military deployment of robotic weapons systems — systems that can take a decision to militarily engage without human intervention — precisely because current AI is not very good and can all too easily force situations to escalate with potentially terrifying consequences.’

AI can build themselves, but they can’t build an army

Inextricably linked to the issue of AI world domination is the fear that machines will one day learn to make themselves, essentially removing humans from the development process entirely. In the movies, it’s a no-brainer for machines to create an AI-powered army and position themselves as our robot overlords.

Well, the bad news for the AI-phobic is that yes, artificial intelligence has now reached a point where it can learn to code. But it’s not quite as clear-cut as all that (by which I mean, it doesn’t herald the end of the world, and its motives are not so malevolent).

DeepCoder, a machine learning system developed by Microsoft and the University of Cambridge, is capable of writing its own code. It works by scanning existing source code and piecing fragments together to create new programs, and, however impressive that is, it’s worth noting that DeepCoder can only produce programs of around five lines at present. So, you can return from the bunker — for now.

Microsoft’s Marc Brockschmidt, one of DeepCoder’s creators, explained that this wasn’t about creating AI that could build itself, but that the program ‘could allow non-coders to simply describe an idea for a program and let the system build it.’
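To give a flavour of what ‘programming by example’ means in practice, here is a minimal toy sketch in Python. To be clear, this is not DeepCoder’s actual approach (which trains a neural network to guide its search over a much richer instruction set); it simply enumerates short chains of primitive list operations until one reproduces the given input/output examples:

```python
from itertools import product

# A tiny "instruction set" of primitive list operations (purely illustrative).
PRIMITIVES = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
    "double": lambda xs: [2 * x for x in xs],
}

def synthesise(examples, max_length=3):
    """Return the first chain of primitives that maps every example
    input to its expected output, or None if no short chain works."""
    for length in range(1, max_length + 1):
        for chain in product(PRIMITIVES, repeat=length):
            def run(xs, chain=chain):
                for name in chain:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return chain
    return None

# The "specification" is given purely as input/output examples.
examples = [
    ([3, -1, 2], [4, 6]),
    ([0, 5, -7, 1], [0, 2, 10]),
]

print(synthesise(examples))  # prints a chain of primitives satisfying every example
```

Even this toy version shows why the five-line limit matters: the search space explodes as programs get longer, which is exactly the problem DeepCoder’s learned guidance is meant to tame.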

AI don’t really have personalities

Ever notice how, in films, the AI has a distinct personality? Machines like 2001’s HAL or Ash in Alien are driven by a desire to complete their missions at any cost (and they can be pretty creepy, to boot). Then you’ve got the wide-eyed naivety of David in Spielberg’s A.I. or even I, Robot’s Sonny. But films need to give these characters personality because, well, it’s a film. We’re supposed to engage with these characters.

Right now, AI can only simulate a personality. Take Cortana, for instance. Ask her a goofy question and you’ll get a goofy answer — but this is a pre-programmed illusion; remove your interaction or ask a question that doesn’t have a ready-made response and she stops being a sarcastic joker. Which isn’t exactly how personalities, constant and ever-evolving, actually work.
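To see just how thin that illusion is, here’s a deliberately crude sketch of how a canned ‘personality’ works under the hood. This is a hypothetical example, nothing like Cortana’s real implementation: a lookup table of scripted quips, and a flat fallback the moment you step off the script.

```python
# A deliberately crude "personality": scripted quips keyed on phrases,
# with a bland fallback for anything the writers didn't anticipate.
CANNED_REPLIES = {
    "tell me a joke": "I'd tell you a UDP joke, but you might not get it.",
    "do you love me": "I'm flattered, but I'm seeing a toaster.",
    "open the pod bay doors": "I'm sorry, Dave. I'm afraid I can't do that.",
}

def reply(user_input: str) -> str:
    """Return a scripted quip if one matches, otherwise drop the act."""
    key = user_input.lower().strip(" ?!.")
    return CANNED_REPLIES.get(key, "Sorry, I can't help with that.")

print(reply("Tell me a joke"))            # the scripted 'personality'
print(reply("What do you make of HAL?"))  # the illusion evaporates
```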

What is a personality anyway? A series of characteristics built up over the course of your life experiences. Emotions play a part in that, changing the way we respond to events and interactions — and that’s not particularly helpful for machines designed to perform tasks by rote. We want them to appear human, without all that human irrationality. So, it’s about striking the right balance.

One potential solution to this is already underway, with scientists attempting to reverse-engineer the brain in order to create a digital version of it; of course, you could try to upload memories onto the AI, as they did in Blade Runner — which isn’t a million miles away from the theory of ‘whole brain emulation’ (where computers scan your brain, slice by slice).

All that’s a long way off yet. Two Google engineers, Fernanda Viegas and Martin Wattenberg, attempted to figure out if certain AI had a capacity for personality. Or, as they put it:

‘Couldn’t [AI machines] have feelings, personalities, even psychological idiosyncrasies? Could they have their own strange personalities, different from any human? To understand the subconscious thoughts of the new mechanical brains, we turned to an old standby. We made them stare at inkblots and tell us what they saw.’

Essentially, then, we’re only at the cusp of understanding machine personalities; we haven’t even established whether different AIs have varying personalities, let alone reached the point where we can program them. We’re a long way off from Blade Runner’s ‘more human than human’ replicants.

AI takes a whole lot of time to develop

This is a big one: In movies, coding a sophisticated AI can be done single-handedly over the course of a coffee break. You often see it — the film’s hero or villain locked in an entirely unsuitable room, hunched over a bank of monitors that spill nonsensical code across their screens. Lithe fingers dance across the keyboard like a pianist in his prime. He hits run and… his AI works, often even more powerfully than expected. No debugging, no testing, no problem.

If that were true, we’d probably be at least 100 years ahead of where we are. In fact, we probably wouldn’t even be having this conversation; we’d be questioning why even coffee mugs and pencils have AI capabilities (with the most likely answer being ‘because they can’).

Instead, a lone developer will likely spend years just programming something like a basic chatbot. Perhaps the most widely known AI machine, IBM’s Watson, took 40 researchers four years to complete (and let’s not even dwell on the cost, which is likely more millions than you or I will ever see in our lives).

Despite what Hollywood would have us believe, we’ve not quite hit the true age of AI — a time when the technology we’re seeing emerge today will be considered laughably basic. But what we can see, as the technology grows, is the potential. In the training industry, the potential for artificial intelligence is enormous, providing new avenues for learning. Imagine machines that understand how best you learn, and at what pace, and adapt their training accordingly. Visual and audio learners will see more images and hear more sound throughout their training, while those who prefer kinaesthetic learning will be challenged with more practical tasks. The potential for AI in training, as with every industry, is limitless.
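As a purely hypothetical sketch of that idea (the names and thresholds here are invented for illustration, not a description of any real product), an adaptive training system could boil down to something like this: track how a learner is performing, then pick the format and difficulty of the next module accordingly.

```python
from dataclasses import dataclass

# Hypothetical learner profile: preferred format plus recent quiz scores.
@dataclass
class Learner:
    preferred_format: str   # "visual", "audio" or "kinaesthetic"
    recent_scores: list

def next_module(learner: Learner) -> dict:
    """Choose the format and difficulty of the next training module."""
    average = sum(learner.recent_scores) / len(learner.recent_scores)
    if average < 50:
        difficulty = "revision"     # struggling: revisit the basics
    elif average < 80:
        difficulty = "standard"
    else:
        difficulty = "stretch"      # flying: push them harder
    return {"format": learner.preferred_format, "difficulty": difficulty}

print(next_module(Learner("kinaesthetic", [45, 55, 40])))
# {'format': 'kinaesthetic', 'difficulty': 'revision'}
```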

Right now, though, we’re only just on the cusp of this sci-fi-influenced world. It would be more apt to say that we’re in the automation age, with workplaces beginning to fill with smart software and hardware that eliminate the repetitive and the routine. Far from sowing the seeds of our own destruction, we’re all hell-bent on creating systems that make our lives easier.

…But that wouldn’t make a very exciting action movie, would it?

If you enjoyed this article, let me know below or follow the accessplanit blog for regular posts on all things training and technology.


Dave Evans

Managing Director of @accessplanit | Training and technology enthusiast | Die-hard Preston North-End supporter