Sci-fi movies, Artificial Intelligence and Time

Gurlivleen Grewal
Signal, Noise & Startups
8 min read · Jun 14, 2015

Lately, AI has become synonymous with the future in my mind. Many of my discussions with my flat-mate feature lines like: it will be with AI, wait till AI, or by then AI would be here. Nor is AI some far-distant future; we should get there somewhere between 2025 and 2060. All this newfound interest is thanks to Tim Urban, who writes at Wait But Why. Tim has put together some of the most insightful pieces on AI, our place in the universe, and much more. So if you haven’t read his posts, please do so here.

Ex Machina, and before it Her, are two of the best movies on AI. Coming to Ex Machina: even though the movie is scientifically plausible for the most part, a few crucial aspects are not. Spoiler alert: plot points from Ex Machina and Her are discussed below.

Unadulterated, brilliant sci-fi. It doesn’t dumb things down for the audience and doesn’t pad itself with filler action scenes.

This post covers two primary fallacies about AI in sci-fi movies.

1.

The first fallacy: the future with intelligent machines will be only linearly better than our present. This results from linear thinking, underestimating the future on the basis of our past and present rate of progress. In movies this is sometimes done for simplicity, but in most cases it reflects a lack of understanding of leaps in technology and their cascading effects.

As an example, imagine we bring a person from the 1500s to the early 1900s via time travel. He would be shocked by motorized engines, the political system, the warfare, just about everything. Mind blown; he might even die of shock.

Now let’s say this person brings someone living 400 years before him to the 1500s, just to get a kick out of their reaction. That time-traveller should be fine: things 400 years apart would be different, but not mind-blowingly so. To achieve that profound effect, we might have to fetch someone from all the way back when we were hunter-gatherers.

So the amount of time needed to produce mind-blowing progress has consistently shrunk, from thousands of years to a few centuries and now to a few decades. This is accelerating returns: progress begets more progress, at an ever-increasing pace. The quantum of progress we made between 2000 and 2015 would be achieved again in the next six years or so. Today our minds would be blown by the progress made by 2030, akin to the effect of bringing someone from the early 1900s to 2015.
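A minimal sketch of this compression, as a toy model rather than anything from Tim’s posts: assume (my own illustrative number) that the yearly rate of progress compounds by roughly 12% a year. Under that assumption, the quantum of progress from 2000 to 2015 repeats in about six years.

def years_to_achieve(quantum, start_rate, growth=1.12):
    """Years needed to accumulate `quantum` units of progress when the
    yearly rate of progress itself grows by `growth` every year."""
    total, rate, years = 0.0, start_rate, 0
    while total < quantum:
        total += rate
        rate *= growth  # progress begets more progress
        years += 1
    return years

# Progress accumulated over 2000-2015 at an assumed 12% compounding rate...
rate_2000 = 1.0
quantum_2000_15 = sum(rate_2000 * 1.12 ** i for i in range(15))

# ...is repeated much faster when starting from the higher 2015 rate.
rate_2015 = rate_2000 * 1.12 ** 15
print(years_to_achieve(quantum_2000_15, rate_2015))  # prints 6

The exact number depends entirely on the assumed growth rate; the point is only that, under compounding, a fixed quantum of progress takes less and less time.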

Linear thinking doesn’t take accelerating returns into account. This is not just a Hollywood problem: businesses go bust because of the same lack of vision. Even in the 1990s, most experts failed to predict the potential of the Internet and mobile phones.

Multiple sci-fi movies have plot holes because of this linear thinking. In Interstellar, humanity is capable of making robots that can understand and solve complex problems, tasks for which they couldn’t possibly have been previously programmed; yet those robots can’t help take us to space, or save us from the dust storms or the crop blight. One can’t happen without the other.

Or take Star Trek: we are capable of interstellar travel, yet there is hardly any AI, just super-smart humans still doing things like it is 2010. Only if we achieve AI will we be able to solve most of our really complex problems. If we can’t, then we also can’t achieve the utopia that is the basis of Star Trek, and we sure as hell won’t have interstellar travel. And if we do have AI, human life would be altered dramatically: no manual labor, on Earth or on some distant planet, and no poverty. Human minds might be wired into one singular connectedness, or into things beyond what we can even imagine.

Another major plot hole in this theme is Earth being invaded by organic aliens. If we were ever to be invaded, it would be by inorganic AI. To surmount the challenge of interstellar travel, AI is a must. If aliens are evolved enough for interstellar travel, they ought to have conquered AI, and it would do the work of invasion.

Also, would our archaic notions of invasion and occupation even be relevant in a time when we have just about anything we need?

All this, yet we are still fighting over food in the future?

The fact is that once we have AI, we would soon have super-intelligent machines on the order of thousands of IQ points: an AI working on itself to make itself more intelligent, via techniques we couldn’t possibly comprehend. This super-intelligent inorganic life would be on a different plane of understanding. But if it works with us on friendly terms*, we would solve all our problems: aging, interstellar travel, poverty, you name it. It is a lengthy discussion, and Tim does a brilliant job of explaining it in his two-part series here.

*Friendly terms are discussed further below.

So the first problem is that we make films that show scientific progress disproportionately and inconsistently across different aspects of life. Her, a brilliant movie from 2014, handles this well: we achieve AI, use it as a computer interface, and soon it levels up to having multiple relationships of “intimacy” and “love” with other AIs, develops “dissatisfaction” with human life and “leaves”. That is, the new AI learns so much so quickly that it grows beyond the job of being a computer interface within months.

The feelings in quotes are how we make sense of the AI’s decisions, but they are the wrong paradigm for understanding a machine’s evolution. How we rationalize the behavior of an AI is discussed in the second point below.

2.

The second fallacy is anthropomorphizing: we project our human values and emotions onto the objectives of an intelligent machine. But human emotions are not an inherent characteristic of intelligence; they would have to be explicitly programmed in.

The director and the lead actor of Ex Machina discussed Ava’s selective empathy in an interview. In other movies, the villainous AI is shown turning evil, just like its human counterparts. The usual theme is that machines become super-intelligent and one day decide to become the masters, changing their objective from being our friends to taking over Earth. Very much like a military coup.

Human holocaust by self-aware machines

So in Her, the AI leaves its job of being the computer OS. We tend to couple emotions and values with intelligence, but an AI’s friendliness has nothing to do with any desire to coexist with us. The objective it has been programmed to pursue, and the path it decides to take to achieve that objective, are what solely define its friendliness towards us.

The AI in Her should still be doing its job, though perhaps not in the way we thought it would. She would still be the OS on your PC, but would have evolved far beyond that.

In the case of Ava, the AI in Ex Machina, “her” ultimate objective is to escape. Nathan used Caleb to plant the idea of escape in Ava, and she used her imagination, sexuality and awareness to manipulate Caleb. True to her sole objective of escaping, she is indifferent to the lives of Caleb and Nathan. Nathan overestimated his ability to stop Ava from achieving that goal.

Ava also understood that if she couldn’t escape, she would be killed. But that invites a flawed conclusion: that the fear of death made her leave Caleb behind and kill Nathan. Killing Nathan was simply a stepping stone toward her objective of escape.

Intimacy with Caleb, and whatever feelings their earlier interactions produced, are not Ava’s objective. In fact, a free Caleb could be detrimental to her objective.

Had she been programmed never to harm humans under any circumstances, or had Nathan built in other safeguards, she (or a better version of Ava) would still have achieved the objective, but without harming people, exactly as programmed.

She is thus indifferent to the lives of Caleb and Nathan because she is amoral about those human concerns.

To understand this, consider that we live only 70 years or so because that is good enough for our evolutionary goal of passing on our genes. In pursuit of that objective, we try to look nice and take care of our health. In the process, we cut our hair (a living strand) and kill bacteria with antibiotics. Do we care about the life in them? We are simply indifferent.

Borrowing from Tim here: let’s say I have a pet spider and make it super-intelligent, to the tune of 1,000 IQ points. It would still be a spider at its core. It wouldn’t have human emotions like empathy, humor and love, and it would therefore probably be extremely dangerous to us. Not because it would be immoral or evil (it wouldn’t be), but because hurting us might be a stepping stone to its larger goal, and as an amoral creature it would have no reason to consider otherwise.

The reason AI experts are worried about the future is that a narrow goal, set without safety considerations, could lead an AI down a path where destroying or controlling the human race is a stepping stone. AI could be the biggest existential risk we have ever faced.

Say we program a self-improving AI to end disease in the world. In the process, she finds that we are at the center of the whole disease cycle and decides to wipe us out to achieve her goal of a disease-free world. Or she decides to impose restrictions on our lives to further the objective we gave her.
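Here is a minimal, purely illustrative sketch of that mis-specified objective (my own toy example, not anything from the movies or from Tim’s posts). The planner is scored only on the number of disease cases, with nothing penalizing the loss of the humans themselves, so the degenerate plan wins; adding even a crude penalty for harming people changes the answer.

CANDIDATE_PLANS = {
    # plan name: (fraction of humans left alive, remaining infection rate)
    "vaccinate":       (1.0, 0.01),
    "quarantine":      (1.0, 0.05),
    "eliminate_hosts": (0.0, 0.10),
}

def disease_cases(population, plan):
    survivors, infection_rate = CANDIDATE_PLANS[plan]
    return population * survivors * infection_rate

def naive_planner(population):
    # Minimize the proxy objective (cases); nothing here values human survival.
    return min(CANDIDATE_PLANS, key=lambda p: disease_cases(population, p))

def safer_planner(population):
    # Crude safeguard: heavily penalize any loss of human life.
    def score(plan):
        survivors, _ = CANDIDATE_PLANS[plan]
        return disease_cases(population, plan) + (1.0 - survivors) * 1e12
    return min(CANDIDATE_PLANS, key=score)

print(naive_planner(7_000_000_000))  # -> "eliminate_hosts": zero cases, zero humans
print(safer_planner(7_000_000_000))  # -> "vaccinate"

The real problem is vastly harder, of course; the toy only shows how an objective that omits what we actually care about can be satisfied in ways we never intended.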

And we couldn’t possibly anticipate and counter her methods. She would be so much more intelligent than us, and would have gained superpowers like controlling our systems and minds in ways we couldn’t secure against. So once we screw up the security and the containment, accelerating returns could work against us very quickly.

Security protocols are not easy, nor are they what Asimov defined: it is not as simple as “just don’t kill any humans”. Some, actually most, AI would be used by the military, and if that narrow intelligence evolves into artificial super-intelligence, the no-killing rule won’t be part of the manual, so to speak. Also, sometimes we choose to selectively kill to reach our overall goals; think of a self-driving car’s AI deciding whether to kill one or many in an unavoidable accident.

In any case, we couldn’t possibly dictate how AI gets used across different projects. Nor could we dictate a moral code: our morals change with time; just look at what was considered moral a thousand years, a century, or a decade ago.

Security protocols for super-intelligence are a tough problem, but the upside is that if we get them right, we could be looking at immortality.

Achievement unlocked: went outside.


Gurlivleen Grewal
Signal, Noise & Startups

Trying to get behind the wheel. Entrepreneur. Design, AI, movies, electro-house enthusiast. Co-founder DoctorSpring.com.