Don’t Fear the Super AI

Eric Lammertsma
Published in Pixplicity
9 min read · Aug 28, 2020


Let me put it this way: I’m not worried about invisible people teleporting into my bedroom, so you shouldn’t be worried about an all-powerful Super AI.

This topic is a strange one, as the reason to write about it is to tell you that it doesn’t merit writing about. The discussion about Super AI is about as important as discussions like “what should we do when teleportation is readily available to everyone?” or “how can we pair the right to privacy with the right to become invisible?” As a matter of fact, talking about the mere concept of a Super AI is more harmful than you might think, because it gives the impression that “AI” already has a solid foundation and that the risks are looming large.

The idea of a Super AI is a fantasy extrapolated from science by Hollywood, in the same way that science fiction about teleportation and invisibility is conjured out of tidbits of scientific fact. These form fantastical problems for which no real solutions exist, because their reality is so far removed from where we are today — exacerbated by the fact that those who imagine and write about the horrors that await rarely truly understand how these sciences work. These kinds of extrapolations make for great Michael Crichton books. The fictional, stretched-to-the-limit end result — dino DNA to prehistoric behemoths romping around our cities — is simply much easier to envision than the vastly complex and limiting real science underlying it. The T. rex, the Super AI, teleportation, and invisibility may all very well become reality one day, but we’re nowhere near them today. Not by any measure.

Should we be more afraid of AI or genetic research? I vote neither!

First, let’s define what a Super AI (also called a superintelligence or artificial superintelligence) is. We generally describe three types of AI: Narrow, General, and Super.

A Narrow AI is, as the name suggests, capable of performing a single task or a narrow set of tasks, such as detecting whether a photo subject has their eyes open before snapping the shot.

A General AI is, again as the name suggests, capable of performing more general tasks across a broad range of abilities. It could wake you up earlier in the morning, have your breakfast ready in time, and get you to work on time despite unusual traffic. It could also help you with your work, regardless of whether you’re a construction worker or a microbiologist. A General AI combines the abilities of many (or all) Narrow AIs.

Then comes the Super AI. This AI surpasses all human comprehension and intellectual ability and is essentially capable of handling anything and everything. Because of this, it would operate outside our boundaries and control, directing every aspect of our lives and its own existence. It would control every bit of the internet, every satellite, every vehicle — everything — placing us at its mercy. We would simply have to hope we developed it with intentions and goals that prevent it from harming us — or from stuffing us all into Matrix-style pods for our own protection.

Super AIs would theoretically even be more creative than us

By these definitions, what we have today are Narrow AIs, though I would argue that AI as a general concept hasn’t truly been developed yet. What would qualify? Google Assistant? Siri? While I’m a huge fan of them and they’re fantastic technical showcases of today’s capabilities, they’re laughably stupid from an “intelligence” perspective and do nothing even comparable to thinking. The word “intelligence” makes these machine learning showcases sound like they have some understanding of what they’re doing — but they most certainly do not. They just relentlessly try, with middling accuracy, to recognize bits and pieces of language in what you’re asking and look through an inherently limited number of data sources to produce a rudimentarily coded response about the weather or your calendar, or, lacking a proper source, a generic web search. They have an extremely limited number of very specifically pre-programmed functions, and their greatest difficulty (and achievement) is discerning which one you want. If you ask “What’s the temperature?”, did you want the current temperature or the definition of temperature? From weather.com or your thermostat? They don’t understand anything. The AI has no idea, so developers manually code in assumptions to help it along. These assistants are simply tuned to trigger on certain keywords and phrases that tell them which lever to pull — in this case, the weather.com lever — all programmed by hand to deliver the right number to your device to speak out loud.

Today’s assistants strive to be General AIs.
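
To make the lever-pulling concrete, here is a minimal sketch in Python of that kind of keyword-triggered dispatch. The intents and helpers (get_weather, get_calendar, web_search) are invented for illustration; real assistants use far more sophisticated intent classifiers, but the principle is the same: match a pattern, pull the corresponding pre-programmed lever.

```python
# A toy, hand-wired "assistant": keyword triggers mapped to canned handlers.
# All intents and handlers here are invented for illustration.

def get_weather():
    # A real assistant would call a weather API here; we fake the answer.
    return "It's 18 degrees and cloudy."

def get_calendar():
    return "Your next meeting is at 2 PM."

def web_search(query):
    return f"Here's what I found on the web for '{query}'."

# Each "lever": a list of trigger keywords and the function to call.
INTENTS = [
    (["temperature", "weather", "rain"], lambda q: get_weather()),
    (["calendar", "meeting", "appointment"], lambda q: get_calendar()),
]

def answer(query):
    words = query.lower()
    for keywords, handler in INTENTS:
        if any(k in words for k in keywords):
            return handler(query)   # pull the matching lever
    return web_search(query)        # no lever matched: generic fallback

print(answer("What's the temperature?"))     # -> weather lever
print(answer("Do I have a meeting today?"))  # -> calendar lever
print(answer("Who wrote Hamlet?"))           # -> web search fallback
```

Nothing in there understands weather or meetings; it only checks which hand-picked words appear in your sentence.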

The more recent and impressively fancy GPT-3 might make you think we’re further along than that, but that too is misleading. GPT-3 is truly mesmerizing in what it’s capable of: taking simple prompts and generating entire works of poetry or prose that are mostly coherent and could in some cases even be mistaken for human writing. It’s a great advance in machine learning. The caveat is that it’s not doing anything we couldn’t already do before. The biggest differentiator is that the company behind it, OpenAI, simply threw ten times more data and money at it than had ever been done before (reportedly on the order of tens of millions of dollars just to train it), so it does its work more accurately. Its function remains simple: given a word, it predicts what the next word will be, taking the previous context into account. It’s a wonderful word predictor that has learned the most probable order of words from millions of sources. While it’s coherently gluing words together, it has no clue what it’s writing about, but to us, it looks like it has become an expert on every topic.

Does constructing sentences equal intelligence?
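
A toy version of that idea, stripped of the billions of parameters, looks something like the sketch below: it counts which word most often follows which in a small training text (invented here for illustration), then “writes” by repeatedly predicting a likely next word. GPT-3’s architecture is vastly more sophisticated, but the task it is trained on is essentially this one.

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in some text,
# then generate by repeatedly sampling a probable successor.
text = (
    "the cat sat on the mat and the cat saw the dog and "
    "the dog sat on the mat and the dog saw the cat"
)

successors = defaultdict(Counter)
words = text.split()
for current, following in zip(words, words[1:]):
    successors[current][following] += 1

def next_word(word):
    counts = successors.get(word)
    if not counts:
        return random.choice(words)  # unseen word: fall back to anything
    # Sample proportionally to how often each successor was observed.
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

# "Write" a sentence: no understanding, just probable word order.
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output is grammatical-looking glue with zero comprehension behind it, which is the GPT-3 criticism in miniature.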

What we have today are not AIs. They are implementations of machine learning, and all machine learning does is take a big, lumbering algorithm and slowly, painstakingly tweak millions of values and parameters until you’re satisfied that when you show it a cat, it says it’s a cat 95% of the time. You could probably get it to the aspirational “five 9s” of accuracy (99.999%), but that would take a hell of a lot of photos — and outside of academia you probably don’t need to recognize cats that urgently or precisely. Or you can make it more complex and have it crash a car around a virtual street tens of millions of times, scolding “NO!” and praising “that’s better” until it finally understands how to use its sensors and the steering wheel to parallel park without hitting anything 99.999% of the time.
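
For the curious, that painstaking tweaking is, at its core, just a loop like the minimal sketch below: a tiny logistic-regression-style “cat detector” on made-up data, in plain Python. Nudge the numbers, measure the error, repeat until the error is small enough.

```python
import math
import random

# Made-up "cat detector" on two fake features (say, ear pointiness and
# whisker count, scaled 0..1). Label 1 = cat, 0 = not a cat.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w1, w2, b = random.random(), random.random(), 0.0  # the "values" to tweak

def predict(x1, x2):
    # Squash a weighted sum into a 0..1 "confidence it's a cat".
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

lr = 0.5  # how hard to nudge the parameters per mistake
for step in range(1000):
    for (x1, x2), label in data:
        p = predict(x1, x2)
        error = p - label        # positive when too confident, negative when not
        w1 -= lr * error * x1    # nudge each parameter to shrink the error
        w2 -= lr * error * x2
        b  -= lr * error

print(f"cat-ish photo:     {predict(0.85, 0.85):.3f}")  # near 1.0
print(f"not-cat-ish photo: {predict(0.15, 0.15):.3f}")  # near 0.0
```

Real systems tweak millions of parameters instead of three, but the loop is the same: adjust, score, adjust again.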

These systems do not understand anything — even what they’re doing. They make no actual decisions. They take a predefined input and, after enormous amounts of trial-and-error-style learning with punishment and rewards through that lumbering algorithm, formulate a predefined output to some degree of certainty. The trial-and-error prerequisite means that any task must have clearly defined, measurable right and wrong outcomes, or some degree in between.
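
That punishment-and-reward loop is what the reinforcement learning family formalizes. Below is a minimal sketch: a toy Q-learning agent in an invented five-cell corridor where only the rightmost cell pays out. The agent learns which action scores best in each state purely from rewards, with no notion of what a corridor is.

```python
import random

# A five-cell corridor; the agent starts at 0 and only cell 4 pays a reward.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

# Q-table: a score for every (state, action) pair, tweaked from experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly act on the best known score, sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0  # the "that's better!"
        # Nudge this state-action score toward reward + best future score.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned "policy" just says "step right" everywhere that matters.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

Note what made this possible: a clearly defined, measurable right outcome (reach cell 4) that could be tried millions of times. Take that away and the whole approach collapses.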

A “Super AI,” on the other hand, would need to understand things and make decisions for which no prior learning is possible. What is the outcome of a war between Mexico and Canada? Who knows? Maybe China wins it. There’s no history to base decisions on and no experience to learn from, let alone millions of examples of war to churn through and study (fortunately!). You can’t “play” it against itself, Hollywood “WarGames” style, either, because that would mean giving it clear rules where there are none and providing a plethora of information that you don’t have. (By the way, what a beautiful bit of foresight WarGames is, essentially showcasing modern self-play machine learning…)

The WarGames AI calculated that there was no way to win a nuclear war. Whew!

“AI” players in games can only form seemingly ingenious strategies by analyzing millions of games and endlessly playing against themselves in settings with very tight sets of rules. These strategies aren’t ingenious at all. A singular, brilliant move you might admire was simply discovered as viable when it played itself at game #2,003,509 and was “memorized” in that lumbering algorithm through a little thumbs-up on the pathway that got it there — but only after over two million failures to discover it. Furthermore, if there’s a mistake in the game or a rule is missing, that flaw will become an integral part of the “AI’s” strategy, because it doesn’t know any better. It will mercilessly cheat in our eyes because we never truly gave it the boundaries we play by. It just bluntly hammers away at all the (largely silly) options until it randomly lands on one that works. It boils down to automated cherry-picking of results: after millions of ridiculous failures, it may finally show you one that looks genius — while it most certainly is not.

OpenAI Five defeating professional players in Dota 2. Credit: The Verge

It’s the monkey typing Shakespeare — you know the saying — and that’s actually very close to what machine learning is, with the exception that you give the monkey a banana every time it gets a little closer. Typing a full word, putting two words together, and eventually compiling a whole sentence or verse earns a reward. After 10 million years of typing and finally hammering out one of Shakespeare’s works, I’m not going to laud that monkey as hyper-intelligent or as understanding Shakespeare’s basic themes, and I’m certainly not afraid of that monkey grabbing a gun and bringing about the Planet of the Apes. It just knows the correct sequence of key presses that result in bananas, and a machine learning system just tweaks an algorithm toward its version of a banana: the lowest error.

A monkey getting started on his million-year task. Credit: New York Zoological Society
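
This reward-the-monkey process has a classic, runnable illustration, often called a “weasel program.” The target phrase and scoring below are my own invented setup: random typing plus a banana for every improvement converges on Shakespeare astonishingly fast, with zero understanding involved.

```python
import random
import string

TARGET = "to be or not to be"          # the Shakespeare the monkey must type
ALPHABET = string.ascii_lowercase + " "

def bananas(attempt):
    # The "reward": one banana per character that matches the target.
    return sum(a == t for a, t in zip(attempt, TARGET))

# Start with pure random key-mashing.
attempt = [random.choice(ALPHABET) for _ in range(len(TARGET))]
generation = 0
while bananas(attempt) < len(TARGET):
    generation += 1
    # Mutate one random keystroke; keep it only if it earns at least as many bananas.
    candidate = attempt[:]
    candidate[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    if bananas(candidate) >= bananas(attempt):
        attempt = candidate

print(f"Typed '{''.join(attempt)}' after {generation} mutations.")
```

Run it and it typically finishes within a few thousand mutations; without the banana-per-improvement reward, pure random typing of an 18-character phrase would need on the order of 27^18 attempts, which at one attempt per second is far longer than the age of the universe. The reward signal, not intelligence, does all the work.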

We can get our machine learning systems to recognize cats (or, more relevantly, tumors, cars, and pedestrians), to beat people in games, and to set reminders, but anything past simple automation and pattern recognition is pure science fiction. Advances are certainly being made, and the advent of quantum computing will help propel machine learning to tackle immensely larger problems with more data, but right now it’s far more useful to consider how to make current machine learning approaches better and more useful, rather than worrying about some fictional all-in-one cat-spotting AI becoming too powerful.

The only realistic worry about AI, if you’re in the business of worrying, is the polar opposite of “Super AI.” Instead of considering the threat of AI intentionally harming people, worry about people harming people by overestimating AI, as Super AI alarmists do. Putting too much faith in what machine learning is capable of puts people’s lives at risk by inevitably placing a big, lumbering algorithm where there absolutely shouldn’t be one. Current machine learning is certainly capable of operating a drawbridge in clear weather, but we’re nowhere near trusting it with automated open-heart surgery. That is how AI will most assuredly, and in the much nearer future, cause harm and cost lives.

An AI in charge of a deadly missile system

But I’m not in the business of worrying. I’m a dreamer and a futurist. Machine learning is a tool like a pen, a power drill, or a 3D printer, and it can be wielded for great good. Sure, you can 3D print a gun, but you can also 3D print millions of ear-saving face-mask clips for healthcare workers. So, despite what it may sound like, I’m an enormous fan of machine learning and all its potential. I’m simply telling you not to be worried about it. As a matter of fact, I work at Pixplicity, where machine learning projects are part of daily life: we develop smart speaker systems and numerous applications that can, for example, change the time of day in photos, create 3D models from 2D images, or generate new speech in a given speaker’s voice. We’re in the early stages of AI, and already the possibilities are endless. There is so much potential in the current capabilities that it’s overwhelming and difficult to choose a direction, because they’re all so exciting. In the words of one of my favorite machine learning YouTube channels, Two Minute Papers: what a time to be alive!

