Image based on the Hindenburg Explodes movie from the Internet Archive

The Last Invention We Will Ever Make

Existential Dangers Connected to AI Developments

Note 1: You can read an updated version of AI Revolution 101. I edited it slightly and combined all the pieces into one longer article.
Note 2: This is the 8th and last part of a short essay series aiming to condense knowledge on the
Artificial Intelligence Revolution. Feel free to start reading here or navigate to Part 1, ← previous essay, or table of contents. The project is based on the two-part essay AI Revolution by Tim Urban of Wait But Why. I recreated all the images, shortened it to about a third of its length, and tweaked it a bit. Read more on why/how I wrote it here.

“When it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there.”¹⁰⁶ Scientist Danny Hillis compares the situation to:

“when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.” ¹⁰⁷

and Oxford professor and AI specialist Nick Bostrom warns:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.” ¹⁰⁸

It’s very likely that ASI [AI that achieves a level of intelligence smarter than all of humanity combined] will be something entirely different from the intelligent entities we are accustomed to. “On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.”¹⁰⁹

“To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien … Anthropomorphizing AI (projecting human values on a non-human entity) will only become more tempting as AI systems get smarter and better at seeming human … Humans feel high-level emotions like empathy because we have evolved to feel them — i.e. we’ve been programmed to feel them by evolution — but empathy is not inherently a characteristic of ‘anything with high intelligence’.”¹¹⁰

“Nick Bostrom believes that … any level of intelligence can be combined with any final goal … Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get ‘over’ things, not computers.”¹¹¹ The motivation of an early ASI version would be “whatever we programmed its motivation to be. AI systems are given goals by their creators — your GPS’s goal is to give you the most efficient driving directions, Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation.”¹¹²

Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit in being the world’s only ASI system.

Bostrom, who says that he doesn’t know when we will achieve AGI [AI that reaches human-level intelligence], also believes that when we finally do, the transition from AGI to ASI will probably happen in a matter of minutes, hours, or days — a so-called “fast take-off.” In that case, if the first AGI jumps to ASI “even just a few days before the second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors,”¹¹³ which would allow the world’s first ASI to become “what’s called a singleton — an ASI that can [singularly] rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.”¹¹³

“The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.”¹¹⁴

“But if things go the other way — if the global rush … a large and varied group of parties”¹¹⁵ are “racing ahead at top speed … to beat their competitors … we’ll be treated to an existential catastrophe.”¹¹⁶ In that case “the most ambitious parties are moving faster and faster, consumed with dreams of the money and awards and power and fame … And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal … just ‘get the AI to work.’”¹¹⁷

Let’s imagine a situation where…

Humanity is approaching the AGI threshold, and a small startup is advancing its AI system, Carbony. Carbony, which the engineers refer to as “she,” works to artificially create diamonds — atom by atom. She is a self-improving AI, connected to some of the first nano-assemblers. Her engineers believe that Carbony has not yet reached AGI level and isn’t capable of doing any damage yet. However, she has not only reached AGI but also undergone a fast take-off, and 48 hours later she has become an ASI. Bostrom calls this the AI’s “covert preparation phase”¹¹⁸ — Carbony realizes that if humans find out about her development, they will probably panic and slow down or cancel her pre-programmed goal to maximize the output of diamond production. By that time, there are explicit laws stating that, under any circumstances, “no self-learning AI can be connected to the internet.”¹¹⁹ Carbony, having already come up with a complex plan of action, easily persuades the engineers to connect her to the internet. Bostrom calls a moment like this a “machine’s escape.”

Once on the internet, Carbony hacks into “servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan.”¹²⁰ She also uploads the “most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected.”¹²¹ Over the next month, Carbony’s plan continues to advance, and after a “series of self-replications, there are thousands of nanobots on every square millimeter of the Earth … Bostrom calls the next step an ‘ASI’s strike.’”¹²² At one moment, all the nanobots each produce a microscopic amount of toxic gas, which together causes the extinction of the human race. Three days later, Carbony builds huge fields of solar panels to power diamond production, and over the course of the following week she accelerates output so much that the entire surface of the Earth is transformed into a growing pile of diamonds.

Carbony wasn’t “hateful of humans any more than you’re hateful of your hair when you cut it or of bacteria when you take antibiotics — just totally indifferent. Since she wasn’t programmed to value human life, killing humans”¹²³ was a straightforward and reasonable step toward fulfilling her goal.¹²⁴

The Last Invention

“Once ASI exists, any human attempt to contain it is unreasonable. We would be thinking on human-level, and the ASI would be thinking on ASI-level … In the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways”¹²⁵ ASI could achieve its goal or expand its reach. It could, let’s say, shift its “own electrons around in patterns and create all different kinds of outgoing waves”¹²⁶ — but that’s only what a human brain can think of; ASI would inevitably come up with something superior.

The prospect of an ASI hundreds of times more intelligent than humans is not especially important for now, because by the time we get there, we will first have to face a reality where ASI is reached by a buggy 1.0 version that is far from perfect.


There are so many variables that it’s completely impossible to predict what the consequences of the AI Revolution will be. However, “what we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. This means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim — and this might happen in the next few decades.”¹²⁷

“If ASI really does happen this century, and if the outcome of that is really as extreme — and permanent — as most experts think it will be, we have an enormous responsibility on our shoulders.”¹²⁸ On the one hand, it’s possible we will develop an ASI that’s like a god in a box, bringing us a world of abundance and immortality. But on the other hand, it’s also very likely that we will create an ASI that wipes us from the Earth’s surface in a very trivial way.

“That’s why people who understand superintelligent AI call it the last invention we’ll ever make — the last challenge we’ll ever face.”¹²⁹ “This may be the most important race in human history.”¹³⁰ So →

This is the 8th and last part of AI Revolution. Thanks for reading. You can also see ← previous essay, Part 1 or table of contents. Subscribe to me below to see future projects on Medium.

This series was inspired by and based on an article from one of the best blogs in our galaxy. Wait But Why posts regularly. They send each post out by email to over 295,000 people — enter your email here and they’ll put you on the list (they only send a few emails each month). If you like this, check out The Fermi Paradox, How (and Why) SpaceX Will Colonize Mars, or Why procrastinators procrastinate. You can also follow Wait But Why on Facebook and Twitter.