The case for intelligence explosion, without science fiction

This post is a reaction (but not a direct answer) to François Chollet’s The impossibility of intelligence explosion. You may want to read it before this one, as I reuse a fair number of his arguments. Also discussed on Hacker News.

The topic of “artificial intelligence with superhuman capabilities” is polarizing, to say the least. It has been discussed numerous times, The Myth of a Superhuman AI and Superintelligence: The Idea That Eats Smart People being two notable examples.

I disagree with Chollet’s take. I think an intelligence explosion is possible.

I won’t link to any essay arguing for the possibility of intelligence explosion. The topic is, sadly, a minefield. Discussing the consequences and potential dangers of such a scenario can, and often will, quickly derail into wild speculation. To quote Chollet:

This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation.

I agree. AI hype is already bad enough without adding a layer of fear-mongering on top of it. Here, I too will ground my reasoning in concrete observations.

What intelligence does

There is, as Chollet says, no fully “general” intelligence. You cannot build a universal problem solver. A given intelligent system will solve a specific problem, in a specific context. For instance, humans can make a sandwich to solve the problem of eating tasty food. Making sandwiches is a typically human task; humans use their intelligence to do it, and you can’t dismiss the intelligence of an octopus just because it is helpless with sliced bread.

Conversely, you can’t say we aren’t intelligent because we couldn’t move our tentacles properly, if we had any. Our brains just aren’t fit for the task.

We measure our intelligence by the difficulty and variety of the problems we can solve. We can’t put intelligence on a single scale because problems come in all shapes and sizes. Comparing playing basketball with playing chess is comparing apples to oranges.

You can compare the intelligence of two humans as long as the problems they solve overlap enough for the comparison to make sense. For instance, I’m smarter than my past self of ten years ago. My programming skills are better, my English is better, I have a better grasp of my emotional state, etc.

(Side note: factoring out physical prowess, we could narrow the role of intelligence down to the quality of the decisions we make, a purely mental process, but that is irrelevant to the current debate.)

I can’t compare myself to a dolphin, or an ant. My brain may be more complex than an ant’s nervous system, but I don’t get to pick the context that will make me shine.

In particular, I can’t compare myself to a machine. Of course I can walk and talk and my oven cannot, but it has a superhuman thermostat to regulate its temperature.

I realize this sounds very silly. The point I’m hammering home is that we should not focus on any supposed race between human and machine. There is a distinction between being able to perform a specific task better (in a given environment) and being able to perform a wider range of tasks.

Which leads us to the part where we get smarter.

Improvement and its bounds

A machine learning model gets much better at a specific task during its training. It gets arguably smarter in the process, and then the improvement stops. There are limits to the generalization power of a model.
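
To make that plateau concrete, here is a toy sketch (a made-up linear-regression example, not any particular system): the model improves rapidly at its single task, then the loss flattens out near the noise floor, and no amount of further training makes it better at anything else.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = 2x + 1 from noisy samples with a linear model.
X = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * X + 1.0 + rng.normal(0.0, 0.3, size=200)

w, b, lr = 0.0, 0.0, 0.1
for step in range(1, 501):
    err = (w * X + b) - y
    loss = float(np.mean(err ** 2))
    if step in (1, 10, 100, 500):
        print(f"step {step:3d}  loss {loss:.4f}")
    # Plain gradient descent on the mean squared error.
    w -= lr * float(np.mean(2.0 * err * X))
    b -= lr * float(np.mean(2.0 * err))

# The loss drops fast, then stalls around the irreducible noise (~0.09):
# the model has learned all it can about this one task, and that's it.
```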

We can argue that a human gets much better at the specific task of being human over the course of their life, even if this task involves a very diverse array of subtasks. We extend our own capabilities by building tools, and indeed we owe our most impressive achievements to the use of technology. Humans without their fancy toys aren’t helpless, but they are definitely less capable.

We don’t know where the limits of our own power lie. All we know are upper bounds: the laws of physics and our biology (to the extent we don’t tinker with it too much). That leaves ample room for progress: becoming more efficient as humans, and further expanding the range of tasks under the “being human” umbrella.

However, we aren’t interested in how high we can go, but in how fast we can get there. The training speed of a given machine learning model is capped by hardware, by how well we can parallelize operations, etc. Usually, when hitting diminishing returns, we replace the model with another one (e.g. the successive versions of AlphaGo, or the progress of image recognition systems).

In other words, the overall capacity of machines for improvement is currently capped by the laws of physics and by humanity’s ability to come up with better hardware and software.

How fast does humanity learn? Technology progresses very gradually (Chollet observes a linear rate of progress). Some advances have noticeably more impact than others (like the Industrial Revolution), and some enable our progress to speed up a little (like better communication between scientists, or people having more time to devote to innovation instead of survival). Humanity will, unless otherwise hindered, achieve unimaginable wonders 10,000 years from now.

Yet we neither observe nor expect human innovation to ever explode, with inventions being churned out of labs at an absurdly fast pace. The reason is that, much like machine progress is limited by humans, human progress is limited by… humans.

Our civilization makes progress by aggregating contributions from individuals, communicating mostly through sight and hearing. We lose skilled agents through death, and we are incapable of directly improving our brains (which hold most of our reasoning and tool-wielding capabilities).

I’m not arguing for enhancing human brains, fighting death, etc. here. I promised no science fiction. My point is that there is a strong case against humans quickly and recursively improving themselves. We do progress faster than our ancestors did, mostly because we spend fewer resources on mere survival. But we’re hitting diminishing returns on this, and it won’t explode.

My other point is that these arguments don’t apply to artificial intelligence.

A qualitatively different recursion level

Humanity stands on the shoulders of two giants: our evolutionary history, and our civilizational history.

Evolution works through cascading improvements. Generation after generation adapts to its environment, each slowly but steadily better than the last, piling on previous successes. Useful information is transferred mostly through genetic material. The formula hasn’t changed much for eons, even with a few innovations along the way (e.g. DNA itself). No sign of a biological explosion.

Civilization also involves cascading improvements. Through culture, records, and technology, each generation solves more tasks than its predecessor, with capability rising steadily. The formula hasn’t changed much for millennia, even with a few innovations along the way (e.g. the agricultural and industrial revolutions). No sign of a human progress explosion.

Both optimization processes enable steady growth, with known bottlenecks. Yet, upon comparison, humanity’s impact far, far outshines whatever the evolutionary process came up with. We didn’t even need to understand biology’s solutions to our problems: we invented planes before figuring out how to reproduce birds’ flight.

Civilization doesn’t bypass biology. Humans do not (yet) exploit and tinker with their own DNA. However, the ladder of progress we’re climbing as a civilization is qualitatively different from the one the rest of the biosphere is on.

In other words, if the universe gives us a task, sometimes humans are able to say “we don’t need to grow limbs for this, we’ll find our own way”.

I argue that the same goes for artificial intelligence, relative to humanity.

In another essay, Chollet outlines his long-term vision for machine learning. He describes (warning: over-compressed summary) models that draw from libraries of high-performing sub-models, crafted automatically by a meta-learning system, and that achieve the ability to generalize to new tasks by reusing previously learned techniques.

Such a learning system is still bound by its environment. Its growth will encounter bottlenecks. It will collect data as needed and experiment on its own; humans are not needed for that. We are the original designers, and we provide the computation power, but the learning process itself isn’t bounded by our own.

This means a sufficiently autonomous learning system has its own progress ladder. The factors limiting our progress are mostly specific to our human standpoint. The design of an artificial system entirely disregards our own biology, the speed of our communication, the process by which our brains accumulate knowledge, etc. It may climb its ladder much more slowly than we do, or much faster.

Furthermore, a software system capable of introspection would have the opportunity to recursively self-improve, to enhance its own optimization process. This would result in a faster climbing rate overall, though the magnitude of the improvement is unknown.
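
To give a minimal and deliberately crude sense of what “enhancing its own optimization process” can mean in software, here is a toy sketch, a made-up example rather than a claim about how a real self-improving system would be built: an optimizer that, when its progress stalls, rewrites one of its own parameters (the step size) and carries on.

```python
def self_tuning_descent(loss_fn, x, lr=1.2, steps=50):
    """Minimize loss_fn, adjusting the optimizer's own step size when stuck."""
    prev = loss_fn(x)
    for _ in range(steps):
        eps = 1e-6
        # Finite-difference estimate of the gradient of the toy objective.
        grad = (loss_fn(x + eps) - loss_fn(x - eps)) / (2 * eps)
        x -= lr * grad
        cur = loss_fn(x)
        # The "self-improvement" step: if the last change made things worse
        # or barely helped, the procedure modifies itself (halves its step).
        if prev - cur < 1e-9:
            lr *= 0.5
        prev = cur
    return x, lr

# The initial step size is too large and the first iterations overshoot;
# the procedure then fixes itself and converges to the minimum at x = 3.
best_x, final_lr = self_tuning_descent(lambda x: (x - 3.0) ** 2, x=0.0)
print(best_x, final_lr)
```

Even this toy loop runs into its own limits (here, numerical precision and the shape of the objective), so “self-improving” does not mean “unbounded”.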

Conclusion, explosion

An autonomously learning system, unbounded by the usual restraints of human progress, though still limited by system bottlenecks and diminishing returns, may achieve a linear (or sigmoidal) growth rate fast enough to be explosive in the same way human civilization exploded relative to all other known life forms.
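
To make the “sigmoidal yet explosive” point concrete, one standard way to model bounded growth is the logistic curve (a generic textbook formula, not a model of any specific system):

$$ c(t) = \frac{C_{\max}}{1 + e^{-k\,(t - t_0)}} $$

where $C_{\max}$ is the capability ceiling imposed by physics and available resources, $k$ the growth rate, and $t_0$ the midpoint of the curve. Two such curves both saturate eventually, but if one has a rate constant $k$ orders of magnitude larger than the other, its rise looks like an explosion from the slower one’s standpoint, which is exactly how civilization’s progress looks from the standpoint of biological evolution.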

That’s my argument for the possibility of intelligence explosion. It is not the “unlimited super-quick progress” singularitarian view. It does not require science fiction. Most importantly, it does not speculate on the consequences of it happening.

“Explosion” is a bad term. It evokes a runaway nuclear reaction. An out-of-control monster. It’s scary. The ideal hype fuel. My argument fits in a single paragraph, but without proper setup and whole sections of context, it would have been just another baseless claim on the Internet.

I kept the explosion metaphor because the term remains a convenient pointer to the scenario I outlined above. There is no indication about when it could happen, nor about the specifics of how it would be implemented.

To restate my core claims: if we build a system able to abstract new tasks and to learn from the physical world without human input, then it will grow at a rate mostly uncorrelated with human progress, and possibly faster. I also posit that such a system can be built. It is not impossible.

What to do about it is an entirely other debate.


I want to thank François Chollet for his serious, non-derisive words. Writing this essay gave me the opportunity to clarify my ideas and learn a lot.

I also thank all the people who have thought about this before me, whose work I purposely didn’t link. They are referenced in the essays I mention at the top.

Extra thanks to the proofreaders of this essay. You’re awesome. Finally, thank you for reading!

Jérémy Perret (also on Twitter)