We Don’t Need Super Intelligence for AI to Be Super

In “The Impossibility of Intelligence Explosion”, François Chollet writes:

Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it. The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992… In this case, you may ask, isn’t civilization itself the runaway self-improving brain? Is our civilizational intelligence exploding? No. Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.

Using technological advancement as a proxy for “civilizational intelligence”: if the number of transistors is “exploding”, then our intelligence is exploding as well, because sustaining Moore’s law has become increasingly difficult, yet we have managed to do it. The amount of progress since 1750 is arguably equivalent to all the progress from 12,000 BC to 1750. Saying “our computers are only incrementally more useful” reminds me of Peter Thiel, in his famous “What Happened to the Future” manifesto, coining the line “We wanted flying cars, instead we got 140 characters”, while also, might I add, touting Facebook, a company he is an investor in, as an example of true technological progress. Take the smartphone: the advances in these devices, even since 2012, are far more than simply “incrementally useful”.

Chollet continues:

A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice.

Yes, they don’t in practice, because many people with high IQs, among other inhibitors, struggle with social interaction, something computers don’t need to worry about. Even if we could only replicate the intelligence of an average brain, that would be a huge breakthrough. Just look at all the things on your TODO list. What if you had zero distractions, were always motivated, and never had to sleep or eat? The amount you could get done would be amazing. And then, if we could duplicate that average brain as many times as we wanted, wouldn’t that be super?

Originally published on no gradient.