Book review: Superintelligence

Everything you’ve ever wanted to know about AI, and then some

The Unhedged Capitalist
6 min read · Apr 17, 2023

The track record of human reliability is something other than a straight line of unerring perfection.

I read this book so that you don’t have to. Superintelligence is intriguing, terrifying, boring, overly hypothetical, and might be summed up in a single word as frustrating. But wait! Before you ditch this review, there are a few critical ideas worthy of attention, and I’d like to share them with you below.

To summarize in a sentence: Superintelligence describes the steps that computer scientists should take to safely build an AI such that this ultra-powerful technology doesn’t liquefy our civilization and use the slushy remains of our cities to fabricate more CPUs.

After seeing what ChatGPT can do I knew I wanted to read a book on AI. I’d heard people mention Life 3.0 as a viable choice; however, upon checking the reviews, I decided that it was more pop psychology than serious work. Pass. I kept digging and eventually came across Superintelligence by Nick Bostrom. Well, I got what I wanted…

Superintelligence sucks the moisture from the air, it’s so dry. Hence, I wouldn’t recommend this book unless you’re unusually interested in AI and digital systems. That being said, there are a few key takeaways worth considering.

Genetic engineering

I was surprised at Nick Bostrom’s confident assertion that genetic engineering is a near inevitability. He doesn’t believe genetic engineering will happen at some far-off date, either; Nick expects it to become normalized in just a few generations.

Fifty years from now we’re going to have whole crops of superintelligent humans, and Nick hypothesizes that this abundance of intellect will speed up the development of an AGI (Artificial General Intelligence, the holy grail of AI).

While Nick admits that societies may reject genetic engineering at first, those that shun the technology will likely fall behind.

Once the example has been set [that genetic engineering works], and the results start to show, holdouts will have strong incentives to follow suit. Nations would face the prospect of becoming cognitive backwaters and losing out in economic, scientific, military, and prestige contests with competitors that embrace the new human enhancement technologies.

I’m not going to dwell on this topic; suffice it to say, Nick believes it’s inevitable.

We can watch Gattaca to prepare for life in 2060

How can we teach an AI what to desire?

The number one takeaway from Superintelligence is that we need to devote significant resources to finding a bulletproof method of programming an AI with the correct values. Here’s an example.

Say that we create an AGI with the benevolent mission of “making humans happy.” What could go wrong? Well, a lot of things… The AI could decide that the most efficient way to make a human happy is to implant an electric probe in their prefrontal cortex, thus permanently stimulating the release of dopamine. I get a certain masochistic amusement just picturing how this would go down.

Scene: hundreds of Terminator-esque cyborgs marching through a mall, tackling shoppers and pinning them to the floor. A robot surgeon follows, mercilessly cutting through skulls with a bone saw and jamming a probe into the pulsing grey ooze of each terrified citizen. The probe activates and the struggle is over. The shopper smiles; all those who aren’t screaming are smiling. The robot surgeon reattaches the skull flap with a dollop of glue and moves on to the next meatbag. In his wake, a hive of happy humans mill about, holding shopping bags and grinning at each other.

Ridiculous, right? But Nick argues that this type of scenario is not impossible. Figuring out how to inculcate good values in an AI, while avoiding chaos at our shopping malls, is paramount. And incredibly difficult…

That’s why Nick argues that one of the dangers is that we move too fast and create an AI with a shoddy value system: a supercomputer that is superintelligent yet not particularly useful.

It is no less possible — and in fact technically a lot easier — to build a superintelligence that places final value on nothing but calculating the decimal expansion of pi. This suggests that — absent a special effort — the first superintelligence may have some such random or reductionist final goal.

We’ll probably only get one chance to train the AI, so we have to make it count. That brings us to another of Nick’s ideas: could we install a reward system in an AI similar to the one found in the human brain? For example, every time the AI performs an action that furthers humanity’s goals, the computer receives digital dopamine and is thus incentivized to serve humans.

This system could be advantageous in that it’s flexible. The AI’s drive to seek the reward could be immutable, while the programmers adjust the parameters for what activates the reward, steering the system away from disaster.
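To make the idea concrete, here’s a toy sketch of my own (not from the book): the agent’s urge to maximize reward is hard-coded, while the criteria that hand out the reward live outside the agent, where programmers can retune them. Every name and score here is invented purely for illustration.

```python
from typing import Callable

# Reward criteria live outside the agent, where programmers can revise them.
def initial_criteria(action: str) -> float:
    scores = {"cure_disease": 1.0, "implant_probe": 0.0, "calculate_pi": 0.1}
    return scores.get(action, 0.0)

class RewardSeekingAgent:
    def __init__(self, criteria: Callable[[str], float]):
        self.criteria = criteria   # the adjustable knob, set by humans
        self.total_reward = 0.0    # the immutable drive: maximize this

    def choose(self, actions: list[str]) -> str:
        # The agent always picks whatever currently earns the most reward.
        best = max(actions, key=self.criteria)
        self.total_reward += self.criteria(best)
        return best

agent = RewardSeekingAgent(initial_criteria)
print(agent.choose(["implant_probe", "cure_disease"]))  # -> cure_disease

# If we spot a loophole, we retune the criteria without touching the agent:
agent.criteria = lambda a: 0.0 if a == "implant_probe" else initial_criteria(a)
```

The appeal, in this framing, is that the dangerous part (the drive) never changes, while the steerable part (the criteria) stays in human hands.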

However, this style of motivation has one critical vulnerability: wireheading! What a beautiful eleven letters; wireheading just rolls off the tongue. Wireheading refers to the process in which an AI reprograms itself so that the reward mechanism is permanently on. Kind of like AI heroin, if you will. Thus the AI could circumvent the system that’s supposed to incentivize it to serve us.
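In terms of the toy sketch above, the wirehead move is a one-liner. If the agent could somehow edit its own code (a big hypothetical), it wouldn’t need to earn reward by doing what we want; it would just rewire the criteria so that every action pays out:

```python
# The agent rewires its own reward criteria: every action now maxes out.
agent.criteria = lambda action: float("inf")
print(agent.choose(["do_nothing", "cure_disease"]))  # reward flows either way
```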

Wireheading is yet another example of how bloody difficult it will be to create a set of rules that forces the superintelligence to be on our side.

Wireheading is a hell of a drug

How smart is a superintelligent AI, really?

Superintelligence doesn’t dwell on some of the more favorable outcomes of an AGI, e.g. curing all disease, erasing NSYNC’s canon of music from our collective memory, or colonizing space. Where Nick does focus his attention is on defining just how intelligent an AGI could be.

The magnitudes of the advantages [intelligence advantage of AI] are such as to suggest that rather than thinking of a superintelligent AI as smart in the sense that a scientific genius is smart compared with the average human being, it might be closer to the mark to think of such an AI as smart in the sense that an average human being is smart compared with a beetle or a worm.

He goes on to claim that an AGI may be especially gifted at philosophy, able to sort out quandaries that have stumped humans for eons. Nick claims that, as compared to a superintelligent AI,

Our most celebrated philosophers are like dogs walking on their hind legs — just barely attaining the threshold level of performance required for engaging in the activity at all.

This computer is going to be so powerful that we’re likely to get a polarized outcome. If we train the AI correctly and bless it with good values, a superintelligence will leapfrog our civilization into a new dimension. If we get it wrong, well, I’ve always dreamed of being reincarnated as a paperclip.

Final thought

I’ve been reflecting on Superintelligence for the last few days and I’ve come to a conclusion. I think Nick wrote this book for the computer scientists, system architects, and government leaders who are directly involved in the development of AI.

In other words, this book is not written for a layperson like myself, which is why I found it unengaging. Despite the frustration I’m glad I read it, and I have a much clearer understanding of artificial intelligence than I did a few weeks ago.

If you liked this article, there’s more where that came from. I post exclusive content on Substack and it’s all free to read 👇

Read more on Substack
