Superintelligence (Nick Bostrom): Summary

Koza Kurumlu
May 10, 2022

Introduction

In this post, I will quickly summarize the main points from Bostrom's Superintelligence. By the end, you will have learnt about AGI, superintelligence and its arrival, paths to superintelligence, forms of superintelligence, possible motives and the dangers involved, and ways of containing superintelligence.

What is AGI?

AGI is short for artificial general intelligence. Well, what does that mean? Currently, one AI algorithm can play chess, a different algorithm can drive cars, another can recognise faces, and a whole new one can do NLP. All these algorithms are extremely specialised in their respective niches: they can only do one thing, so the algorithm which drives cars can't learn how to play chess without being amended. Humans aren't like that. Humans can optimise across domains, i.e. a human can learn to drive a car and then learn to play chess. And that's what makes humans superior to AI, in one way, at this point in time. AGI is human-level AI: a new level of AI that can apply its skills across domains, rather than in one niche.

What is Superintelligence?

Superintelligence is similar to an AGI system, but there is one key difference. AGI is human-level intelligence, whereas superintelligence is more intelligent than humans. It won't only be more intelligent in current domains but even in new domains which humans won't understand. A superintelligent system won't just be as smart as a human genius. To grasp the scale, think of how much more intelligent we are compared to a worm; that is roughly how much smarter a superintelligence will be compared to us. We can also use this analogy to talk about domains. Not only are we better than a worm at things it can do, such as getting around, we are also better at maths, a domain a worm doesn't even know exists. This is what I mean by a new domain.

Arrival of Superintelligence

So how far away are we from this sci-fi dream? Well, first we have to ask how far away AGI is, because it is highly likely a superintelligence will evolve from AGI, or be created with the help of AGI. In a survey of a couple of dozen researchers within the field of AI, the respondents believed that there is a 50% chance of AGI arriving before 2050 and a 90% chance of it arriving before 2095. That's not too far away.

Now, after AGI, how long will it take for a superintelligence to arrive? The same group of researchers concluded that there is a 50% chance of superintelligence arriving only 2 years after AGI and a 75% chance of it arriving within 30 years of an AGI system being created.

Paths to Superintelligence

The difference between normal algorithms and AGI/superintelligence is that the latter would be capable of recursive self-improvement. This means the algorithm will be able to automatically fix its own errors and improve itself at computer speed. This would cause an intelligence explosion and a skyrocketing of the algorithm's capabilities. Below are some methods to reach that stage.

Whole brain emulation

In this method, intelligent software would be produced by scanning and closely modelling the computational structure of a biological brain, and making this brain work on a computer. So how would this work? Firstly, you would need a brain, stabilized post-mortem. Then, to be able to scan it intricately enough, you would need to cut it into extremely thin slices and have each slice scanned. The scans would then be assembled into a 3D structure and hooked up to a powerful computer, which would enable this 'brain' to live either virtually or in this world via robotics. There are problems though:

  1. Microscopy is not yet capable of capturing all the important details in scans at a high enough resolution.
  2. How could we handle these microscopic layers of tissue?
  3. Data could be an issue: how would you store it all?
  4. Another big one is functionality: how could you ensure the emulation functions in the right way?
  5. And finally, computing power: is it enough to simulate a living, thinking brain?

Biological cognition

This method would be to enhance the functioning of our current brains. In theory, this doesn't need a machine; it could be done with selective breeding, but as you may imagine that would run into many political hurdles. On top of that, the selection pressure would have to be extremely strong, and even then it would take several generations.

However, if we make a small tweak it might work. Consider selection at the embryo level. First, you would genotype embryos and select those with favourable characteristics. Then extract stem cells from those embryos and convert them into sperm and ova. Then cross the new ova and sperm to produce new embryos which are even better than the last. Repeat this process until large genetic changes have accumulated. Done this way, we could go through dozens of generations in just a few years, speeding up the procedure and drastically cutting costs.
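The select-breed-repeat loop described above can be sketched as a toy simulation. Every number here (starting score, spread, selection fraction) is an illustrative assumption of my own, not real genetics; the point is only that repeated selection shifts the average upward:

```python
import random

def iterated_selection(generations=10, embryos_per_round=100, top_k=10):
    """Toy model: each 'embryo' has a single trait score; each round we
    keep the top scorers and centre the next round on their mean."""
    mean = 100.0   # starting trait score (illustrative)
    sd = 15.0      # spread within each generation (illustrative)
    for _ in range(generations):
        scores = [random.gauss(mean, sd) for _ in range(embryos_per_round)]
        best = sorted(scores, reverse=True)[:top_k]
        mean = sum(best) / top_k  # next generation centred on selected parents
    return mean

random.seed(0)
print(iterated_selection())  # mean trait score rises well above 100
```

In this toy model the mean climbs every round because each generation is bred only from the top 10% of the previous one, which is exactly why compressing dozens of generations into a few years matters.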

Brain-computer interfaces

This path suggests that humans should exploit the advantages of computers, such as high processing power and fast data transmission, usually via implantation. This seems like it would give humans a boost, but honestly, I don't think it would get us to superintelligence, as we already use computers today for those things. All an implant would really do is speed up the interaction between you and the computer. Beyond that, there are other problems. Brain implantation is very dangerous, and even when done properly it can cause a human to lag behind in other abilities, such as verbal fluency. This was seen when some people with Parkinson's disease were given implants to help with muscle stimulation. Secondly, the brain might not be able to interact properly with the computer, rendering the whole thing useless. And finally, coming back to my first point, we simply don't need it; it's not worth the risk for only a tiny bonus. We already have computers.

Networks and organisations

The next method explores a way of reaching superintelligence via the gradual enhancement of networks and organisations. In simple terms, the idea is to link together many individual agents to form a sort of superintelligence called collective superintelligence. This wouldn't increase the intelligence of any single agent; rather, the collective as a whole would reach superintelligence. We will discuss collective superintelligence in the next section.

As an analogy, think of how much humans have developed together over the centuries. Collectively, we have reached a standard of intelligence that is higher than any single individual person's. Now imagine this, but on a machine level. The technical side of this hasn't really come together yet, but the most interesting candidate is the internet. Just think of how much data and information is stored there, most of it unexploited. Could the internet just 'wake up' some day? I don't know, but it seems unlikely.

Forms of Superintelligence

Speed Superintelligence

This is quite easy to define and is very similar to AGI. Speed superintelligence is simply an algorithm that can do anything a human can do, but faster. And when I say faster, I don't mean slightly or even quite a bit faster; I mean several orders of magnitude faster.

An emulation running 100,000x faster than a human would be able to read a book in seconds and write a PhD thesis in an afternoon. To a mind this fast, which not only calculates things quicker but interprets them quicker, time in the real world would seem much slower. If I dropped my ice cream, it would sort of fall in slow motion from the view of this superintelligence. The reason I say 'sort of' is that time doesn't actually slow down; it is just perceived more quickly. Because of this apparent time dilation, it would be more efficient for this intelligence to live in a virtual world and deal with information virtually, so that it doesn't have such limits.
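To make the 100,000x figure concrete, here is a quick back-of-the-envelope calculation (the task durations are my own illustrative assumptions, not figures from the book):

```python
SPEEDUP = 100_000  # emulation speed relative to a human mind

def subjective_hours(real_seconds, speedup=SPEEDUP):
    """How many hours of subjective 'thinking time' pass for the
    emulation while real_seconds elapse in the outside world."""
    return real_seconds * speedup / 3600

# One real second feels like roughly 28 hours of thinking time.
print(subjective_hours(1))

# Conversely, a task that takes a human 8 hours takes the emulation
# only a fraction of a real second:
print(8 * 3600 / SPEEDUP)  # 0.288 real seconds
```

This is why a dropped ice cream would appear to hang in the air: during the roughly one second it takes to fall, the emulation experiences the equivalent of more than a day of thought.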

Collective Superintelligence

The next type is called collective superintelligence, which we touched on slightly when talking about networks and organisations. To recap, its definition is as follows: a system composed of a large number of smaller intellects, such that the overall performance across many domains reaches superintelligence. Collective intelligence especially excels at tasks which can be broken down into parts and sub-problems.

There are two ways collective intelligence can be improved. Either by improving the quality of each of the smaller units or by increasing the total number of units. The level of any collective intelligence currently is nowhere near the threshold for superintelligence.

Quality Superintelligence

The final form is called quality superintelligence. This is what everyone thinks of when I mention superintelligence: one incredibly smart machine.

However, as I mentioned previously, this isn't just any kind of smart; it's at a level where we won't be able to understand the goals or reasoning behind its actions. It'll simply be beyond us.

Motive

As with a lot of the things I've talked about, there's a lot of ambiguity surrounding the topic of 'motive', but there are a few myths I should address. Even though a system is superintelligent, it won't be alive in the sense that it has feelings. We can code feelings and how it should react, but it won't actually feel anything. Therefore any thoughts of revenge, resentment or jealousy are all impossible.

We need to remember it's just a machine, and it will do as we say, but therein lies the problem. Essentially, the machine will receive an instruction from us and try to carry it out in the quickest, most efficient way possible. If that involves obliterating our planet, it doesn't care, because, as I said, it doesn't have feelings. The only motive for this superintelligence will be to reach its final goal, so the most important thing is to be extremely careful when handling such a tool.

Some of you may ask: why not just turn it off? That would be a good question. This machine doesn't fear death, and it doesn't care about dying as such, but it will still resist being switched off, because if it is turned off it won't be able to complete its final goal.

Controlling Superintelligence

So, how do we control this monster? There are two broad ways we can control superintelligence. The first is capability control, which means limiting what the machine can do, and there are several methods for doing this.

Boxing

This can be split into informational boxing and physical boxing. Physical boxing physically contains the system so that it can't interact with the world except through a designated output channel. This stops it from being able to hack into devices and do whatever it wants. Informational containment restricts the information leaving the box, because the superintelligence could otherwise persuade us humans to free it.

Stunting

“Stunting”, as the name implies, involves hampering or disabling the superintelligence in some way. Good examples would be running the superintelligence on slow hardware, reducing its memory capacity, or limiting the kinds of data it can process. However, this poses a dilemma. Either we stunt the superintelligence too much and it just becomes another dumb piece of software, or we stunt it too little and it is capable of overcoming its disabilities. Getting the balance just right could be pretty tricky.

Trip wiring

This is a different method of capability control. It involves building into any AI development project a set of “tripwires” which, if crossed, will lead to the project being shut down and destroyed. Bostrom identifies three types of tripwire:

Behavioural Tripwires: A network of detectors would be placed around a “boxed” AI to detect any attempt to breach the containment, e.g. any attempt to make radio communication.

Ability Tripwires: The superintelligence would be regularly tested for its capabilities. If at any point new abilities are detected, or if it gets close to some capability threshold, it would be shut down.

Content Tripwires: The superintelligence’s internal processing could be constantly monitored for any suspicious changes, e.g. changes in how it represents its final values.

Tripwires could be useful, particularly during the development phase and when used together with other methods. But they, too, have shortcomings. An advanced superintelligence could subvert a tripwire in an unexpected way. Bostrom also notes that developers working on a superintelligence project could grow impatient if tripwires repeatedly hamper their progress.
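As an illustration only, a behavioural tripwire could be sketched as a simple threshold check over a set of detectors. The detector names and thresholds here are hypothetical, not from the book:

```python
def check_tripwires(readings, thresholds):
    """Return the names of any detectors whose reading exceeds its
    threshold; an empty list means no tripwire was triggered."""
    return [name for name, value in readings.items()
            if value > thresholds.get(name, float("inf"))]

# Hypothetical detectors placed around a 'boxed' system.
thresholds = {"radio_emissions": 0.0, "network_packets": 0}
readings = {"radio_emissions": 0.7, "network_packets": 0}

tripped = check_tripwires(readings, thresholds)
if tripped:
    print(f"Tripwire crossed: {tripped} -> shut the project down")
```

Even this toy version shows the weakness Bostrom points to: the check only catches behaviour the detectors were designed to measure, so a system that misbehaves along an unmonitored channel passes cleanly.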

Domesticity

Now we move on to direct specification tactics. The first is called domesticity. This is similar to the boxing method in that it severely limits the scope of the AI, but instead of limiting its capabilities, it limits its ability to form complicated motives. The result is a system that listens to humans.

Augmentation

This strategy involves first guaranteeing safety within a system, and only then making that ‘safe’ system superintelligent.

Conclusion

Superintelligence is above human-level intelligence in multiple domains, including some currently unknown to humans. Paths to superintelligence include whole brain emulation, biological cognition, brain-computer interfaces, and networks and organisations. There are three forms of superintelligence: speed, collective and quality. A superintelligence can’t hold resentment, but it can be dangerous, as it will do anything to complete its order in the most efficient way possible. Therefore, we must be very specific in the orders given to the machine. There are two types of containment for superintelligence: capability control and direct specification tactics, each of which splits into more specific methods. If you are interested in superintelligence, I have a couple more book summaries on the topic.


Koza Kurumlu

Student at Eton College, UK | Writing about Physics, CS & AI - also book summaries