The Future of AI: AGI & Superintelligence

Koza Kurumlu
8 min read · Nov 6, 2022

“The world will know what you want before you want it, and have it ready for you when you want it.”

Sounds like science fiction, but AI is bringing us closer to this utopia than we ever imagined. What does AI mean? I’m sure you know what it stands for, but what is AI, really? And why is it different from other algorithms?

An algorithm is a set of instructions: a coded recipe. The process might include some decision-making, such as: ‘If the cookie is a golden colour, take it out of the oven’. However, these instructions are hard-coded; there is no learning taking place. An AI algorithm, on the other hand, does learn. One common definition: AI is a collection of algorithms that can modify themselves and create new algorithms in response to learned inputs and data, rather than relying solely on the inputs they were designed to recognise via conditionals. That captures it well: it is the ability to change, adapt and grow based on new data which defines “intelligence.”
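To make the distinction concrete, here is a minimal sketch in Python. The cookie scenario, the numbers and the function names are all invented for illustration: a hard-coded rule whose behaviour never changes, next to a tiny rule that adjusts itself in response to the examples it sees.

```python
# A hard-coded rule: the threshold is fixed forever, no matter what data it sees.
def hard_coded_rule(colour_score: float) -> bool:
    """'If the cookie is a golden colour, take it out of the oven.'"""
    return colour_score >= 0.5

# A tiny "learning" rule: the threshold shifts whenever an example proves it wrong.
class LearnedCookieRule:
    def __init__(self, threshold: float = 0.5, step: float = 0.05):
        self.threshold = threshold
        self.step = step

    def predict(self, colour_score: float) -> bool:
        return colour_score >= self.threshold

    def learn(self, colour_score: float, actually_done: bool) -> None:
        guess = self.predict(colour_score)
        if guess and not actually_done:
            self.threshold += self.step  # called "done" too early: raise the bar
        elif not guess and actually_done:
            self.threshold -= self.step  # called "not done" too late: lower the bar

rule = LearnedCookieRule()
for colour, done in [(0.40, True), (0.42, True), (0.38, True), (0.30, False)]:
    rule.learn(colour, done)

print(rule.threshold)  # drifted down to roughly 0.35, adapted to the data it saw
```

The second rule is nowhere near “intelligent,” but it shows the essential difference: its behaviour after seeing data is no longer the behaviour it started with.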

Now, I want to introduce a new term: AGI, short for artificial general intelligence. What does AGI mean? Currently, one AI algorithm can play chess, a different algorithm can drive cars, another can recognise faces, and yet another can translate French into Russian. All of these algorithms are extremely specialised in their respective niches: they can only do one thing. The algorithm that drives cars can’t simply learn how to play chess without being amended. Humans, by contrast, can learn across different domains: a person can learn to drive a car and then learn to play chess. Getting back to artificial general intelligence, its aim is to get close to human intelligence. This implies that it will be able to learn a variety of things without being restricted to a single task. This is also known as human-level AI, and it is significantly different from anything we’ve experienced in the real world so far. The closest examples to AGI are personal AI assistants, like Alexa or Siri.

AGI will cause a massive step up in the world’s average quality of life, but it’s not the end. The ultimate endpoint would be superintelligent AI: a system that rapidly increases its own intelligence, quickly surpassing the cognitive capability of the average human being. It is a step up from an AGI, as it is no longer at the human level but inconceivably more intelligent. To grasp the scale, think of how much more intelligent we are than a worm; that is roughly how much smarter a superintelligence would be compared to us. This would be the pinnacle of all science, as in theory this superintelligent system would have all the answers for us.

So how far away are we from this sci-fi dream? Well, first we would have to estimate how far away AGI is, because it is highly likely that a superintelligence will evolve from AGI, or be created with the help of AGI. A survey of two dozen researchers in the field of AI suggests there is a 50% chance of AGI arriving before 2050 and a 90% chance of it arriving before 2095. That’s not too far away. After AGI, how long will it take for a superintelligence to arrive? The same group of researchers concluded that there is a 50% chance of superintelligence arriving only 2 years after AGI and a 75% chance of it arriving within 30 years of AGI.

Now let’s discuss some possible paths. To be clear, the intention is not to give you a blueprint for superintelligence; all we’re doing here is going through possible paths that could be taken:

  1. Whole brain emulation
  2. Biological cognition
  3. Brain-computer interfaces
  4. Networks and organizations

Whole brain emulation

In this method, intelligent software would be produced by scanning and closely modelling the computational structure of a biological brain, then making that brain function on computer hardware. So how would this work? First, the brain would need to be stabilised post-mortem and scanned in intricate detail. To scan it, you would need to cut it into extremely thin slices, scan each slice, and reconstruct the scans into a 3D structure. This structure would then be run on a powerful computer, enabling the ‘brain’ to live either virtually or in the physical world via robotics. There are problems with this method, though:

  1. Microscopy is not yet at a high enough standard to capture all the important details at sufficient resolution.
  2. Handling these microscopic layers of tissue is difficult.
  3. Storage and structuring of the data of this 3D model is complicated.
  4. How could it be ensured that it functions in the right way?
  5. Is there enough computing power to simulate a living, thinking brain?

These problems highlight how theory is developing much faster than hardware: we have solutions that at first seem ingenious, but that run into a lot of complications in the physical world.

Biological cognition

This method would enhance the intelligence of human beings themselves. In theory, superintelligence doesn’t even need a machine; it could be reached with selective breeding, but as you may imagine that would run into many moral and political hurdles.

However, with a small tweak it might work. Consider natural selection, but at the level of embryos and gametes. First, one would genotype embryos, then select those with favourable characteristics. After that, extract stem cells from the selected embryos and convert them into sperm and ova. Then cross the new ova and sperm to produce new embryos that are even better than the last. Repeat this process until large genetic changes accumulate. This loop can run through dozens of generations in just a few years, dramatically speeding up the procedure and cutting its cost, as the toy simulation below illustrates. With this method, evolution itself could be harnessed to reach superintelligence.
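Here is a rough, purely illustrative simulation of that iterated-selection loop in Python. Everything in it is made up for the sketch: the number of genes, the scoring rule and the selection strength. It is meant only to show how repeated rounds of select-and-recombine push a trait upwards, not to model real genetics.

```python
import random

def simulate_iterated_selection(generations: int = 10,
                                embryos_per_generation: int = 100,
                                genes: int = 50,
                                keep_top: int = 10) -> float:
    """Toy model: each 'embryo' is a list of 0/1 gene variants, and its 'score'
    is simply how many favourable variants it carries. Each generation we keep
    the best embryos and recombine them into the next batch."""
    # Start from a random founder population.
    population = [[random.randint(0, 1) for _ in range(genes)]
                  for _ in range(embryos_per_generation)]

    for _ in range(generations):
        # 1. "Genotype" the embryos and rank them by favourable variants.
        population.sort(key=sum, reverse=True)
        parents = population[:keep_top]

        # 2. "Cross" the selected embryos' gametes to produce a new batch.
        population = []
        for _ in range(embryos_per_generation):
            mum, dad = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(mum, dad)]
            population.append(child)

    # Average score of the final generation.
    return sum(map(sum, population)) / len(population)

print(simulate_iterated_selection())  # climbs well above the random baseline of ~25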

Brain-computer interfaces

This path suggests that humans should exploit the strengths of computers, such as high processing power and fast data transmission, usually by implanting a chip into a person’s brain. This method of implantation is being explored by Elon Musk’s Neuralink company. It sounds like it would give humans a boost, but in my view it is unlikely to lead to superintelligence, since humans already use computers as it is; all it would really do is speed up the interaction between the human and the computer. There are some other problems as well:

  1. Brain implantation is dangerous, and even when done properly it can cause a person to fall behind in other areas, such as speech. This was seen when some people with Parkinson’s disease were implanted with a chip to help with muscle stimulation.
  2. The brain might not be able to interact properly with the computer, rendering the whole process useless.
  3. Coming back to my first point: it is unnecessary. It’s not worth the risk for only a tiny bonus. We already use computers and the speed of interaction will not bridge the gap between current human intelligence and superintelligence.

Networks and organizations

The next method explores a way of reaching superintelligence via the gradual enhancement of networks and organisations. In simple terms, the idea is to link together many individual bots to form a sort of collective superintelligence. This wouldn’t increase the intelligence of any single bot; rather, the collective as a whole would reach superintelligence.

As an analogy, think of how much humans have developed together over the centuries. Collectively, we have reached a standard of intelligence higher than that of any single individual. Now imagine this, but at the machine level. The technical side of this hasn’t really come together yet, but the most obvious candidate example is the internet. Just think of how much data and information is stored there, most of it unexploited. Could the internet just ‘wake up’ someday? I am not sure, but sadly it’s unlikely.

Now I would like to address a few myths surrounding the ‘motive’ of a Terminator-like AI system. Even though a system may be superintelligent, it won’t be alive in the sense of having feelings. Therefore, thoughts of revenge, resentment or jealousy would not be possible. We need to remember it’s just a machine, and it will do as we say, but therein lies the problem. Essentially, the machine will receive an instruction from humans and try to find the quickest, most efficient way to carry it out. If that involves obliterating our planet, it would not care, because it doesn’t have feelings. The only motive of this superintelligence will be to reach its final goal, so we need to be careful when handling such a tool. When providing instructions or objectives, one must be extremely precise, and there must be ground rules set, including ones that cover moral issues.

Some of you may ask: why not just turn it off when it’s not behaving properly? That would be a good question. The machine would not care about ‘dying’ in any emotional sense, but if it is turned off it cannot complete its final goal. Since being switched off would hinder its objective, the superintelligence will take any precaution necessary to avoid being turned off.

But how can we control it then? There are two ways we can control a superintelligence. The first is ‘capability control’, which means limiting what the machine can do. There are different methods for this (a toy sketch of the idea follows the list):

  1. Boxing: Devising the system so that it can’t interact with the world except through a designated output channel. This would stop it from hacking into devices and doing whatever it wants.
  2. Stunting: Hampering or disabling the superintelligence in some way, e.g. running it on slow hardware or reducing its memory capacity.
  3. Trip wiring: Building into any AI development project a set of “tripwires” which, if crossed, lead to the project being shut down and destroyed, e.g. any attempt at radio communication.
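To make ‘boxing’ and ‘tripwiring’ a little more tangible, here is a deliberately toy Python sketch. The action names, the allowed channel and the shutdown policy are all invented for illustration; real capability control for a superintelligent system is an open research problem, not a dozen lines of code.

```python
# Toy sketch: every action the boxed system proposes must pass through this gate;
# anything outside the one permitted output channel is blocked, and certain
# actions trip the wire and end the run immediately.
ALLOWED_ACTIONS = {"write_to_text_console"}             # the designated output
TRIPWIRE_ACTIONS = {"open_network_socket", "radio_tx"}  # instant shutdown

class TripwireTriggered(Exception):
    pass

def gate(action: str) -> None:
    if action in TRIPWIRE_ACTIONS:
        raise TripwireTriggered(f"shutting down: attempted '{action}'")
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"blocked: '{action}' is outside the box")

try:
    gate("write_to_text_console")  # fine: the permitted channel
    gate("radio_tx")               # trips the wire
except TripwireTriggered as err:
    print(err)
```

The point of the sketch is only the shape of the idea: one narrow, permitted output channel, plus a class of actions that immediately ends the project.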

The next type of control is ‘direct specification’ and this has two methods:

  1. Domesticity: This is similar to the boxing method in that it severely limits the scope of the AI, but instead of limiting its capabilities it limits its ability to form complicated motives, so that it remains content to listen to humans.
  2. Augmentation: Starting off with a program with good motives then making the program super intelligent. So you first guarantee safety, then go for improving its cognitive abilities.

The ideal utopian future with an obedient superintelligence is easy enough to picture, but what about the dark side: the doom of humanity? I talked about motives previously, and there are ways to address those problems, but there is another danger, one which could be inevitable. Let me paint a scenario:

  1. First-mover advantage implies that the AI is in a position to do what it wants.
  2. The orthogonality thesis implies that almost any level of intelligence can be paired with almost any final goal, so we have no real idea what the AI could want, and even if it tells us one thing, it may be lying.
  3. The instrumental convergence thesis implies that regardless of its final goals, it will try to acquire resources and eliminate threats.
  4. Humans have resources and may be threats.
  5. Therefore, an AI in a position to do what it wants is likely to want to take our resources and eliminate us, i.e. doom for humanity.

But let’s try to look on the bright side.
