A Clear Guide to Understanding AI’s Risks

How Exactly Can It Lead to Doomsday Scenarios?

Ansh Juneja
11 min read · Nov 15, 2023

Suppose we were told that a fleet of spaceships with highly intelligent aliens has been spotted, heading for Earth, and they will be here in a few years. Suppose we were told these aliens might solve climate change and cure cancer, but they might also enslave or even exterminate us. How would we react to such news?

Yuval Noah Harari, historian and author of Sapiens

It’s clear that AI can help us solve some of humanity’s biggest challenges in the next few years. It could help us eradicate disease, reduce material poverty, and mitigate the worst effects of climate change. But at the same time, if it also causes our financial systems to collapse, our democracies to perish, and humans to lose control over our world, what is the point of those benefits?

Instead of another vague description of how risky this technology is, this essay aims to explain exactly how AI could lead to a catastrophic loss of control over our society within the next few years. It describes the risks that matter right now, and why they are more harmful, and more likely, than most people realize. As we build this technology at an accelerating pace, it is important to understand what kind of world we could be heading towards if we do not take care to mitigate its risks.

Half of surveyed AI researchers now say there is at least a 10% chance that this technology will lead to the extinction of humanity. This is not because of evil robots rising up to kill us, but because of very specific scenarios that could unfold within the next decade if we continue developing AI at our current pace:

  1. Economic collapse in major sectors
  2. Democratic failures in developed nations
  3. New pandemics worse than COVID-19

This essay explains how AI makes these likely, and what we are doing about these risks today.

I am writing this because, despite recent regulations, AI labs continue to race forward in pursuit of ever more capable and dangerous AI systems. We are far from safeguarding humanity against these risks, and we need more public understanding of these issues to push policymakers to act.

If you were boarding a plane and the engineers told you there was a 10% chance it would crash and kill everyone on board, would you get on? Humanity is boarding that plane right now.

This essay is divided into 4 parts — you are currently reading part 1. The other parts are linked below:

I) What Is Intelligence? (what you are reading)

II) Risk 1: The Misalignment Problem

III) Risk 2: Societal Impacts

IV) What Are We Doing About These Risks?

I) What is intelligence?

First, it’s important to understand what AI really is, and how it is different from anything else humans have ever created.

The following 3 things all have something in common:

  1. A human playing chess
  2. A dog jumping through a hoop
  3. A Roomba autonomously vacuuming your floor

They are all exercising intelligence. What is it specifically about these things that makes them intelligent? Is it their ability to move? Their ability to learn things? To remember things? There is no widely accepted definition for intelligence — but for our purposes, it is simply the ability to solve problems.

Humans, animals, and even some non-living things can solve problems. Some things can solve more problems than others — a human can solve a lot more problems than a dog, so we consider humans to be more intelligent than dogs.

Artificial intelligence (AI) is anything that can solve problems but is not biologically alive. For most of our planet’s history, all intelligence was restricted to biological life; no non-living thing could solve problems on its own. About 70 years ago this began to change, when humans started using computers to build systems that could solve problems by themselves, and the field of AI was born. Since then, we have built forms of non-biological intelligence that can recognize images, play chess, and even drive cars on public roads.

Most systems we build are only intelligent in a narrow way, meaning they can solve only a very small set of problems. If we tried to use them on other kinds of problems, they would fail. For example, a chess-playing AI could not go to the grocery store and pick out the ingredients for a butter chicken recipe.

Note: we shouldn’t confuse intelligence with sentience or consciousness, which is the ability to feel, think, and experience the world. Something can be intelligent without feeling or experiencing anything, such as the AI systems we are building. It might be true that for animals, intelligence goes hand-in-hand with sentience, but this does not mean that all intelligent systems are sentient.

For about 65 years after the field of AI began, everything we created was narrow intelligence. But six years ago, a powerful new innovation began to remove the limits on how capable our AI systems could become.

General intelligence

On June 12, 2017, a team of researchers from Google Brain released a short paper describing a new way for AI to translate languages. Little did they know that they had stumbled upon something that would change the field of AI forever and alter the course of human history.

The breakthrough this team achieved, an architecture now known as the Transformer, meant that humans could start building systems that solve a huge variety of problems rather than being narrow in their intelligence.
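
The core idea in that paper is a mechanism called attention, which lets a model weigh every word in a sequence against every other word when building each word’s representation. As a rough, illustrative sketch only (the names, shapes, and random weights below are assumptions for the example, not the paper’s actual implementation), scaled dot-product self-attention can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token vectors; Wq/Wk/Wv: learned projection matrices (random here)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of each token's query to every key
    weights = softmax(scores, axis=-1)        # each row: how much one token attends to the others
    return weights @ V                        # blend of value vectors, one output per token

# Tiny usage example with toy sizes and random weights
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (4, 8): one context-aware vector per token
```

Stacking many layers of this operation and training the weights on enormous amounts of text is, roughly speaking, how the systems described next are built.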

The most famous recent example of this is ChatGPT, an AI model that can hold conversations with you, generate code, write essays, solve mathematical problems, recognize objects in images, and perform many other tasks. ChatGPT was not built specifically to do any of these things. When OpenAI first started building this system, they simply wanted an AI that could reproduce human language as accurately as possible, but as the system grew, they quickly realized it could do many other things they did not expect.

These surprising new capabilities have led many people to believe that GPT-4 is one of the first generally intelligent systems that humans have built.

But can we say it is approaching the intelligence of humans? How do you even compare intelligence between different things? We discussed earlier how humans are more intelligent than dogs because they can solve more problems than dogs can. Let’s apply that here: if system A can solve more problems than system B, we can say that system A is more intelligent than system B. We can represent this visually:

The orange circle represents all the problems that humans can solve, and the blue circle represents all the problems that dogs can solve. Humans can solve far more problems than dogs can, but there are still some problems that only dogs can solve, like sniffing out cocaine in airport luggage; this is why the dog circle covers some area that the human circle does not.

Now, if we were to put our most advanced AI system onto this diagram, what would it look like?

As mentioned before, GPT-4 can perform an incredible range of tasks, and on many of them it is reasonably comparable to a human. Obviously, there are still many things it cannot do, like play soccer, make a sandwich, or care for an elderly person. These limits mainly come from the model not being connected to any physical systems, but that is also starting to change, and it’s feasible these restrictions could be eliminated within the next few years. You could argue that this system’s intelligence looks like this today:

This is an important checkpoint. Today, we have already built an AI system that rivals the abilities of some of our pets and, in many cases, the abilities of humans. What could our most advanced AI system look like in 5 years? What about 25?

Superhuman intelligence

Biological intelligence has evolved for billions of years to reach its current level of problem-solving abilities:

How does this compare to the growth of artificial intelligence?

In roughly 70 years, artificial intelligence has reached perhaps 30–50% of the “intelligence level” that took biological intelligence some 4 billion years to achieve.

Side note: these forms of intelligence are obviously quite different, and the problems they are optimized to solve have been shaped by very different pressures, but despite these differences, their overall problem-solving ability is converging quickly.

What are the odds that artificial intelligence will overtake human intelligence in the next 70 years?

OpenAI, the company that created ChatGPT, recently stated in a blog post:

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

According to OpenAI, the intelligence level of AI could look something like this in 10 years:

By the 2028 presidential election, humans may no longer be the most intelligent beings on the planet.

A professor at UC Berkeley, independent of OpenAI, published his own projection of what GPT could look like in 2030, with the following conclusions:

  • GPT2030 will likely be superhuman at various specific tasks, including coding, hacking, and math, and potentially protein engineering
  • GPT2030 can “work” and “think” quickly … [I] estimate it will be 5x as fast as humans as measured by words processed per minute
  • GPT2030 will be trained on [formats] beyond text and images, possibly including [data] such as molecular structures, network traffic, low-level machine code, astronomical images, and brain scans. It may therefore possess a strong intuitive grasp of domains where [humans] have limited experience, including forming concepts that [humans] do not have

OpenAI truly believes it will soon be able to create a superhuman intelligence; this is shown by the fact that it has explicitly accounted for the possibility in the financial documents it presents to investors:

“Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create artificial general intelligence, all financial arrangements will be reconsidered. After all, it will be a new world from that point on. Humanity will have an alien partner that can do much of what we do, only better. So previous arrangements might effectively be kaput.”

What OpenAI Really Wants

An important milestone will be the point when AI can start improving itself; once that happens, humans will no longer control how quickly it grows. If an AI system becomes more intelligent than us, it can improve itself much faster than any human could, and its intelligence will start to increase exponentially.

“AI power will grow steadily until one AI system reaches the threshold of self-improvement, at which point it will quickly outperform others by many orders of magnitude.”

Alexey Turchin & David Denkenberger
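
To make the shape of that claim concrete, here is a toy simulation of the feedback loop the quote describes: progress is slow while humans do the improving, then compounds on itself once the system crosses a self-improvement threshold. Every number here (the threshold, the growth rates, the time scale) is an illustrative assumption, not a prediction.

```python
# Toy model of the self-improvement feedback loop described above.
# All numbers are made up for illustration; this is not a forecast.

def simulate(capability=1.0, human_rate=0.05, threshold=2.0,
             self_gain=0.5, years=20):
    """Capability grows slowly under human research, then compounds once
    the system is capable enough to improve itself."""
    trajectory = []
    for year in range(1, years + 1):
        if capability < threshold:
            capability *= 1 + human_rate              # humans drive progress
        else:
            capability *= 1 + self_gain * capability  # progress scales with capability itself
        trajectory.append((year, capability))
    return trajectory

for year, cap in simulate():
    print(f"year {year:2d}: capability {cap:,.1f}")
```

For the first fifteen or so steps, capability creeps up a few percent a year; within a handful of steps after crossing the threshold, it has grown by several orders of magnitude. That hockey-stick shape, not the particular numbers, is the point of the scenario above.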

Superhuman intelligence could arrive in 5 years, or it could arrive in 25. The key point is that it is almost certainly going to arrive within our lifetimes. You will be alive when humans are no longer the most intelligent beings on the planet.

So, is this a good thing? Is it a bad thing? What does it mean for us?

Types of Impact

Superhuman intelligence can be used to fulfill utopian dreams — curing cancer, addressing climate change, preventing war, and many other things that were previously limited to science fiction.

But a superhuman intelligence also makes many other things possible, and it might not actually be the best idea to bring everything that was once limited to science fiction to life; some of those outcomes would be destructive. Specifically, there are catastrophic outcomes that this technology could help bring about.

As we mentioned before, if we knew that what we were building had even a 10% chance of driving humanity to extinction, how differently would we approach it? These catastrophic outcomes are what I want to explain in this essay, because understanding these clearly is critical to making good decisions on what we should be doing next.

These “bad” outcomes can roughly be placed into two categories, and they form the organization of this essay.

The first category focuses on AI being misaligned with human goals. We are building a superhuman intelligence, and soon we will let it make decisions on its own. What are the impacts of letting this technology act autonomously? Would we still be in control of it, despite it being smarter than us? How do we make sure it doesn’t do something that harms humans as it acts in the world? These questions have led researchers at major AI labs to resign and sound the alarm about our future.

The second category focuses on societal risks. Humans use technology to do things they were already doing, but better and faster. This includes the work we do in our jobs, the way we communicate with other humans, and how we wage war between nations. What does it mean to integrate a superhuman intelligence into all of these systems? Should we do so?

Lastly, the essay ends with a section describing what we are doing today about these risks. Are we taking them as seriously as we should? What can be done, if anything, to prevent these scenarios from occurring?

Note: this essay focuses on the most important risks this technology poses, rather than every single risk. The current discussion is too scattered and not focused on the real dangers. Instead of worrying about how AI can help students do their homework, we need to talk about its ability to disrupt the organization of our society, if we are to have any chance of preventing these scenarios from occurring.
