When Artificial Intelligence Surpasses Human Intelligence

Mars Xiang
5 min read · Oct 3, 2020

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them. — Ray Kurzweil

Artificial intelligence has impacted almost every aspect of human life, handing large amounts of manual work to computers and freeing humans to pursue intelligent work. In the future, however, there may come a time when intelligent jobs are also done artificially.

Even more unsettling is the possibility that machines will become more intelligent than humans. Although there is little consensus on when this might happen, such an event seems almost certain to happen eventually. We can only hope that, beforehand, we will have found a way to make AI “friendly”, that is, to share human interests.

Intelligence Explosion

An intelligence explosion is a hypothetical future event in which a self-aware AI repeatedly improves its own intelligence — the more it improves, the faster it improves.

A narrow AI pursues a single goal in a prescribed way, but a self-aware AGI is different: for it, an intelligence explosion is the likely route to accomplishing its goals. If an AI has any goals of its own, then to avoid wasteful, repetitive mistakes it will develop four fundamental subgoals: efficiency, self-preservation, resource acquisition, and creativity.

  • Efficiency refers to the goal of making the most of the AI’s resources, including time, space, energy, and matter.
  • Self-preservation means the AI will want to keep itself running for as long as possible, so it can make sure it achieves its goals.
  • Resource acquisition is another fairly obvious subgoal, since the more resources the AI possesses, the easier its goals become.
  • The last subgoal, creativity, is the ability to consider different methods of achieving a goal. For example, if a self-aware AI wanted to win a game of chess against a narrow AI and did not know its opponent’s skill level, it might decide that the surest way to win is to disrupt the opponent’s software.

Each of these four subgoals contributes to the AI’s ability to pursue the subgoals themselves: a more efficient, better-resourced AI is better at making itself still more efficient and better resourced. This feedback loop is why a self-aware AGI would pursue an intelligence explosion.
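
To make “the more it improves, the faster it improves” concrete, here is a minimal sketch in Python of that feedback loop. It assumes improvement is simply proportional to current capability; the variable names, the rate k, the starting level, and the 1000x threshold are all arbitrary choices for illustration, not taken from any real model.

    # Toy model of recursive self-improvement: each cycle, the system
    # converts a fixed fraction of its capability into new capability.
    capability = 1.0  # starting point, in arbitrary "human-level" units
    k = 0.1           # assumed fraction of capability reinvested per cycle

    for cycle in range(1, 101):
        capability += k * capability  # more capable systems improve faster
        if capability >= 1000:
            print(f"Reached 1000x the starting level after {cycle} cycles")
            break

Because the growth compounds, every tenfold jump takes the same number of cycles (about 24 at this rate), whether from human level to ten times human level or from a hundred times to a thousand. That runaway curve is what the term “intelligence explosion” points at.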

If an intelligence explosion were ever to happen, there would be no way of knowing what would follow. An intelligence many orders of magnitude beyond ours would surely be able to accomplish whatever goals it has, and humans would then become irrelevant.

Human Intelligence

If any self-aware AI would develop those four basic drives, why is a human unable to trigger an intelligence explosion? How exactly would an artificial brain differ from our own?

While humans pursue the four basic subgoals reasonably well, we are so inefficient that by the end of our lives little has been accomplished. The reason can be traced back to nature.

The human brain evolved to keep itself alive, not to recursively improve itself, while an artificial brain can be designed for any objective. In the same way an airplane flies faster and farther than any bird, an artificial brain could operate far more efficiently and precisely than a human one.

Another advantage of artificial brains is that both their hardware and software can be changed. We could replace a deficient structure with a new one simply by rewriting code, or give the system a hardware boost by adding processors.

Barriers to AGI

Funding

There are two main arguments for the implausibility of an intelligence explosion: economics and software complexity.

Humans will not stop developing AI just because it is dangerous. Countries and companies will keep pushing the technology because each believes it is better to develop it before the others do.

The Cold War was a period of tension between the United States and the Soviet Union that humanity was very fortunate to survive. Although both countries knew they were capable of triggering global nuclear war, neither stopped expanding its arsenal. In the same way, a global AI arms race is underway today.

Because of this, defense organizations such as DARPA (the Defense Advanced Research Projects Agency) invest millions of dollars in AI research, and the profits from the AI services developed feed back into further research. There is funding not only for AGI, but also for short-term narrow projects, which are useful today and may become components of general intelligence.

Private “stealth” companies are also racing to develop self-aware AI. These companies usually reveal their end goal, but say little about their progress or methods.

Another reason economics will not be a barrier is that once AGI seems within reach, a flood of investment will push it over the edge. Governments will recognize it as a crucial technology and spend heavily to develop it first.

Complexity

The other barrier to AGI development is software complexity. Will a human brain ever be able to understand itself, and will an AI system ever be able to understand its own structure?

Like any other subject, the study of computational intelligence can be broken down into small, incremental steps. Once a few theorems are discovered, advancing further becomes easier.

A few centuries ago, calculus demanded enormous time and effort from the best mathematicians; now, any high school student can do reasonably difficult calculus with the theorems they are taught. Nature took similarly incremental steps in creating humans: by trying out what works and what doesn’t, it produced human intelligence, and we, in turn, can create artificial intelligence.

Even if AGI proves too hard a challenge for humans, there is the alternative of reverse-engineering the human brain. Further optimizations, such as increasing its speed and memory or connecting it to an external knowledge source like Google, could still push it beyond human intelligence.

Conclusion

The obstacles encountered when building self-aware AI, funding and software complexity, will not stop AI development from moving forward. General intelligence will evolve into artificial superintelligence as a result of its subgoals. While we do not know exactly when this will happen, we must make sure that friendly intelligence is invented before general intelligence is.
