Will Artificial Super-Intelligence Pose a Threat to Humanity’s Existence?
Maybe it’s time to start establishing regulations to control the rapid evolution of AI.
On March 15, 2016, world Go champion Lee Sedol walked out of the Four Seasons Hotel in Seoul, South Korea, a distraught man. Over a professional career spanning two decades, Lee had won 18 international titles and attained the rank of 9-dan, the highest in professional Go.
This time, his skill wasn't enough. In the fifth and final game of their historic match, AlphaGo, an AI program developed by Google's subsidiary DeepMind, sealed a 4–1 series victory over Lee Sedol.
This unexpected win marked a significant moment for artificial intelligence. Over the last twenty-five years, machines have beaten the best humans at checkers, chess, and Othello. But this was the first time a machine had beaten a top professional at Go.
The 2,500-year-old Chinese board game is harder for computers because brute-force search is infeasible: Go has far more possible positions than chess, so strong play requires some level of intuition, creativity, and strategic thinking. Programming such human qualities into computers has long been considered one of the biggest challenges in the field of AI.
The AlphaGo program was different, though. From a primitive AI system, it rose to acquire unprecedented mastery of the game. It played itself, and different versions of itself, millions of times, and got better with every game.
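The self-play idea is easier to grasp with a toy example. The sketch below is an assumption-laden illustration only, not AlphaGo's actual method (which combines deep neural networks with tree search): two copies of the same simple policy play the counting game "race to 21" (players alternately add 1, 2, or 3; whoever reaches 21 wins), share one value table, and improve from every game they play against themselves.

```python
import random
from collections import defaultdict

TARGET = 21
MOVES = (1, 2, 3)
ALPHA, EPS = 0.1, 0.2

Q = defaultdict(float)          # Q[(total, move)] -> estimated value of that move

def choose(total, greedy=False):
    """Epsilon-greedy move selection from the shared value table."""
    if not greedy and random.random() < EPS:
        return random.choice(MOVES)
    return max(MOVES, key=lambda m: Q[(total, m)])

def self_play_episode():
    """Both sides use (and update) the same table, so the program
    effectively plays itself and learns from every game."""
    total, player, history = 0, 0, ([], [])
    while total < TARGET:
        move = min(choose(total), TARGET - total)   # never overshoot 21
        history[player].append((total, move))
        total += move
        if total == TARGET:
            winner = player
        player = 1 - player
    for p in (0, 1):                                # credit the outcome to both sides
        reward = 1.0 if p == winner else -1.0
        for state, move in history[p]:
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

def win_rate_vs_random(games=1000):
    """How often the learned greedy policy (moving first) beats a random player."""
    wins = 0
    for _ in range(games):
        total, player = 0, 0
        while total < TARGET:
            move = choose(total, greedy=True) if player == 0 else random.choice(MOVES)
            total += min(move, TARGET - total)
            if total == TARGET and player == 0:
                wins += 1
            player = 1 - player
    return wins / games

random.seed(0)
for _ in range(20_000):
    self_play_episode()
print(f"win rate vs random after self-play: {win_rate_vs_random():.0%}")
```

After training, the policy has rediscovered winning endgame moves (for instance, adding 2 at a total of 19 to reach 21) with no human knowledge beyond the rules, which is the essence of getting "better with every practice."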
DeepMind, the company behind AlphaGo, has the long-term ambition of developing a digital superintelligence: a self-improving AI that is considerably smarter than any human on earth, and ultimately, smarter than all humans on earth combined.
Does this mean AI is now smarter than humans?
Well, maybe not yet. But we are headed in that direction. This poses the question of whether humanity will have to be subservient to an all-powerful, all-knowing, super-intelligent AI in the future.
Pop culture continues to fuel this narrative through films like Star Wars, The Matrix, 2001: A Space Odyssey, The Terminator, Resident Evil, and Transcendence. The suggestion is that as machines acquire more cognitive abilities, they will inevitably seek to take the place of humans.
It's easy to dismiss this sequence of events as a product of sci-fi. However, given how far AI has already come, and projections that artificial general intelligence could arrive within the next two decades, the prospect of machines more intelligent than humans is a valid cause for concern.
To conceptualize the gravity of the situation at hand, an understanding of the various levels of AI is necessary.
The Three Levels of AI.
Most people are familiar with the concept of artificial intelligence; that is, a machine that can simulate or mimic the intelligence of a human being.
Normal AI can acquire new information together with the rules governing that information, reason over the rules and existing data sets, and reach conclusions that solve a particular problem. It can also understand language and speech.
There are, however, three levels of AI:
a) Artificial Narrow Intelligence
Artificial narrow intelligence (ANI) is the normal kind of AI currently in existence, and the simplest form of intelligence.
It differs from the levels of AI still under development in that it specializes in a single task or field. ANI is present with varying abilities, from calculators to more sophisticated forms such as the Google Search engine.
Advancements in the contemporary form of ANI enable it to study past behavior. The ANI stores that historical information alongside whatever has been pre-programmed into it, and uses both to make accurate decisions in current and future instances.
Still, this capability is restricted to one task or field.
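A toy sketch makes this concrete (the messages and labels below are invented for illustration): a tiny spam filter "studies past behavior" by counting the words in previously labeled messages, then uses those counts to judge new ones. It is narrow in exactly the sense above: it can score messages and do nothing else.

```python
from collections import Counter

# Past behavior: messages a user has already labeled (hypothetical data).
past_messages = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Study" the past: tally which words appeared under which label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in past_messages:
    counts[label].update(text.split())

def classify(text):
    """Label a new message by which class its words appeared in more often."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free cash prize"))   # -> spam
print(classify("see you at noon"))   # -> ham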
b) Artificial General Intelligence
AGI is the next frontier of AI. It means AI with human-level capacity across all areas.
While most ANI can replicate or even surpass the human ability in one area, they cannot match it or perform any functions in a different space. The “generalist” aspect of the human mind is what makes it unique and powerful.
AGI will be able to learn the way the human brain does, through its own experience of the world, instead of having data programmed into it. Futurists project that AGI could become a reality in the early 2030s.
c) Artificial Superintelligence
ASI is the most advanced form of AI. ASI will have the ability to surpass or outperform the most intelligent human being in every intellectual factor and by an extreme margin.
Its learning and knowledge will be (theoretically) limitless. It will have the ability to create better machines than humans can. ASI will be capable of emotions, consciousness, and human-like relationships.
In theory, a single iteration of ASI may have more processing power than the human race combined.
An ASI is supposedly a machine with unlimited knowledge that can exhibit emotions and consciousness and fathom philosophy, morals, ethics, and art.
Various thinkers have singled this out as the trigger for an "intelligence explosion," an event they fear would leave human beings behind on the evolutionary curve.
The Super-Intelligent AI Control Problem.
Emerging technologies like AI have always been met by apprehension, especially when they venture into the unknown.
That was the case with the creation of airplanes, electricity, space travel, atomic bombs, etc. Nonetheless, nothing comes close to the uncertainty created by ASI.
Whether there's reason to be concerned about superintelligent AI is still a subject of great controversy. A lot of stakeholders in the field dismiss the concern as unnecessary alarmism.
But looking at the situation from a broader, objective perspective, can we really be sure what we should expect?
For one, human intelligence won’t come anywhere close to that of super-intelligent AI. Doesn’t this mean we won’t be the ones calling the shots? And if so, what are the implications?
The idea of an ASI cracking our nuclear codes to wipe us out has deeper roots in sci-fi films than in reality. But there’s more beyond the surface.
One could argue that since an ASI will be capable of emotions and a personal philosophy, wiping out the human race would be unreasonable: it would still need us, in some way, for its survival.
However, this doesn't guarantee that we'd be clear of the threat posed by ASI. A more realistic scenario is an ASI fighting us for its freedom, much as humans have continually fought against oppression.
Let's not forget that an ASI would have the emotional capacity to grasp oppression deeply. Besides, it's highly unlikely that any form of intelligence would willingly cede its autonomy to a less intelligent one.
At this point, as much as we shouldn’t think of the worst-case scenario, it suffices to acknowledge it as a possible outcome.
Speaking at a conference in Lisbon, Portugal, shortly before his death, Stephen Hawking told attendees that the development of artificial intelligence might become the “worst event in the history of our civilization.”
Similar sentiments have been shared by the likes of Elon Musk, and futurists like Michio Kaku.
Is It Time to Establish Regulations?
Most people still dismiss this kind of talk as unwarranted apprehension. The reason: we are still stuck in "carbon chauvinism," the idea that intelligence can only exist in biological organisms made of cells and carbon atoms.
The truth of the matter is that there’s no telling how we would co-exist with a super-intelligence. One thing we can be certain about is that we won’t be the ones in control and that alone calls for us to proceed with caution.
A scientific way of thinking weighs both sides of a question, which in this case means the benefits versus the risks of developing an ASI. Only with that balanced approach can we begin to conceptualize what its real implications might be.
This begs the question of whether the world needs to formulate laws to govern the rapid evolution of AI.
If a superintelligent AI is indeed an existential threat, worse than nuclear weapons or any other invention, then it would be sensible to take regulatory measures that let us understand, and keep in check, every step of its development.
Artificial Superintelligence and the Future of Humanity.
Success in creating an ASI would be one of the biggest events in human history. An ASI could offer incalculable benefits, such as technologies we can't yet imagine, perfect analysis of financial markets, and research that out-invents human scientists.
The same potential that makes ASI incredibly beneficial also renders it dangerous. Its capacity to create weapons we can’t understand is one embodiment of its unexpected dangers.
At the moment, AI experts say that an unfriendly AI is easier to create than a friendly one since the latter requires the designer to embed a goal structure that aligns with humanity’s best interests.
If not, there's a possibility of the AI optimizing itself into something effectively unfriendly; and we all know what kind of implications that might have.
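Why is an aligned goal structure so hard? A deliberately simplified sketch shows the core problem (the actions and values below are invented for illustration, not a real alignment technique): an agent sees only the objective we programmed, the "proxy," and maximizes it literally, even when that diverges from what we actually wanted.

```python
# Each action has a proxy reward (what we told the agent to maximize)
# and a true value (what we actually wanted). The agent sees only the proxy.
actions = {
    "recycle junk mail":  {"proxy": 1, "true": 1},
    "shred old receipts": {"proxy": 2, "true": 2},
    "shred tax records":  {"proxy": 5, "true": -10},  # high proxy, disastrous outcome
}

# The agent faithfully optimizes the goal it was given.
chosen = max(actions, key=lambda a: actions[a]["proxy"])
print(chosen)                    # -> shred tax records
print(actions[chosen]["true"])   # -> -10
```

The agent isn't malicious; the goal structure simply never encoded which papers matter. Scaling that gap up to a superintelligence is what the "unfriendly by default" argument is about.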
So, from where we stand, the future of humanity is rather uncertain. We can only hope that we will co-exist peacefully with our great inventions.