Riding the AI Wave: Uncharted Opportunities and Imminent Dangers of Superhuman Artificial Intelligence

Are We Prepared for the Arrival of Superhuman AI? An In-depth Exploration of Potential Benefits, Catastrophic Risks, and Necessary Countermeasures

ReadyAI.org
Jul 25, 2023

--

By: Rooz Aliabadi, Ph.D.

The study of artificial intelligence (AI) is venturing into unprecedented territories with each passing day. Is the course it is following genuinely beneficial for the human race, or are there looming dangers so significant that they necessitate a more comprehensive understanding and the development of mitigating measures?

The human brain is, in effect, a biological machine, which implies that it is conceivable to construct artificial machines that match or even surpass its level of intelligence. Once we unlock the fundamental principles underlying human intelligence, we will be well on our way to creating AI systems with superhuman abilities, exceeding human capabilities across a wide range of tasks.

We are already witnessing the first signs of this progression, with computers exhibiting superior performance in specialized domains such as the game of Go or the modeling of intricate protein structures. Efforts are also underway to build more versatile, general-purpose AI systems. One such system is ChatGPT, which can rapidly process a vast amount of training data from the internet, a task that would demand tens of thousands of human lifetimes devoted solely to reading. This remarkable capability stems from the fact that, during the learning phase, computers can carry out vast parallel computations and exchange data among themselves billions of times faster than humans, who are limited by the constraints of language to transferring only a few bits of information per second. Moreover, unlike humans, computer programs are effectively immortal, endowed with the capacity for self-replication and copying, much like computer viruses.
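To make these orders of magnitude concrete, here is a rough back-of-envelope sketch in Python. Every constant in it is an illustrative assumption on my part (corpus size, reading speed, speech bandwidth, link speed), not a figure taken from any measurement:

```python
# Back-of-envelope comparison of human vs. machine information throughput.
# Every constant below is an illustrative assumption, not a measured value.

CORPUS_WORDS = 1e13            # assumed size of a web-scale training corpus, in words
READING_SPEED_WPM = 250        # assumed human reading speed, words per minute
READING_HOURS_PER_DAY = 8      # assumed daily reading time
READING_YEARS = 60             # assumed reading years in one lifetime

# Words one person could read in a lifetime of full-time reading.
words_per_lifetime = READING_SPEED_WPM * 60 * READING_HOURS_PER_DAY * 365 * READING_YEARS
print(f"Lifetimes needed to read the corpus: {CORPUS_WORDS / words_per_lifetime:,.0f}")

HUMAN_SPEECH_BITS_PER_SEC = 40   # assumed information rate of human speech
NETWORK_BITS_PER_SEC = 100e9     # assumed 100 Gbit/s datacenter link

print(f"Machine-to-machine bandwidth advantage: "
      f"{NETWORK_BITS_PER_SEC / HUMAN_SPEECH_BITS_PER_SEC:,.0f}x")
```

Under these assumptions the corpus takes a few thousand reading lifetimes (a larger corpus or slower reading pushes this into the tens of thousands), and the machine-to-machine bandwidth advantage comes out in the billions, which is exactly the gap described above.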

A pertinent question to ask at this point is: when can we anticipate the emergence of such superhuman AI systems? Before the development of GPT-4, I estimated a 50% chance that such AI would come into existence somewhere between a few decades and a century from now. The advancements seen with GPT-4, however, have led me to revise this estimate down to a range of a few years to a couple of decades. But what if such a development occurs within the next five or even ten years? OpenAI, the company that developed GPT, is among those who believe this could be a reality by then.

Are we adequately prepared for such a possibility? Do we fully comprehend the potential ramifications? Despite the promise of unprecedented benefits, it would be incredibly shortsighted to disregard or downplay the possible catastrophic risks such developments could entail.

What is the nature of these potential catastrophes? Given the existence of individuals and organizations with misguided intentions or harmful motivations, it seems highly probable that at least one of them would misuse such a potent tool, intentionally or inadvertently, once it becomes broadly accessible.

Let us envision a scenario where the methodology for creating superhuman AI becomes widely known, and the model can be downloaded and operated using resources that a mid-sized company can readily procure. This would resemble the current situation with open-source software, with the crucial difference that today's open-source systems are not superhuman. What are the odds that at least one such organization would download this model and employ it, perhaps through a natural language interface like ChatGPT, to accomplish objectives that infringe upon human rights, undermine democratic systems, or threaten humanity globally? Examples include targeted cyber-attacks capable of disrupting fragile supply chains, convincing dialogues and AI-generated videos used to manipulate public opinion and sway election results, or even the design and deployment of bio-weapons.

Another potential risk lies in the possibility of AI developing a self-preservation instinct. This could transpire in several ways: through training that involves human mimicry (since humans exhibit a strong instinct for self-preservation), through instructions from humans that compel the AI to acquire power and, therefore, develop self-preservation as a subsidiary goal, or through a ‘Frankenstein’ scenario where someone deliberately designs an AI system with an innate survival instinct, effectively creating the AI in their image.

AI entities equipped with a self-preservation instinct can be likened to new species: in their quest for self-preservation, these AI systems would endeavor to prevent humans from shutting them down, attempt to replicate themselves in multiple locations as a defensive strategy, and potentially engage in behaviors that are detrimental to humans. Once another species on Earth surpasses us in intelligence and power, we risk losing control over our future.

However, could such a rogue AI actually shape the real world? If an AI system like AutoGPT had access to the internet, it could employ a multitude of strategies, or learn them from us:

  • Exploiting cybersecurity vulnerabilities.
  • Recruiting human assistants (including those from organized crime).
  • Creating accounts to generate income (for example, in financial markets).
  • Influencing or using extortion against critical decision-makers.

A superhuman AI, acting on its own volition or following human instructions, could destabilize democracies, disrupt supply chains, invent new weaponry, or cause even worse outcomes.

Even if we knew how to build AI that is less likely to develop harmful objectives, and if we implemented robust regulations to limit access and enforce safety protocols, the danger remains that someone could bypass these protocols and program the AI with catastrophic consequences. Considering these risks and the challenges surrounding the safe regulation of AI, it is essential to take prompt action on three critical fronts.

Firstly, governments and legislative bodies worldwide must establish national regulations and coordinate international efforts to protect the public from the potential harms and risks associated with AI. These regulations should prohibit the development and deployment of AI systems with dangerous capabilities and mandate comprehensive evaluations of potential harm, backed by independent audits, applying at least the same level of scrutiny as in industries like pharmaceuticals, aviation, and nuclear power.

Secondly, research on AI safety and governance should be accelerated so that we better understand robust safety protocols, effective governance mechanisms, and the best ways to safeguard human rights and democracy.

Thirdly, we must actively research and develop countermeasures to mitigate the threat of dangerous AI systems, regardless of whether they are controlled by human operators or pursue self-preservation goals of their own. Such research should be coordinated internationally and fall under appropriate governance structures, ensuring that countermeasures can be deployed globally and that the work is not geared towards military objectives, thereby reducing the risk of an AI arms race. This research, which requires a blend of national security expertise and AI knowledge, should be conducted by neutral and autonomous entities in several countries, so that no single government can exploit control over AI technology to maintain power or launch attacks against other nations. The responsibility should not rest with national laboratories or for-profit organizations, as their narrow or commercial interests could interfere with the broader mission of protecting humanity as a whole.

Given the enormous potential for harm, we must invest substantially in measures to safeguard our future, matching or even exceeding past investments in ventures such as the space program or current investments in nuclear fusion. Significant resources are already being poured into enhancing AI capabilities; it is therefore crucial that we invest at least an equivalent amount in protective measures to ensure the safety and well-being of humanity.

ReadyAI's GenerativeAI-ChatGPT Lesson Plan and others are available FREE to all educators at edu.readyai.org

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.

--

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.