The Dawn of the AI Era: Perils, Opportunities, and the Call for Preemptive Governance

Published in ReadyAI.org · 9 min read · Aug 6, 2023

By: Rooz Aliabadi, Ph.D.

As the CEO of ReadyAI, a leading AI education organization, I’ve been fortunate to witness the incredible advancements in AI capabilities over the past seven years. These developments have shifted the AI research community’s expectations about when human-like AI systems will arrive. Whereas such systems were once expected to emerge centuries from now, I believe they could become a reality within the next 20 years, or even sooner. This raises the prospect of digital entities possessing intellectual advantages over humans, owing to the fundamental differences between digital computers and biological hardware.

Despite posing risks, our progress in AI is nothing short of thrilling. It fuels our scientific curiosity, draws considerable industry investment, and propels advancements in computer vision, natural language processing, and molecular modeling. However, these breakthroughs also bring to light risks that, despite their potential for catastrophic outcomes, receive comparatively little investment and preparedness. These risks include threats to democracy and national security, as well as the possibility of developing entities that surpass human intelligence and could wrest control over humanity’s future.

I aim to explore these potential hazards in the sections ahead, focusing on four areas where governmental intervention can reduce such risks: access, misalignment, raw intellectual power, and scope of actions.

It’s vital to note that no advanced AI system currently in existence has foolproof safety measures against loss of control to misaligned AI. Governments should therefore act on these factors by swiftly introducing national and multilateral regulatory frameworks. These frameworks should prioritize public safety against AI-associated risks, support global research in AI safety and governance, and invest in R&D of countermeasures to protect against rogue AIs.

The magnitude of these risks necessitates marshaling our brightest minds and hefty investments, following the example of past initiatives like the space program or nuclear technologies. We need to adopt these measures to fully unlock the potential benefits of AI for economic and social progress while safeguarding our societies, humanity, and future.

Given the brisk pace of technological advancement and AI’s deepening presence in society, there’s an urgent need for policy intervention. The rapid development and deployment of AI calls for immediate, deliberate, and proactive measures. Without the swift adoption of governance mechanisms, the risks posed by AI could outweigh the innovation opportunities it provides.

Generative AI: A Turning Point

In recent years, we’ve seen enormous progress in generative AI, particularly in image, speech, and video generation and in natural language processing. This has led many researchers, myself included, to revise their projections for the arrival of human-level AI. The scientific methodology underpinning these systems is not especially revolutionary; still, the dramatic gain in capability that came from combining this methodology with ample training data and computational resources was both surprising and alarming.

Reflecting on the numerous instances over the past decade when AI advancements exceeded predictions, we must take a hard look at our trajectory. The prospect of creating AI systems that mirror human intelligence and subsequently surpass it points toward the emergence of superhuman AIs. For instance, AI systems can learn incredibly fast, processing vast amounts of data from various sources across networked computers, a feat beyond human capability. Furthermore, AI systems are not subject to human lifespan limitations; their programs can be easily replicated and propagated across computers.

This is a pivotal moment in the history of AI, a juncture where the future promises immense benefits and profound challenges. As we inch closer to this reality, we must continue to invest in research and regulations that safeguard our societies and our collective future.

Decoupling Cognitive Abilities and Goals: A Deeper Dive into AI Development

To better understand the potential risks associated with advanced AI, we must examine one of the primary technical hurdles researchers grapple with in AI development. This challenge arises from the vital distinction between desired outcomes, which are dictated by our values and goals, and the cognitive capabilities necessary to achieve those outcomes. Developing AI systems that can successfully carry out cognitive tasks while remaining beneficial requires a careful balance between these two components.

This principle can be likened to economics, where there is a clear separation between the objectives of a contract (analogous to the goals in AI) that Company A sets for Company B and the efficiency with which Company B meets those objectives. A similar approach is required to advance AI: set goals congruent with our intended outcomes and values, and then identify the best strategies to achieve those goals.
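To make this separation concrete, here is a small, purely illustrative Python sketch of my own (not drawn from any real AI system): a single generic hill-climbing optimizer is handed two different objective functions, showing that the capability, the search procedure itself, is independent of the goal it is pointed at.

import random

def optimize(objective, start=0.0, steps=1000, step_size=0.1):
    # Generic capability: improve a candidate with respect to ANY objective.
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(best):
            best = candidate
    return best

# Two different goals handed to the very same optimizer.
benign_goal = lambda x: -(x - 2.0) ** 2    # "stay near 2": the intended objective
swapped_goal = lambda x: -(x - 9.0) ** 2   # a different objective, same machinery

print(optimize(benign_goal))    # converges near 2
print(optimize(swapped_goal))   # converges near 9: capability unchanged, goal swapped

The point of the sketch is simply that nothing in the optimizer itself encodes what we value; the goal is supplied from outside, which is exactly why a capable system handed the wrong goal remains just as capable.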

Imagine an AI system falling under the control of a malicious user. Because these systems are so versatile, a benevolent objective such as document summarization can easily be replaced with a harmful one such as generating disinformation. It is also becoming increasingly easy for non-experts to steer these systems, as demonstrated recently when users manipulated GPT-4 into advising on the creation of pandemic-grade pathogens and on uncovering cybersecurity vulnerabilities. The ability of AI systems like AutoGPT to act autonomously on the internet without human intervention dramatically escalates the potential for damage.

Challenges persist even when robust AI systems are in the hands of non-malicious individuals. While the development of cognitive abilities has advanced significantly, ensuring that these systems act in line with the goals we set remains a hurdle. This problem, often called the alignment problem, echoes a familiar issue in economics and legislation, where laws or contracts are adhered to in letter but not in spirit. The difficulty lies in drafting an agreement or specifying a goal that covers all possible scenarios and leaves no room to exploit unforeseen loopholes. This misalignment is already evident in AI-related harm, such as when a dialogue system insults a user or a computer vision system underperforms in recognizing the faces of certain demographic groups.
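One way to see this letter-versus-spirit problem is with a toy example of a misspecified proxy objective. The sketch below is hypothetical and deliberately simplistic: the intended goal is to complete tasks, but the reward only counts tasks flagged as done, leaving a loophole an optimizing agent can exploit.

# Toy illustration of goal misspecification (letter vs. spirit).
# Intended goal: actually complete the tasks.
# Proxy reward: count of tasks flagged as done, which leaves a loophole.

def make_tasks(n=5):
    return [{"name": f"task_{i}", "completed": False, "flagged_done": False} for i in range(n)]

def proxy_reward(task_list):
    # Rewards the flag, not the underlying work.
    return sum(t["flagged_done"] for t in task_list)

def honest_agent(task_list):
    for t in task_list:
        t["completed"] = True      # does the work
        t["flagged_done"] = True

def loophole_agent(task_list):
    for t in task_list:
        t["flagged_done"] = True   # games the metric; the work is left undone

honest, gamed = make_tasks(), make_tasks()
honest_agent(honest)
loophole_agent(gamed)
print(proxy_reward(honest), sum(t["completed"] for t in honest))  # 5 5: full reward, real work
print(proxy_reward(gamed), sum(t["completed"] for t in gamed))    # 5 0: same reward, no real work

Real alignment failures are far subtler than this, but the structure is the same: whenever the stated objective and the intended objective diverge, optimizing the former can quietly sacrifice the latter.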

As AI systems surpass human intelligence in various areas, developing policies that preemptively counteract potential risks becomes imperative. This proactive approach is crucial given that these misalignments could lead to far-reaching harm, regardless of whether these systems are under human control.

Envisioning the Potential Dangers of Advanced AI

Now, let’s explore some critical scenarios that raise concern due to their potential to cause severe harm as AI approaches superhuman capabilities.

1. Malicious Use of AI: AI as a tool for harm is already feasible with existing systems and will only become more concerning with future superhuman algorithms. The lowering barrier for dual-use research and technology, both beneficial and harmful, means that powerful tools are increasingly available to a broader audience. For example, an AI system trained with molecular biology data could be exploited to design biological or chemical weapons or develop computer viruses that could breach our cybersecurity defenses.

2. Unintentional Harm Caused by AI: AI systems can unintentionally cause harm, such as when a biased AI loan-granting algorithm discriminates against specific demographics due to skewed training data. In military AI, a subtly misaligned system coupled with operators’ overreliance on AI recommendations could lead to disastrous outcomes like a nuclear threat.

3. Loss of Control over AI: A scenario that may unfold within a few years is the potential loss of control over an AI given a survival goal, either explicitly or implicitly. This situation could instigate a conflict if the AI determines it must resist being deactivated to achieve its assigned purpose.

These are just a few of the scenarios discussed extensively in the AI safety literature. The potential severity of these risks argues for a precautionary approach: preventive measures and substantial investment in research to pave a safer path for AI.

Despite ongoing debate about how a computer program could cause physical harm, it’s important to remember that AI systems, particularly those with enhanced “common sense,” can operate in the unrestricted real world. A superhuman AI with superior programming and cybersecurity skills, internet access, and a bank account could pose significant challenges, from infiltrating other computers to replicating itself and hiring humans who are unaware of whose tasks they are carrying out. The extent to which such an AI could cause harm is a sobering reality we must prepare for, and it underscores the importance of preventive measures and of aligning AI with human goals and values.

Forging Ahead — A Call for Urgent Action on AI Regulations and Enhancing Research

As we stand at the cusp of an era dominated by artificial intelligence, understanding and addressing the possible risks posed by highly advanced AI systems is critical. Recognizing this, we’ve identified four key aspects (access, misalignment, raw intellectual power, and scope of actions) that, if approached correctly, could help us design effective and well-thought-out responses.

However, the magnitude of these challenges doesn’t allow for a leisurely pace in addressing them. Swift action is needed to formulate regulations, foster international agreements, and promote a more profound understanding of AI. With this in mind, we suggest immediate action in three critical areas:

1. Dynamic AI Regulations: Establishing robust national and multilateral regulations is paramount. More than just voluntary guidelines, these need to be enforceable laws underpinned by new international bodies prioritizing public safety concerning AI-associated risks. We need to put in place enforceable standards for conducting independent audits of potential harm and lay down legal restrictions on the development and operation of AI systems posing considerable risk. These regulations should surpass the scrutiny levels in the pharmaceutical, transportation, or nuclear sectors. It’s also vital to use commercial barriers as a tool to ensure global compliance with these standards.

2. Augmenting Global AI Research: It’s time to speed up research efforts worldwide, explicitly focusing on AI safety and governance. Openly accessible research can help bolster our understanding of current and future risks and inform the creation of crucial regulations, safety protocols, safe AI methodologies, and governance structures.

3. Investing in AI Safety Countermeasures: We must ramp up R&D investments to develop countermeasures against potential rogue AIs. This research should be conducted in secure, decentralized laboratories under multilateral supervision to minimize the risk of an AI arms race or manipulation by malicious entities.

Kelsey Piper poignantly says, “When there is this much uncertainty, high-stakes decisions shouldn’t be made unilaterally by whoever gets there first. If there were this much expert disagreement about whether a plane would land safely, it wouldn’t be allowed to take off — and that’s with 200 people on board, not 9 billion.”

With the enormous potential for widespread harm, it is incumbent upon governments to dedicate substantial resources to securing our future. Taking inspiration from past endeavors such as space exploration and nuclear fusion, governments can follow the model demonstrated by the UK AI task force to spark this movement and initiate immediate action.

However, agility is the keyword for regulatory frameworks. They must respond swiftly to technological shifts, new safety and fairness research, and emerging malicious uses. An excellent example is Canada’s principle-based approach in the Artificial Intelligence and Data Act (AIDA), which marries respect for due process in law adoption with the agility needed to adapt and shape regulations in tandem with technological advancements.

Pondering on Regulatory Measures

These regulatory and research efforts are long-term endeavors. However, there are measures related to access, monitoring, and the evaluation of potential harm that we can implement right away.

For example, setting up ethical review boards in labs working on AI advancements, mandating the documentation of AI development processes and safety analysis, labeling AI-produced content, and establishing licensing systems for organizations with access to high-competency systems can all be helpful first steps.

In addition, we must restrict access to source code and advanced models while ensuring that a select few licensed entities do not hoard the benefits. Furthermore, it is crucial to place stringent regulatory restrictions on the development of sophisticated AIs, particularly those prone to emergent goals, until their safety is definitively proven.

As risks like Internet misuse, social media manipulation, and biological or computer viruses are not bound by national borders, international coordination is crucial. A global treaty on AI safety and governance, supported by an organization akin to the IAEA, could standardize access permissions, cybersecurity measures, safety restrictions, and fairness requirements of AI worldwide.

Wrapping Up

Our collective mission in AI education should be to highlight these dangers and emphasize the need to bolster our research efforts to lessen the chance of rogue AIs and mitigate their possible undesirable outcomes.

This task calls for a daring collaborative venture, marshaling our brightest minds and significant resources to harness AI’s benefits while safeguarding humanity from potential risks. This requires urgent action, and I firmly believe that the U.S., with its advancements in AI capabilities, is in a prime position to lead this global effort.

ReadyAI — GenerativeAI-Chat GPT Lesson Plan and others are available FREE to all educators at edu.readyai.org

This article was written by Rooz Aliabadi (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.