AI’s Global Challenge

Published in ReadyAI.org · Nov 2, 2023

Balancing Innovation, Risks, and Regulation in the Age of Intelligent Machines

By: Rooz Aliabadi, Ph.D.

Governments across the globe are currently facing a complex and unprecedented challenge: the rapid advancement of artificial intelligence (AI) and its potential impacts on society, security, and the economy. As these technologies evolve at an incredible pace, there is an increasing sense of urgency to establish appropriate regulatory frameworks to mitigate potential risks and harness the benefits of AI. However, governments must approach this task with caution and deliberation, taking the time to thoroughly understand the nuances of AI before rushing into regulatory action.

In the United Kingdom, a summit is delving into the "extreme" risks associated with AI, though it is essential to acknowledge that significant uncertainty remains about what those risks actually entail. Some technologists and experts in the field harbor genuine concerns that AI could pose existential threats to humanity. These fears are not baseless: there are plausible scenarios in which AI surpasses human intelligence, leading to unpredictable and potentially catastrophic outcomes. There are also concerns that large language models (LLMs) like the one powering ChatGPT could empower malicious actors to create advanced cyberweapons and dangerous pathogens.

Given these potential threats, we must engage in thoughtful and comprehensive discussions about the future of AI. Policymakers worldwide are already contemplating measures to address the challenges posed by AI, with the European Union working on a comprehensive AI Act and the White House having just issued an executive order explicitly targeting LLMs. The British government is taking steps as well, convening a summit that brings together world leaders and tech industry executives to discuss the potential risks associated with AI.

While governments must acknowledge the transformative power of AI and take steps to address credible threats, it is equally important that they avoid acting hastily. Rushed regulation could produce global rules and institutions that are poorly suited to the challenges AI actually poses, stifling innovation and hindering progress. Regulators have been slow to act in the past, as with social media in the 2010s, and there is an understandable desire to be proactive this time. That desire must be balanced with a careful, considered approach.

The notion that AI could drive humanity to extinction remains speculative. We need a clearer understanding of how such a threat might materialize, and there are as yet no established methods for assessing the risks AI poses. Extensive research and analysis are required before standards can be set and effective regulatory frameworks developed. Some tech executives have suggested creating a body akin to the Intergovernmental Panel on Climate Change (IPCC) to study AI and provide guidance on managing its risks.

In addition to the potential existential threats posed by AI, there are more immediate and tangible risks that require attention. New laws may be needed to govern the use of copyrighted materials in training LLMs and to define privacy rights in an era when AI models are increasingly reliant on personal data. Furthermore, AI could exacerbate the spread of disinformation, presenting a significant challenge for societies worldwide.

A rushed approach to regulation could also harm competition and innovation. Developing advanced AI models requires enormous computing resources and technical expertise, and currently only a handful of companies can create "frontier" models. New regulations could entrench these incumbents and limit opportunities for new entrants. A narrow focus on extreme risks could likewise lead regulators to view open-source models with suspicion, despite their potential to drive competition and innovation.

Regulators must be prepared to act swiftly if necessary, but they should not be pressured into making hasty decisions that could have long-lasting consequences. We still have much to learn about the direction of generative AI and the risks associated with it. The most prudent course of action for governments is to establish the infrastructure needed to study AI and its potential impacts, ensuring that those working on these issues have the resources they need.

Given the complexities and uncertainties surrounding AI, it may be challenging to establish an IPCC-like body to oversee its development. However, existing bodies could collaborate to address these challenges. Additionally, governments should encourage model-makers to adhere to a code of conduct similar to the voluntary commitments negotiated by the White House. While these commitments are not binding, they represent a step in the right direction, promoting transparency and accountability in the development and deployment of AI.

As AI continues to develop, regulators will gain a better understanding of the risks and challenges associated with these technologies, enabling them to develop more effective and nuanced regulatory frameworks. Eventually, we may see the emergence of regulatory regimes akin to those established for other transformative technologies, such as nuclear power and bioengineering. Achieving this will require time, patience, and a commitment to reflective thinking.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI.org

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.