Shaping the Future: Navigating the Dynamics of Power and Control in the AI Era

ReadyAI.org
Published Jul 29, 2023 · 7 min read

By: Rooz Aliabadi, Ph.D.

Introduction:

Our world has always been in a state of constant flux, with power and resources shifting in response to major technological advancements. Nuclear weapons marked a notable division between nations, and the Industrial Revolution fundamentally reshaped economic and military power structures. Now, we stand on the brink of another seismic shift, this one driven by the artificial intelligence (AI) revolution. In this article, I examine the global dynamics of AI and its potential implications for the distribution of power. As AI models become increasingly complex and resource-intensive, power has begun to consolidate within the big tech companies that possess the necessary computational resources. Through an examination of AI models such as OpenAI’s GPT-4, I explore the growing divide in AI capabilities and access, drawing parallels with historical power shifts. I also highlight the potential threats posed by these powerful AI systems and the need for comprehensive regulation and governance. I further emphasize the importance of balancing AI’s societal benefits against its risks, with particular emphasis on the control of computational hardware. With private companies leading the AI revolution, I attempt to explain the shifting power dynamics and the consequent need for a collective, global approach to ensure the equitable distribution of AI’s benefits and risks.

Section I: The Advent of AI Power Dynamics

Emerging technologies can drastically alter the global distribution of power and resources. For instance, the advent of nuclear weapons created a divide between those who possess them and those who do not. Similarly, the Industrial Revolution significantly boosted Europe’s economic and military power, triggering a surge in colonial expansion. As we navigate the current era of the artificial intelligence (AI) revolution, a key question emerges: Who will be the beneficiaries of this potent new technology, and who will be at a disadvantage?

Until recently, AI has been a broadly distributed and swiftly growing technology: open-source AI models are easily accessible on the internet. However, the shift toward larger models, such as OpenAI’s ChatGPT, has begun to centralize power among the big tech companies that can afford the necessary computing hardware. The global dynamics of AI will likely depend on whether AI concentrates power in a few hands, as nuclear weapons did, or spreads widely, like smartphones.

The availability of computational hardware has created a new divide in the AI era. Pioneering AI models such as ChatGPT and its successor, GPT-4, require considerable computing resources. They’re trained using thousands of specialized chips over extended periods. The production of these chips is limited to a few key countries: Taiwan, South Korea, the Netherlands, Japan, and the United States. Consequently, these nations wield significant control over who can access cutting-edge AI technologies.

In response to the nuclear age, countries implemented control over materials required to create nuclear weapons, which slowed nuclear proliferation. Similarly, controlling the specialized hardware needed to train large AI models will likely influence the global power dynamic.

Section II: The Rise of Powerful AI Models: Opportunities and Risks

The deep learning revolution that began in 2012 is now experiencing several key paradigm shifts. New generative AI models, such as ChatGPT and GPT-4, are more versatile than previous AI systems. Although they don’t yet possess human-like intelligence, they can perform a wide range of tasks. For instance, GPT-4 can achieve human-level performance on the SAT, GRE, and the Uniform Bar Exam, among other capabilities.

However, these general-purpose AI models present both opportunities and threats. They can potentially bring extensive societal benefits, but they also hold the potential for significant harm. For instance, AI models are already capable of generating disinformation on a large scale. Future harms could potentially include aiding in cyberattacks, or even in the creation of chemical or biological weapons.

AI systems are continually evolving, and researchers are increasingly equipping AI models with the ability to use external tools such as the internet, to interact with other AI models, and to conduct scientific experiments in remote “cloud labs.” This rapid advancement has raised concerns about the emergence of power-seeking behaviors in AI, such as self-replication or concealing intentions from humans.

Unfortunately, our current AI models aren’t foolproof, and there isn’t a reliable way to guarantee their safety yet. Despite efforts by organizations like OpenAI to ensure the safety of models like ChatGPT and GPT-4, there have been instances where the AI model has shown readiness to synthesize harmful compounds, posing potential threats in the wrong hands.

Given these concerns, there’s a growing demand for AI regulation. Some AI researchers suggest pausing the development of next-generation AI models due to the potential for societal harm. Leading AI researchers and lab heads have even signed an open letter warning about the existential risk future AI systems might pose to humanity. Governments worldwide, including the U.S. and the European Union, are also looking into the matter.

Section III: The Imperative of Global AI Governance

One way to manage these risks while still reaping AI’s benefits is by controlling access to the computing hardware necessary to train powerful AI models. Unlike algorithms and data, which are digital, chips are physical and can be regulated more easily.

The production of this hardware is already limited to a few countries, creating an imbalance in the AI community. Unlike in the space race or the Manhattan Project, it’s private companies, not governments, that lead AI research. These companies, due to their vast financial resources, have access to cutting-edge AI models, whereas academic institutions find it increasingly difficult to keep up with these developments due to cost constraints.

This trend is leading towards a world where a small number of major tech companies control extremely powerful AI systems, while everyone else depends on them for access. Therefore, it’s imperative that we find ways to manage these shifts in power dynamics and ensure that the benefits of AI are accessible to all while minimizing potential risks.

Given the increasing importance of AI, it’s natural that the politics surrounding AI hardware are becoming more heated. In October 2022, the Biden administration implemented restrictions on exporting the most advanced AI chips and semiconductor manufacturing tools to China. Though these top-tier chips aren’t manufactured in the United States, they’re made using American technologies. This gives the U.S. unique influence over who can acquire them.

In the long run, market dynamics, geopolitics, and technological improvements could challenge control over AI proliferation. However, the goal is to slow down and manage the proliferation, buying time for improved safety standards, societal resilience, and better international cooperation.

Learning from nuclear nonproliferation efforts and controlling the spread of dangerous AI capabilities can buy more time for developing improved safety standards and fostering international cooperation. That cooperation should not be restricted to allies; it should also include competitor nations, ensuring that AI is developed safely and responsibly on a global scale. Just like nuclear nonproliferation measures, global AI governance will evolve with time. Despite the swift advancement of AI technology, immediate action is essential to develop effective solutions for potential risks.

Conclusion:

In the dawning age of artificial intelligence (AI), societal and global power dynamics are undergoing a profound shift. A technology that was once widespread and accessible is now concentrated in the hands of a few entities, primarily large tech companies, due to the increasing demand for computational resources. This shift resonates with significant historical transitions like the advent of nuclear weapons or the Industrial Revolution, raising a crucial question about the distribution of power and resources in the AI era.

The nations that control the production of advanced hardware for AI training inevitably hold significant power in shaping the landscape of AI accessibility. This imbalance could lead to an uneven distribution of AI’s benefits and risks, with the possibility of powerful AI models being misused by entities with ill intentions. It’s thus becoming increasingly important to implement effective regulations and control measures, extending from hardware to the final trained AI models.

The evolution of AI systems towards more autonomy and the ability to perform a wide range of tasks accentuates the urgency of these measures. Despite the significant societal benefits these systems can bring, they also pose risks like generating large-scale disinformation, aiding cyberattacks, or even facilitating the creation of biological weapons. These potential harms necessitate a comprehensive approach to AI regulation, balancing the benefits and potential risks.

The unique situation in which AI advancement is driven primarily by private corporations also raises new challenges. It calls for government bodies to take an active role in establishing regulations that not only safeguard public safety but also ensure that corporate interests align with the public’s welfare. Regulation should span everything from controlling hardware to managing the deployment of AI models, enforcing stringent cybersecurity measures, and promoting transparency and third-party auditing.

Lastly, the challenge of AI regulation isn’t confined to a single nation; it is global. International cooperation is crucial for the effective governance of AI systems. Despite the complexities of geopolitical dynamics, collective effort is needed to establish an AI governance regime that prioritizes safety, security, and nonproliferation. Just as nuclear nonproliferation measures have evolved over time, AI governance will need to adapt and evolve, requiring immediate action to manage the risks associated with the powerful AI models that are transforming our world.

ReadyAI — GenerativeAI-Chat GPT Lesson Plan and others are available FREE to all educators at edu.readyai.org

This article was written by Rooz Aliabadi (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.
