2024: The Year AI Governance Faces Its Toughest Challenges

Published in ReadyAI.org · 5 min read · Jan 16, 2024

Navigating the Uncharted Waters of Technological Advancement and Global Risks

By: Rooz Aliabadi, Ph.D.

In 2024, the shortcomings of AI governance will become increasingly apparent as regulatory efforts falter, technology companies continue to operate with minimal oversight, and far more powerful AI models and tools proliferate beyond the reach of governments.

In 2023, we witnessed a wave of ambitious AI initiatives, policy statements, and proposed new standards, marked by collaboration across unexpected alliances. Major AI companies in the United States committed to self-regulated standards at a White House summit. Key players, including the United States, China, and most G20 nations, endorsed the Bletchley Park Declaration on AI safety. The White House issued a sweeping executive order on AI. The European Union reached consensus on its highly anticipated EU AI Act. And the United Nations formed an advisory committee on AI.

However, the pace of AI breakthroughs is outstripping advances in governance. In 2024, four main factors will widen this gap:

  • Political Dynamics — As governance frameworks take shape, differences in policy and institutional views will water down their ambitions. The compromise reached will be the minimum that governments can agree on politically and that technology companies do not see as a threat to their business models. That compromise will likely fall far short of what is needed to address AI risks adequately. The result will be a fragmented approach to evaluating foundational AI models, a lack of consensus on managing open-source versus closed-source AI, and no mandate to assess the impact of AI tools on societies before deployment. While the proposal for an Intergovernmental Panel on Climate Change (IPCC)-like body for AI is a positive step toward a unified global scientific understanding of the technology and its societal and political ramifications, it will take time to develop. Moreover, just as the IPCC alone has not resolved climate change, such an institution will not singlehandedly “solve” AI safety risks.
  • Institutional Inertia — Government attention is limited. As AI ceases to be the topic of the moment, many leaders will shift their focus to issues with more immediate political relevance, such as conflicts and global economic concerns. The sense of urgency and prioritization that AI governance efforts require will likely fade, especially where implementation demands significant government compromises. Once AI governance loses the spotlight, it will likely take a considerable crisis to bring the matter back to the center of policy discussions.
  • Strategic Defection — So far, the major players in AI have agreed to collaborate on AI governance, with technology companies voluntarily committing to standards and safeguards. But as AI technology progresses and its substantial benefits become more apparent, the lure of geopolitical leverage and commercial gain will encourage governments and corporations to abandon or ignore the non-binding commitments and frameworks they previously endorsed. That same motivation may lead some not to join these agreements in the first place.
  • Rapid Technological Advancements — AI’s evolution is progressing briskly, with its capabilities estimated to double approximately every six months — triple the rate of Moore’s Law. GPT-5, the next iteration of OpenAI’s large language model, is expected this year, yet it will likely be superseded within months by the next, as-yet-unimagined breakthrough. As models grow exponentially more capable, technological advancement is outrunning our ability to regulate and manage them in real time. (A back-of-the-envelope comparison of these growth rates follows this list.)
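To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the figures cited above: a roughly six-month doubling period for AI capability against Moore’s Law’s roughly 18-month doubling period. Both numbers are estimates, not measured constants.

```python
# Illustrative growth-rate comparison (assumed figures from the article:
# AI capability doubling ~every 6 months vs. Moore's Law ~every 18 months).

def growth_factor(months: float, doubling_period: float) -> float:
    """Total multiplicative growth over `months`, given a doubling period."""
    return 2 ** (months / doubling_period)

horizon = 36  # three years, in months

ai_growth = growth_factor(horizon, 6)      # 2^(36/6) = 2^6 = 64x
moore_growth = growth_factor(horizon, 18)  # 2^(36/18) = 2^2 = 4x

print(f"Over {horizon} months: AI ~{ai_growth:.0f}x vs. Moore's Law ~{moore_growth:.0f}x")
```

On those assumptions, three years of progress yields a 64-fold increase in capability versus a 4-fold one: the same exponential form, but a gap that widens with every cycle, which is the heart of the regulatory-lag problem.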

This leads to the fundamental dilemma of AI governance: addressing AI is less about regulating the technology itself, which is already too advanced for feasible containment, and more about understanding the business models fueling its growth. The key lies in curbing the motivations (capitalism, geopolitical strategy, and human creativity) that push AI down potentially hazardous paths. On this front, current governance mechanisms lag far behind. The result is an AI “Wild West,” akin to the largely unregulated social media domain, but with an even greater risk of harm.

For 2024, two risks stand out. The foremost is disinformation. In a year in which around four billion people will vote in elections, both domestic and international actors, especially Russia, are expected to use generative AI to sway electoral campaigns, deepen societal divisions, erode confidence in democratic processes, and instigate political turmoil at an unprecedented scale. Western societies, already deeply polarized and with voters increasingly relying on information from social media bubbles, are especially susceptible to such manipulation. Today, a crisis in global democracy is more likely to be triggered by AI-generated and algorithmically amplified disinformation than by any other factor.

Looking beyond elections, AI-generated disinformation will intensify ongoing geopolitical conflicts, including those in the Middle East and Ukraine. Kremlin-backed propagandists have already used generative AI to spread fabricated narratives about Ukrainian President Volodymyr Zelensky on TikTok and other platforms, and some Republican lawmakers have cited these stories as grounds for opposing additional U.S. aid to Ukraine. The past year also saw rampant misinformation surrounding the conflict between Hamas and Israel. While much of it spread without the help of AI, the technology is now set to become a key factor shaping rapid policy decisions. Fabricated images, audio, and video, propagated on social media by swarms of AI-driven bots, will increasingly be used by combatants, their supporters, and agents of chaos to manipulate public opinion, undermine authentic evidence, and escalate geopolitical tensions worldwide.

The second pressing risk is AI proliferation. While the United States and China have dominated the AI landscape to date, 2024 will see new geopolitical players, both nations and corporations, gain the ability to develop and acquire cutting-edge AI capabilities, including state-backed large language models and sophisticated intelligence and national security tools. At the same time, the rise of open-source AI will expand the ability of non-state actors to create and deploy novel weaponry, raising the likelihood of unintended incidents. That said, the same trend also opens the door to unprecedented economic opportunities.

AI resembles a “looming iceberg”: its potential benefits are more easily anticipated than its submerged risks. The impact of AI on markets and geopolitics this year may be uncertain, but its substantial influence in the years ahead is not. The longer AI goes without effective governance, the greater the chance of a systemic crisis, and the harder it will become for governments to catch up with the necessary regulations.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.