A Brief Analysis of Initial Steps in USA & EU AI Regulatory Frameworks

Celeste Box
8 min read · Jan 27, 2024


After Sam Altman’s brief departure from OpenAI a few weeks ago, the AI space was called on to pause the development of generative AI, prompting a closer look at the safety and ethical considerations surrounding artificial intelligence. The drama will likely subside, leaving a reorganized OpenAI as a major player in collaboration with Microsoft.

But the existential threat posed by AI demands taking these concerns seriously and investing more in controlling and regulating AI developments. A balanced approach is key: one that acknowledges the challenges and risks associated with AI while promoting continuous innovation and responsible development in a rapidly evolving landscape.

Photo by Andrew Neel on Unsplash

When it comes to safety concerns, slowing down or halting the development of AI may not be a solution. Instead, we can channel more energy into creating AI systems capable of effective monitoring, as human oversight may struggle to keep up with large-scale systems. Another issue revolves around open-source AI. Of course, OpenAI’s initial commitment to open-source principles is at the center of this discussion (even though, in practice, open-sourcing a model with a reported 1.7 trillion parameters may do little to aid comprehension).

Additionally, the debate touches on the intersection of for-profit and not-for-profit motives. Microsoft’s significant investment played a crucial role in OpenAI’s developments. Despite potential concerns about the rapid pace of commercialization, economic incentives naturally push toward ensuring the safety of AI systems (a company focused on selling access to an unsafe foundational AI model is economically unsustainable).

Looking ahead to a landscape where fine-tuned, specialized versions of foundational AI models become the focus of development, the cost and effort required to build these foundational models will lead to increased caution among major players (after all, GPT-4 reportedly cost around $100 million to train, taking some 100 days on 25,000 Nvidia chips).
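Those figures are unconfirmed estimates, but they are easy to sanity-check. Here is a minimal back-of-envelope sketch in Python; the GPU count and duration are the article’s quoted numbers, while the hourly rate is purely my illustrative assumption:

```python
# Back-of-envelope check of the GPT-4 training figures quoted above.
# gpus and days are the article's reported estimates; the hourly rate is
# an illustrative assumption, not a reported number.
gpus = 25_000            # reported Nvidia chips
days = 100               # reported wall-clock training time
usd_per_gpu_hour = 1.70  # assumed cloud-style rate (illustrative)

gpu_hours = gpus * days * 24
cost_usd = gpu_hours * usd_per_gpu_hour

print(f"{gpu_hours:,} GPU-hours -> ~${cost_usd / 1e6:,.0f}M")
# 60,000,000 GPU-hours -> ~$102M, consistent with the ~$100M figure
```

Under those assumptions, the quoted cost and hardware numbers hang together, which is the point: budgets of this scale are exactly what make major players cautious.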

Regulatory Challenges and Perspectives

USA

Regulating AI is like trying to shoot a running cheetah. You have to be an expert and, besides that, I bet you need some luck too. The problem lies in the ability of lawmakers to understand and regulate applied mathematics effectively. Altman’s 2023 call before the Senate for regulatory review should be implemented, establishing new regulatory agencies. By the way: this is not guaranteed to succeed and is not even the best strategy. It’s only the one we have on the table.

Last October, President Biden took a significant step by signing an Executive Order on AI, titled New Standards for AI Safety and Security, highlighting the critical need to govern this transformative technology. It’s worth recalling that in July 2023, big tech players made commitments to the White House involving the establishment of watermarking systems, fueling corporate research, addressing societal challenges, and investing in cybersecurity. Let’s also note that forums previously convened in response to an executive order featured influential AI lab CEOs and civil society leaders, addressing topics such as AI innovation, copyright, national security, transparency, and privacy (the closed-door nature of these forums has sparked debate, but proponents argue that they provide a conducive environment for genuine learning and discussion, free from political posturing).

The Executive Order encompasses a spectrum of measures designed to enhance the safety and security of AI usage.

One of the critical questions raised during discussions surrounding the executive order is whether a new regulatory framework is required for AI. The unresolved debate centers on whether existing laws can be applied to AI or whether an entirely new set of rules must be crafted. This Executive Order is viewed as just the beginning: President Biden is urging Congress to pass additional legislation for comprehensive AI regulation.

In Congress, Senate Majority Leader Chuck Schumer has taken a prominent role, organizing private AI insight forums: closed-door events intended to educate Congress members on critical AI issues before regulatory decisions are made. Simultaneously, legislative proposals are emerging, broadly categorized into comprehensive frameworks and smaller, specific acts. Senators Blumenthal and Hawley introduced a comprehensive framework advocating for an independent oversight body to administer a licensing regime for advanced AI models. Their proposal also seeks legal accountability for AI-related harms, national security measures, transparency promotion, and consumer protection. Of course, on the other side of the coin, these promising contacts between companies and NGOs are happening while lobbying in Washington increases as well, raising questions about potential conflicts of interest and regulatory capture.

Senate Minority Whip John Thune, along with Democrat Amy Klobuchar, is working on a ‘light-touch’ AI bill, aiming to mitigate risks without imposing heavy-handed regulation. Additionally, several smaller acts address specific challenges, such as the Artificial Intelligence Advancement Act and the Schatz-Kennedy AI Labeling Act. Recent bipartisan legislation includes the Protect Elections from Deceptive AI Act and the Federal No Fakes Act, both targeting deceptive AI-generated content in the context of elections and likeness laws.

Back to the executive order, let’s look at the key elements it includes:

  • New safety guidelines intended for AI developers. Developers are required to share AI safety test results, fostering the creation of standards, tools, and tests to ensure AI’s safety.
  • Standards for disclosure of AI-generated content. The order mandates the disclosure of AI-generated content to promote transparency and accountability (a hypothetical sketch of what such a label could look like follows this list).
  • Specific requirements for federal agencies integrating AI into their operations, such as protecting against the use of AI to engineer dangerous biological material and safeguarding Americans from AI-enabled fraud.
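The order does not prescribe a technical format for that disclosure. Purely as an illustration, here is a minimal Python sketch of one way a provider could attach a machine-readable disclosure record to generated text; the field names and structure are my assumptions, not anything the order specifies:

```python
# Hypothetical sketch only: one way a provider might attach a
# machine-readable disclosure record to generated text. The field names
# are invented for illustration; the executive order prescribes no format.
import json
from datetime import datetime, timezone

def label_ai_content(text: str, generator: str) -> str:
    """Wrap generated text in a minimal disclosure record."""
    record = {
        "content": text,
        "ai_generated": True,  # the disclosure itself
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(label_ai_content("Sample model output.", "example-model-v1"))
```

Whatever the eventual standard looks like (watermarks, metadata, or visible notices), the substance is the same: the consumer must be able to tell that a machine produced the content.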

You may be thinking of problems like facial recognition, bias in AI decision-making, and algorithmic discrimination; the executive order addresses these concerns by recognizing and tracking algorithmic bias. There is also a commitment to prioritize the welfare of the workforce, with the executive order serving as a stopgap and emphasizing a commitment to ensuring that AI does not lead to job displacement (particularly in industries like manufacturing, where fears of automation are justified).

However, the focus is not solely on protecting vulnerable links in the chain; it extends to power — U.S. power. There is a central interest in maintaining the United States’ leadership in the tech space, particularly in AI. In this sense, the order acknowledges the national security and competitiveness implications and, of course, the need for strategic measures to keep the States at the forefront. Proposals include leveraging institutions like the National Institute of Standards and Technology (NIST) to establish voluntary standards and facilitating talent retention through visa policies.

EU

On the other side of the Atlantic, the EU adopted landmark legislation for artificial intelligence. Its parliamentarians reached a groundbreaking agreement on legislation that will govern the use of AI within its member states.

The deal builds on the European Union’s proposal for an Artificial Intelligence Act. Initially presented by the European Commission on April 21, 2021, and later voted on by the European Parliament on June 14, 2023, this legislation is a fundamental step in regulating the rapidly evolving field of artificial intelligence (AI).

The AI Act is a substantial document, spanning almost 90 pages and accompanied by a 10-page summary. In this article, we aim to distill its most crucial aspects, shedding light on why it holds significance for individuals beyond EU borders.

The EU is positioning itself in a unique regulatory position: it is the first major regional business area to regulate AI comprehensively, ahead of the United States and Asian markets.

While the AI Act is currently focused on the EU, its potential global influence is evident through the Brussels effect — the tendency for regulations that apply to the European market to influence global standards. As seen with GDPR compliance, companies often adhere to the most stringent standards to cater to a broad market. Companies striving for global market access may adopt the AI Act’s principles, influencing AI usage standards globally.

This comprehensive deal encompasses various aspects of AI, including imposing limits on facial recognition technology and placing restrictions on the use of AI to manipulate human behavior. Whether the framework strikes its intended delicate balance, ensuring adherence to fundamental rights and European values while not stifling the development of the AI industry within Europe, is yet to be seen. Only when time passes and we see how this framework works will we know.

The regulation primarily applies to providers of AI systems established within the EU or in third countries placing AI systems on the EU market. Users of AI systems located within the EU are also subject to the regulation. Notably, military purposes are excluded from the AI Act’s scope.

Key Points

The EU defines AI as:

Software developed using techniques listed in Annex I that is capable of generating outputs such as content, predictions, recommendations, or decisions aligned with human-defined objectives.

  • Tough Penalties and a Human-Centric Approach: The legislation promises stringent penalties for those who violate the established rules (national-level authorities will oversee the implementation of the AI Act, imposing fines of up to 30 million euros or 6% of total worldwide annual turnover; a small sketch of how that cap plays out follows this list). The Act promotes a human-centric approach to AI development, with a commitment to respecting fundamental rights.
  • A Launchpad for European Startups (and Researchers): The Act aims to position them to lead the global race in AI development. Here we have regulatory sandboxing, which encourages development and innovation through a controlled environment, facilitating the testing and validation of innovative AI systems for a limited duration before their market release.
  • Dangers of AI and EU’s Role: The EU identifies potential dangers arising from AI and sees itself at the forefront of a revolutionary shift in business, acknowledging that AI will impact every facet of daily life in the future.
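To make the penalty clause concrete, here is a minimal Python sketch of how the headline cap could be computed. It assumes, as in comparable EU regimes such as the GDPR, that the higher of the two figures applies; that reading is my assumption, not a quote from the Act:

```python
# Illustrative only: the AI Act's headline fine cap as described above.
# Assumes (as in comparable EU regimes such as the GDPR) that the higher
# of the two figures applies; that reading is an assumption here.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the 30M-euro / 6%-of-turnover cap."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# EUR 2 billion turnover: 6% (EUR 120M) exceeds the flat 30M cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
# EUR 50 million turnover: the flat EUR 30M cap dominates.
print(f"{max_fine_eur(50_000_000):,.0f}")     # 30,000,000
```

The design choice matters: a percentage-of-turnover cap scales the deterrent to the size of the company, so large providers cannot treat the flat figure as a cost of doing business.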

Measures Implemented and Their Implications

Risk Categorization & Classification of AI Applications

The EU categorizes AI applications into four risk classes. Some applications are forbidden outright, such as manipulative subliminal techniques, the exploitation of vulnerable groups, and mass-scale facial recognition (with exemptions for military and law enforcement).

Then we have high-risk applications, like those impacting critical infrastructure, education, and law enforcement, which face new rules and certification requirements. Those in self-driving cars, for instance, are allowed but must be certified and open to scrutiny. High-risk AI systems must undergo CE certification and registration, complying with safety legislation in their respective fields. The certification process involves adherence to requirements related to risk management, testing, technical robustness, data governance, transparency, human oversight, and cybersecurity. Third-party assessments are mandatory before these systems enter the market.

Medium-risk applications, like chatbots, are allowed without restrictions, but transparency is mandated: consumers must always know whether they are interacting with a machine or a human, and must be able to find out how these systems work.

And finally, there are largely unregulated AI applications, like audio- and video-altering programs (those that produce deepfakes). They are not otherwise regulated, as the EU deems them to pose a lower risk, but they carry transparency obligations just the same (disclosing data sources, benchmark scores, and machine-generated content details). A compact sketch of the whole four-tier scheme follows.
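As an illustration only, here is a minimal Python sketch of the four-tier scheme described above; the example applications and the obligation labels are simplifications drawn from this article, not the legal text:

```python
# Simplified sketch of the four-tier risk scheme described above.
# Category names follow the article; the example applications and
# obligation labels are illustrative, not the legal text.
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "forbidden outright"
    HIGH = "certification, registration, third-party assessment"
    MEDIUM = "allowed, transparency mandated"
    MINIMAL = "transparency obligations only"

EXAMPLES = {
    "mass-scale facial recognition": RiskClass.UNACCEPTABLE,
    "self-driving car system": RiskClass.HIGH,
    "customer-service chatbot": RiskClass.MEDIUM,
    "video-altering (deepfake) app": RiskClass.MINIMAL,
}

for application, risk in EXAMPLES.items():
    print(f"{application}: {risk.name} -> {risk.value}")
```

The tiering is the Act’s central idea: obligations scale with risk, so a chatbot and a border-control system are never treated under the same rules.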

Of course, there have been mixed reactions. Businesses in general have voiced concerns about overregulation and its potential impact on competition and innovation, and about the possibility of driving startups away to other regions. Consumer protection groups say the Act does not go far enough in protecting data, especially in certain AI applications (like toys that could influence children’s thoughts and behavior).

The mixed reactions from both the business community and consumer protection groups highlight the delicate balance that the EU aims to strike with this legislation. Even though Europe positions itself as a global leader in ethical AI development, only time will reveal the impact of these regulations on innovation, competition, and the ethical use of artificial intelligence, and therefore the true social benefit (or not) that these regulations hold… in fact, that is also part of the risk we face with AI: even when we try to intervene in the process, it is really difficult to do good work with a moving target.
