Legislating AI

New US AI Executive Order Reiterates Necessity for Regulations that Mitigate Harm from Artificial Intelligence

Nathan Summers
Luxembourg Tech School
9 min read · Mar 5, 2024

Originally published November 12th, 2023, on Luxembourg Tech School’s LinkedIn by Nathan Summers and Dr Sergio Coronado

The past year has seen a global explosion in public awareness of and interest in Artificial Intelligence (AI). With the release of ChatGPT in November 2022, ordinary people were, for the first time, able to use approachable, comprehensible AI tools and experience firsthand the potential of this technology. Since then, access to and interest in AI have only increased.

Recently, this has been accompanied by impassioned pleas for regulatory intervention to ensure that the technology remains helpful rather than harmful. As more people become aware of the promise of AI, its risks become apparent as well. The conversation about broader AI adoption is rightfully concerned with algorithmic bias, the lack of explainability and interpretability, the future of the job market, and the misuse or alteration of AI systems. To address and mitigate these issues before they become entrenched, comprehensive international legislation is required.

In December 2022, the Council of the European Union adopted its general approach on the EU AI Act. For much of the Western public, this was the first visible piece of legislation that sought to define AI and regulate what it could and could not be used for. To date, it remains the most internationally recognized piece of AI legislation. However, significant frontier AI research and development is being undertaken elsewhere, largely in the United States (US) and China, so the state of AI regulation in those countries remains consequential. By examining alternative approaches, more nuanced and, consequently, more effective AI legislation can be developed.

Until now, the US has lagged behind the other global AI leaders in enacting such regulation. This is particularly significant given that many of the major multinational technology companies actively working on frontier AI projects, such as OpenAI, Google, and Meta, are based in the country. On 30 October 2023, President Joe Biden signed an executive order on the safe, secure, and trustworthy development and use of AI. This order marks the first comprehensive federal attempt to regulate the development and use of AI in the US.

However, it is important to note that the executive order differs fundamentally from the EU AI Act. This brief will illustrate the differences between the two documents, examine the pros and cons of each, and discuss how these lessons can be applied to improve future AI legislation.

Frameworks for Change

Before delving into the details, it is worth noting that the EU AI Act is a preliminary document that forms the basis of an ongoing negotiation between the European Commission, Parliament, and Council to establish a legally binding law. These negotiations might introduce regulations different from those currently in the EU AI Act, and might provide member states with more guidance and rules than the current document. Since the negotiation is still underway, this brief focuses on the guidelines as they are outlined in the EU AI Act.

With this in mind, the EU AI Act establishes a tiered system of potential AI risk. The document defines four primary risk categories, listed below from most significant to least significant:

  1. Unacceptable risk.
  2. High risk.
  3. Limited risk.
  4. Minimal risk.

AI use cases are assigned to one of these risk tiers based on their potential to significantly disrupt people’s livelihoods, privacy, and other basic human rights. For example, the EU outright prohibits the use of AI for real-time biometric crime monitoring and social credit scoring on the basis that these uses fundamentally infringe upon its citizens’ rights. The remaining risk categories carry reporting and auditing requirements that vary with the potential impact of AI systems. Systems involved in critical infrastructure and essential services, such as healthcare, are subject to rigorous assessment before being put to market, while those used for generative purposes or within entertainment media need only declare that content is AI generated.
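
To make the tiered structure concrete, here is a minimal sketch of the risk-tier model as a simple lookup, written in Python. It is purely illustrative: the tier assignments and obligation summaries are simplified from the description above, and the names (`RiskTier`, `obligations_for`, the example use cases) are our own shorthand, not terminology from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 1  # prohibited outright
    HIGH = 2          # rigorous assessment before market entry
    LIMITED = 3       # transparency obligations only
    MINIMAL = 4       # no additional obligations

# Illustrative tier assignments, simplified from the Act's categories.
USE_CASE_TIERS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric crime monitoring": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "AI-generated entertainment media": RiskTier.LIMITED,
    "email spam filtering": RiskTier.MINIMAL,
}

# What each tier requires of a provider, per the summary above.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited; may not be brought to market",
    RiskTier.HIGH: "rigorous assessment and auditing before market entry",
    RiskTier.LIMITED: "must declare that content is AI generated",
    RiskTier.MINIMAL: "no additional requirements",
}

def obligations_for(use_case: str) -> str:
    """Look up a use case's tier and return its attached obligations."""
    tier = USE_CASE_TIERS[use_case]
    return f"{use_case}: {tier.name} -> {TIER_OBLIGATIONS[tier]}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(obligations_for(case))
```

The value of such a structure is that a new use case needs only a tier assignment to inherit an existing set of obligations, which is precisely the reusability discussed next.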

This approach is beneficial in that emergent use cases for AI can be assigned to an existing risk category, meaning that existing regulations and enforcement tools can be reused without additional bureaucracy. Furthermore, the establishment of risk categories firmly asserts that certain AI use cases are morally unacceptable. This is a strength of the EU AI Act, as it ensures that humans will maintain control over decision making for certain crucial tasks. This is possible because the EU AI Act establishes what is referred to as a “general approach policy”: it focuses on establishing a guiding standard and framework, grounded in an initial moral assertion about the acceptable use of AI. This is valuable as it clearly and unequivocally demonstrates the EU’s commitment to a human-centric approach to AI legislation.

Conversely, the US approach, as outlined in the Executive Order, is more specific. It lacks the general applicability of the EU AI Act, focusing on a wide range of individual use cases instead of broad categories of risk.

The US approach is consequently less restrictive: it does not designate any use cases as outright prohibited. Instead, it leaves the determination of what should be allowed, what should be prohibited, and how this should be enforced to the relevant federal agencies. These include the Department of Commerce for commercial matters, the Departments of Defense and Homeland Security for defense and security matters, and the National Institute of Standards and Technology for technical matters. Consequently, the focus of the US AI Executive Order is threefold: establishing areas of concern, designating relevant oversight agencies, and mandating a timeline within which regulations must be developed.

The US AI Executive Order highlights areas of concern that are more tangible than the risk categories outlined in the EU AI Act. For instance, the Executive Order addresses the potential for AI to assist in the development of weapons of mass destruction and tasks relevant agencies with developing strategies and regulations to minimize this risk. It also identifies the potential for AI adoption to impact the job market, and seeks to address these challenges, among other specific areas of concern.

Concurrently, the Executive Order targets sectors that are distinct from AI development yet fundamental to its future. For example, it lays out strategies to ensure that the US remains competitive in the research and development of frontier AI systems, including alterations to the visa application and immigration process for AI experts. Furthermore, it tasks agencies with addressing the risks and potential benefits of open-source and widely available models. While these issues are not directly related to how AI can be used, they remain significant considerations for the further development of AI.

Conversely, the EU AI Act does not discuss avenues to ensure that the EU technology industry remains competitive in the development of frontier AI systems. This is likely considered outside the scope of the Act, to be pursued either on a per-country basis or through other EU funds and institutions. Instead, the EU AI Act focuses primarily on consumer AI models that are brought to market, in contrast to the US AI Executive Order’s consideration of non-market uses of AI.

The EU AI Act does not provide specific guidance on how to conduct AI assessments. This responsibility is left to member states, who must ensure that their national laws comply with the EU AI Act. This approach leaves member states free to tailor their AI assessment procedures to specific AI use cases while adhering to the overall principles and requirements of the Act.

Herein lies another potential strength of the general approach framework adopted by the EU AI Act. Allowing several perspectives to “compete” may, through collaboration, lead to the emergence of a collection of best practices for regulating, assessing, and enforcing the development and use of AI systems. However, it may instead be the case that, rather than a single consistent best practice emerging, competing norms develop across the EU and make research and development within Europe more challenging.

Similarly, it is important to note that the Executive Order does not outline any specific conclusions or regulations. It provides the individual agencies with the opportunity to reach their own conclusions on their prescribed tasks.

This can be seen as a strength of the Executive Order, as it does not mandate a political conclusion and instead allows domain experts to make their own determinations. However, this approach also entails a less rigorous commitment to human rights than the EU AI Act. To illustrate, the final US approach might not forbid social credit scoring or real-time biometric surveillance, should the federal agencies decide that the predicted benefits outweigh the predicted risks. This weakens the US AI Executive Order, as it leaves the door open to potentially immoral uses of AI.

As discussed above, neither document lays out exactly how these standards and frameworks should be enforced. While this may seem counterintuitive, both documents are starting points for more granular AI legislation. The legislation that arises from them will require closer scrutiny and is what will actually shape the use and development of future AI systems.

Distinct Philosophies

Comparing the two documents can help identify different philosophies and approaches to AI legislation; however, they should not be considered equivalent. Fundamentally, the two documents set out to achieve different goals:

The EU AI Act reiterates the EU’s commitment to digital privacy and represents a more cautious approach to AI legislation. It firmly establishes that certain use cases for AI are acceptable while others are not. Its focus is primarily on the legal and commercial use of AI systems within the EU; it does not address the research and development of AI systems.

On the other hand, the US AI Executive Order identifies a wide array of potential problem areas associated with AI. These extend beyond commercial and legal use cases and focus heavily on the technical challenges inherent in regulating an emergent technology. Significantly, it does not establish a position on acceptable and unacceptable uses of AI. Instead, it makes a broad commitment to ensuring that AI is used for general social betterment.

Conclusions and Recommendations

These documents, although distinct in their approaches, share the common goal of ensuring the responsible development and use of AI technology. The EU AI Act takes a cautious stance, delineating acceptable and unacceptable AI use cases, emphasizing digital privacy, and focusing on legal and commercial applications within the EU. In contrast, the US AI Executive Order addresses a broad spectrum of AI-related challenges, concentrating on technical complexities and tasking agencies with identifying tangible solutions.

The convergence of these philosophies is essential for effective AI legislation. Striking a balance between defining specific acceptable uses, safeguarding fundamental human rights, upholding digital privacy, and fostering technical understanding is crucial. As the dialogue around AI regulation continues, it is imperative to synthesize the strengths of both approaches. Only through such collaboration and careful consideration can comprehensive and effective AI legislation be achieved, guiding the responsible evolution of this transformative technology.

Building from this, it is recommended that future AI legislation take firm stances on acceptable and unacceptable AI use cases, as demonstrated in the EU AI Act. However, industry and academic experts should be involved in developing the regulatory and enforcement tools, so that any regulation drafted is technically feasible to uphold. Furthermore, it is imperative that any AI legislation consider not only the legal, market-oriented uses of AI, but also the multitude of use cases that exist outside the legal market. Beyond this, regulating AI must involve international cooperation and agreement, as there no longer exist entirely distinct markets for information technologies. Finally, care should be taken to ensure that regulation strikes a balance between protecting citizens and stimulating further research and development of frontier AI systems.
