Balancing Innovation and Responsibility: Predicting U.S. AI Governance Strategies

Published in Centric Tech Views · Jun 18, 2024

By Joseph Ours, Centric Consulting Director, AI Strategy and Modern Software Delivery

[Image: A wooden gavel in front of a bookshelf]
As AI technology continues to evolve and become more widely used, the U.S. will likely model its regulations on European ones.

The U.S. needs to keep pace as governments worldwide move forward on artificial intelligence (AI) regulation. Here’s what U.S. lawmakers need to consider as they navigate potential AI laws.

The U.S. approach to AI governance at the federal level has developed more slowly than its European counterpart. The European Union (EU) approved the world’s first comprehensive AI regulations to govern the technology as it rapidly spreads into every part of our lives. The framework bans some applications and limits the use of others. It outlines requirements for a range of AI uses, from simple chatbots to AI used in critical infrastructure or medical devices.

Today, AI regulation in the U.S. is a patchwork of ideas and guidelines rather than a substantive blanket regulatory response. Standalone laws and AI-related provisions in broader acts include the National Artificial Intelligence Initiative Act of 2020, the AI in Government Act, and the Advancing American AI Act, but Congress has yet to pass broad legislation regulating AI.

Here’s what the administration and legislators will need to consider as they work to preserve civil and human rights while still embracing innovation and AI advancement.

Balancing Innovation and Regulation

Given the strong capitalist drive in the U.S., lawmakers will likely emphasize fostering innovation and maintaining a competitive edge in AI technologies. This could lead to a somewhat permissive regulatory approach, particularly to avoid stifling innovation.

Legislation might focus on enabling rapid development while ensuring safety and ethical standards. For example, echoing views expressed by some tech executives, the U.S. could initially adopt a framework that emphasizes voluntary commitments and industry-led standards rather than immediate strict regulations.

Sector-Specific Regulations

The U.S. might introduce sector-specific regulations for AI, similar to those under the EU AI Act, based on the sector’s risk level. High-risk sectors like healthcare, finance, and critical infrastructure might see stricter regulations that focus on transparency, data integrity, and accountability to protect consumer rights and safety.

Data Privacy and Personal Liberties

Given the renewed interest in personal privacy in the U.S., any AI legislation will likely include some protections for personal data. However, due to the U.S.’s capitalistic tendencies and the fact that law often lags behind technology, it’s unlikely to be as robust as the EU’s protections.

Still, some elements might mirror aspects of the EU’s General Data Protection Regulation (GDPR), such as consent, data minimization, and the right to explanation, particularly for AI decisions that directly affect individuals.

Case law, as filed and ruled upon, will likely drive many of the AI protections that develop over the next 10 years — especially regarding protected classes. Deepfakes that impact companies or politicians are likely to drive this as well.

Transparency and Disclosure Requirements

Transparency could be a major component of U.S. AI regulation. This could involve requirements for companies to disclose their AI systems’ design, intent, and capabilities to users, especially for general-purpose AI and AI involved in content creation like deepfakes.

The U.S. might also require detailed documentation and audit trails for AI decision-making processes, particularly in high-risk applications. This is supported by various White House statements and declarations.

Ethical Guidelines and Standards

Given the history of the Patriot Act and the surveillance controversy surrounding it, the U.S. is unlikely to establish ethical guidelines and standards that would bar the development and deployment of AI systems capable of mass surveillance or other practices that infringe on personal freedoms and rights. There may be limited protections, but broad-scale prohibitions are unlikely without a precipitating event.

Oversight and Enforcement Mechanisms

Considering AI technologies’ complexity and potential impact, the U.S. might create a new regulatory body or empower existing agencies to oversee AI development and enforce AI regulations. This body could conduct risk assessments, oversee compliance, and handle violations. Whether such a body emerges remains the biggest unknown in U.S. AI governance.

Compatibility with Global Standards

As AI technologies often cross borders, the U.S. will need to engage with international bodies to develop global standards and ensure that U.S. regulations are compatible with those in other major markets, especially the EU. This will facilitate cooperation and prevent conflicts in multinational operations.

As AI technology continues to evolve and become more widely used, the U.S. will likely model its regulations on European ones. Until then, business owners and consumers alike will have to navigate the uncertainty.

Centric Consulting is an international management consulting firm with unmatched in-house expertise in business transformation, hybrid workplace strategy, technology implementation and adoption. Founded in 1999 with a remote workforce, Centric has established a reputation for solving its clients’ toughest problems, delivering tailored solutions, and bringing deeply experienced consultants centered on what’s best for your business.

