The Biden Administration’s Plan to Regulate AI without Waiting for Congress

Adam Thierer
12 min read · May 5, 2023


The Biden administration is slowly laying the groundwork for broad-based regulation of artificial intelligence (AI) and the computational economy — and they’re not waiting for Congress to act before doing so. The sweep of the coming political efforts around AI will be far-reaching, involving many different agencies acting in unison to stretch their powers well beyond their statutory authorizations in an attempt to assert greater control over algorithmic systems.

These moves are well-intentioned and focused on ensuring that AI is safe and does not negatively impact various rights. Nonetheless, this effort by the Biden administration to scale up algorithmic controls could have a negative impact on the overall innovative capacity of the United States and our technological and geopolitical standing relative to China and other nations. The danger exists that America could be about to preemptively surrender its global leadership role on the digital technology front.

Laying the Foundations for Backdoor Regulation

This morning, a handful of top AI CEOs were called into the White House to discuss what they’re doing to fall in line with AI guidelines proposed by the Biden administration. A press release was issued before the meeting even started announcing “an independent commitment from leading AI developers” to advance “responsible AI” by having their systems evaluated “to explore how the models align with the principles and practices outlined” in recent administration statements.

Meanwhile, in a hotly worded New York Times op-ed yesterday, Federal Trade Commission Chair Lina Khan says, “We Must Regulate A.I. Here’s How.” Her essay practically declares open war against America’s algorithmic innovators, saying the previous digital revolution “came at a steep cost,” and she insists policymakers must make sure that “history doesn’t repeat itself” through “race-to-the-bottom business models and monopolistic control.”

The push for backdoor AI regulation is thus well underway.

The Biden administration’s push for greater federal algorithmic control accelerated last October with the release of its “Blueprint for an AI Bill of Rights” and an accompanying list of “Key Actions to Advance Tech Accountability and Protect the Rights of the American Public.” Unfortunately, as I noted in a recent R Street report, the Biden administration’s AI framework “reads more like a blueprint for aspiring tech critics and trial lawyers who hope to bottle up algorithmic innovations rather than helping to advance them.”

The AI Blueprint opens with a litany of frightening claims that algorithmic systems are “unsafe, ineffective, or biased” and “deeply harmful,” that they “threaten the rights of the American public,” and that they “are used to limit our opportunities and prevent our access to critical resources or services.” The document continues like that for more than 50 pages, constantly stressing possible dangers over potential opportunities. Worst-case scenarios and fear-based thinking dominate this document and many administration statements on AI. The profound benefits associated with AI and algorithmic systems are treated as mostly of secondary concern.

Meanwhile, Khan’s FTC has been busy issuing a series of recent blog posts in which agency officials have hinted that they’re ready to bring the hammer down on algorithmic innovators. While the agency has broad authority to police markets for “unfair or deceptive acts or practices,” those consumer protection powers have mostly been focused on ex post enforcement following a formal investigation. The agency’s recent statements suggest a move toward more aggressive and immediate actions on the AI front. Other agencies appear ready to act as well. Last week, Khan and the heads of the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission released a joint statement saying that they expect algorithmic systems to “turbocharge fraud and automate discrimination,” and that they’d be looking to take preemptive steps to address it.

Meanwhile, Biden’s Department of Commerce also recently launched a new investigation into “AI accountability,” asking for public comments about how to implement one increasingly popular governance mechanism: algorithmic audits and impact assessments. These would involve some sort of formal evaluation of algorithmic systems either before or after release, which is basically what today’s White House announcement says big AI firms have already “voluntarily” agreed to.

In sum, a great deal of AI regulation is potentially on the way, despite the fact that Congress has not passed any legislation authorizing the broad expansion of authority that many officials in the Biden administration seem ready to undertake unilaterally.

How Voluntary Is Voluntary?

At least thus far, much of what the administration has suggested in terms of algorithmic governance has been billed as “voluntary” guidelines and principles, and some of it has been quite sensible in nature. In January, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, which the agency notes is a voluntary set of guidelines “designed to be responsive to new risks as they emerge” instead of attempting to itemize them all in advance. NIST notes that “[t]his flexibility is particularly important where impacts are not easily foreseeable and applications are evolving.” NIST also observed that while some of the risks and benefits of AI are well known, assessing the degree of actual harm associated with negative impacts can be challenging due to measurement issues or differing conceptions of what constitutes harm in the first place.

This is a sensible way to think about AI risks because it makes clear that it will be difficult to identify and address every potential issue and concern in advance. This NIST AI Risk Framework, which builds on a previous multi-stakeholder effort on cybersecurity risk, is meant to help AI developers better understand how to identify and address various types of potential algorithmic risk. By contrast, the Biden AI Blueprint charges forward with a more panicked approach to addressing the supposed misery to come. And the rhetoric from FTC Chair Khan and other agency heads suggests a lot more scrutiny and strong-arming is likely on the way as a result.

The White House meeting with AI execs this week also foreshadows the rise of jawboning as an indirect regulatory strategy for algorithmic issues, whereby policymakers seek to extract concessions from innovators and do an end-run around normal rule-making procedures and constitutional constraints. Jawboning was a popular strategy employed in the past against analog-era information platforms, especially radio and television broadcasters. For many decades, Federal Communications Commission (FCC) regulators used the agency’s open-ended “public interest” authority to steer or curb the marketplace choices made by media companies. This process became so common that it came to be known in the industry as “regulation by raised eyebrow,” or “regulatory threats that cajole industry members into slight modifications” of their programming, as one scholar defined it.

We should expect jawboning and indirect regulation by raised eyebrow to become a regular part of the AI policy landscape — and to be used by both parties to suit their priorities. Today it’s the White House making general pleas for “voluntary actions” or concessions; tomorrow it’ll be every member of Congress demanding that their pet issues be addressed — safety, security, “disinformation,” etc. No formal laws will be passed in the process; it’ll all just be regulation-by-intimidation. “You’ve got a real interesting algorithm there… It’d be a shame if anything happened to it.” That’s the implicit logic of how regulation-by-intimidation works.

People have already forgotten about the last White House summit on algorithmic control: President Trump’s 2019 “social media summit,” in which he tried to use the power of the bully pulpit to intimidate private social media platforms. Many members of Congress in both parties play the same game at every hearing, airing heated grievances and making loud threats against each tech exec hauled before their committees. We should expect a lot more of this sort of show-trial behavior directed at AI innovators in coming months and years in the name of addressing algorithmic fairness and discrimination, even as Democrats and Republicans define those terms in diametrically opposed ways.

Meanwhile, I expect Khan’s FTC will take full advantage of the enormously broad authority the agency possesses to address anything she decides is algorithmically “unfair or deceptive.” And the rhetoric she’s been using lately, along with her agency’s aggressive actions toward tech companies during her tenure, suggests that she will try to browbeat AI innovators into submission using extreme tactics — such as reopening old cases to re-litigate long-settled matters (which just happened yesterday with an old Facebook case, in fact). At the same time, along with others in the Biden administration and Congress, Khan will continue to seek to turn public opinion strongly against any continued algorithmic freedom to innovate without developers first seeking a blessing from bureaucrats at every step in the process. AI innovators will essentially be treated as guilty until proven innocent, as her New York Times column hinted.

Was the Digital Revolution Worth It?

Indeed, a broader problem with what is happening in the Biden administration on this front is that the tone of its recent statements about AI is rooted in a deeper disdain for America’s digital innovators and the modern data-driven economy more generally. This has ramifications for the sort of innovation culture that America will create for new data-driven and algorithmic sectors and technologies going forward.

Innovation culture refers to the “various social and political attitudes and pronouncements toward innovation, technology, and entrepreneurial activities that, taken together, influence the innovative capacity of a culture or nation.” Entrepreneurs and venture capitalists take their cues from policymaker attitudes and pronouncements, and then adjust their decisions accordingly. In a world in which innovators and investors can move around more easily than in the past, the tone policymakers adopt toward new sectors and innovations matters deeply.

Khan’s desire to ensure that “history doesn’t repeat itself” suggests she believes that the Digital Revolution was a big mistake. But it’s worth remembering what that history actually gave us. The digital economy came about thanks to a bold bipartisan vision for the internet and e-commerce crafted twenty-five years ago by the Clinton administration and a Republican-led Congress. Through a series of wise policy decisions in the mid-1990s, President Clinton and congressional lawmakers chose to allow America’s information and communications technology sectors to break out of the innovation cage that had constrained entrepreneurialism and consumer choice during the earlier analog era, in which information and media providers were very heavily regulated. That regime greatly limited innovation, competition, and choice.

Thanks to those policy decisions, which Khan and others in the Biden administration now decry and want to reverse, America’s computing and digital technology sectors became “a growth powerhouse” that drove “remarkable gains, powering real economic growth and employment,” as Brookings scholars have summarized. The results were staggering. According to the Bureau of Economic Analysis, in 2021, “the U.S. digital economy accounted for $3.7 trillion of gross output, $2.41 trillion of value added (translating to 10.3 percent of U.S. gross domestic product), $1.24 trillion of compensation, and 8.0 million jobs.”

In the process of generating all that economic activity, U.S. tech companies became household names across the globe, attracting talented immigrants and massive venture capital investment to our shores. Almost half of the world’s 100 largest digital tech firms by employment are U.S. companies, and 18 of the world’s top 25 tech companies by market cap are based in the United States.

And then there’s the most important Digital Revolution success story of all: It helped us move from a world of information poverty to one of information abundance. Humanity has never had more communications and media options, and every one of us now has the ability to be a one-person printing press, publishing house, and broadcaster all wrapped into one. This represents a remarkable achievement, and America’s enlightened policy vision helped our innovators accomplish it.

Khan and others would have us believe it all wasn’t worth it because there have been some problems along the way. This is true, but Americans — and the rest of the world, for that matter — would never have enjoyed the many benefits of the Digital Revolution if the U.S. had simply imposed the old regulatory order on the internet and emerging digital technologies. It is easy to take all those benefits for granted today, but we know from history that citizens were denied information choices due to misguided bureaucratic policies and heavily politicized regulatory processes of the past. Were we really better off in an era of rotary-dial phones, a few radio and TV stations, and a single local newspaper or library (assuming you had one at all)? The only reason we were able to move beyond that state of affairs is that policymakers made a bold decision to give technological freedom of choice a fighting chance.

Officials like Khan now insist that only through preemptive regulatory steps can we head off problems associated with new algorithmic systems. Basically, the Biden administration wants America to adopt the European model of digital technology regulation, and Khan’s FTC is already positioning itself as a de facto Federal AI Commission, looking to assert broad-based, precautionary-principle-based control over the entire computational economy. Of course, that regulatory regime has been an unmitigated disaster for Europe and left the continent devoid of major players in the global digital tech arena. Many of the best inventors and biggest investors bolted Europe for the U.S. or other countries long ago to gain the freedom to develop exciting new products. That’d be a horrible model for America to adopt for algorithmic systems.

What Next?

So, what happens next? I’d guess that Khan’s FTC and other agencies will go on the offensive in a major way in coming months, concocting broad theories of harm in an effort to unilaterally and preemptively regulate algorithmic innovations. For some thoughts about what’s to come, read these recent essays from FTC veterans Alden Abbott (“Four Horsemen of the Bureaucratic Apocalypse Come for AI”) and Daniel J. Gilman (“Artificial Intelligence Meets Organic Folly”).

It’s unclear what will happen with the “public evaluation of AI systems” that the White House today got major AI companies to agree to undertake on an evaluation platform developed by Scale AI at the AI Village at DEFCON 31 in August, but Politico predicts “An AI reckoning coming in August.” I suspect that “reckoning” will be an intensified political backlash once the evaluation tells us nothing more than we already know about current generative AI systems. And then the politicians will say, “Well, what else do you have for us?” By that time, the Department of Commerce will be wrapping up its “AI accountability” proceeding and suggesting it has created a record justifying full-blown mandatory algorithmic audits for developers — likely enforced through some sort of FTC process. It’s really just a question of the nature and extent of the mandates involved under the guise of “algorithmic transparency” or “explainability.”

There are also other algorithmic regulatory angles in play involving issues such as antitrust, privacy, kids’ safety, and anti-discrimination policy. Khan has been revealed to be actively conspiring with European Union officials to regulate American tech companies under foreign competition law, and she will likely also look to work with European officials to force even more extraterritorial regulation on American firms through the E.U.’s 2018 General Data Protection Regulation and its new AI Act, which envisions aggressive global controls for algorithmic systems. Online safety concerns are also driving a growing regulatory push at the federal and state level, and policymakers are increasingly considering how to attack that issue using algorithmic explainability as a rationale to evade First Amendment scrutiny. Finally, broader concerns about “algorithmic fairness” and discrimination are already driving policy activity in the realm of employment law, with regulatory efforts there advancing at the federal, state, and local levels.

To be clear, policymakers should continue to study AI risks, but the process shouldn’t be driven by a heavy-handed, innovation-limiting federal regulatory regime based entirely on fear-based thinking and worst-case Chicken Little scenarios. A culture of safety-by-design for AI is sensible. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society. While some safeguards will be needed to minimize certain AI risks — and many laws and regulations already exist to address problems that develop — a flexible, bottom-up governance approach can address algorithmic concerns without creating overbearing, top-down mandates that would hinder life-enriching and even life-saving AI innovations.

The ramifications of these policy choices are also significant because AI and algorithmic systems will play a crucial role in America’s global competitive advantage and relative geopolitical power in coming years. With China becoming a major competitor in advanced information technology sectors, and other nations racing to be at the forefront of the unfolding computational revolution, the United States must create a positive innovation culture if it hopes to prosper economically and ensure a safer, more secure, and continuously vibrant technological base.

___________

Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer