What the Supreme Court Decisions This Week Mean for AI Policy

Adam Thierer
5 min read · Jun 28, 2024


Here’s a somewhat counter-intuitive and extremely cynical take on what this week’s big Supreme Court decisions in Loper Bright and Murthy v. Missouri mean for emerging tech law and regulation, and AI policy in particular.

Combine the fall of Chevron deference (via Loper) and the decision in the Murthy case earlier this week (greenlighting continued jawboning by public officials) and what you likely get for tech policymaking, and AI policy in particular, is an even more aggressive pivot by federal regulatory agencies towards the use of highly informal “soft law” governance techniques. The game now is played with mechanisms like guidances, recommended best practices, agency “enforcement discretion” notices, public-private workshops and other “collaborations,” “voluntary concessions,” multistakeholder working groups, and a whole hell of a lot more jawboning. The use of these mechanisms will accelerate from here thanks to these two Supreme Court decisions.

There is a lot of wishful thinking by some that the fall of the Chevron doctrine means that Congress will automatically (1) reassert its rightful Constitutional role as the primary lawmaker under Article I, (2) stop delegating so much authority to the administrative state, and (3) engage in more meaningful oversight of regulatory agencies. I wish! But I have to ask: Have you seen the sorry state of Congress lately — especially on tech policy?

Seriously, let’s get real for a moment. When was the last time that dysfunctional mess of an entity called the U.S. Congress was able to advance ANY serious technology policy measures or exercise ANY meaningful agency oversight? It is now almost impossible for serious tech legislating or oversight to get done at the federal level, especially with the relentless reality of the “pacing problem” haunting those efforts (meaning technology continues to evolve faster than the Legislative Branch’s ability to keep up). Waves of tech policy issues just keep cresting and crashing down on lawmakers’ heads faster and faster, crowding out one day’s tech policy concern with another concern the next day, and another the next week, and so on.

Thus, in the wake of Loper and Murthy, soft law and “kludgeocracy” — i.e., cobbling together policy quick fixes through messy, informal means — will be the new normal at the federal level for major emerging tech policy matters like AI policy. This is already the way things largely work today for AI policy inside major federal agencies like the FDA, NHTSA, and others. And the Biden AI Executive Order just eggs on agencies in this regard, encouraging them to think expansively about how to exercise their powers over algorithmic systems even in the absence of any clear Congressional authorization to do so. Meanwhile, Congress is barely paying attention to any of that growing agency activity around AI.

Soft law sometimes yields good results when agencies don’t go overboard and make a good-faith effort to find flexible governance approaches that adapt to meet pressing needs while Congress remains silent. In fact, I’ve offered positive examples of that in recent law review articles and essays. But I’ve also noted how this system can be easily abused without proper limits and safeguards. It is particularly concerning when free speech issues are in play and bureaucrats look to influence speech indirectly to avoid First Amendment scrutiny. Again, the Murthy decision just makes this threat an even bigger problem now, especially with so many agencies looking to encourage “algorithmic fairness” in AI systems, whatever that means.

The courts could perhaps come back later and try to check some of this over-zealous agency activity, but that would only happen many years later when no one really cares much anymore. The more realistic scenario, however, is that agencies just get better and better at this and avoid court scrutiny altogether. No longer will any AI-related agency policy effort contain the words “shall” or “must.” Instead, the new language of tech policymaking will be “should consider” and “might want to.” And sometimes it won’t even be written down! It’ll all just arrive in the form of speech by an agency administrator, commissioner, or via some agency workshop or working group. In the old days, we used to call this “regulation by raised eyebrow” when FCC officials would tell broadcasters what they thought should not be shown on prime-time TV. Usually the message got delivered by a Commissioner speaking at an industry event, or by a Chairman having a private meeting with a TV executive.

Is all this increasingly informal policy activity really constitutionally permissible (or even APA compliant)? Who knows; it all depends on context. But most affected parties will not challenge any of it. They’ll instead go with the flow and make a rough peace with regulators to make their problems go away, so they can move on and get back to making products. Congress won’t hold many hearings about it, and it’s unlikely that other downstream parties will bring cases challenging any of it, because they will lack clear standing — and we saw how big a deal that standing question was in the Murthy decision.

So, in closing, while I am happy that Chevron fell, and while I very much want Congress to step up and do its job — both in terms of making clear laws and exercising meaningful oversight of the administrative state — I just do not believe any of that is going to happen. Meanwhile, public choice theory teaches us that bureaucrats are self-interested actors who respond to incentives and always look for creative ways to retain and extend their authority. The perverse combined incentive of the Loper and Murthy decisions is that they encourage even more adventurous machinations by regulators through the expanded use of off-the-books techniques that won’t land their butts in court next time around at all.

This is the future of AI policy and emerging technology regulation in the U.S. Get used to it.


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer