Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing

Adam Thierer
8 min read · May 16, 2023

The code cops are coming for artificial intelligence (AI) and proposing a massive new national and international information control system for digital technologies.

The U.S. Senate Judiciary Committee held a hearing this morning on, “Oversight of A.I.: Rules for Artificial Intelligence,” in which members and witnesses sketched out a blueprint for a comprehensive regulatory regime for AI. Some of the potential elements of this new regime that were floated at today’s hearing include:

· A new federal regulatory bureaucracy akin to a “Food & Drug Administration for algorithms;”

· A federal operating license for just about anything that lawmakers consider “dangerous,” enforced by the new agency;

· “Nutritional labels” for AI, or some sort of mandated disclosure statement about how every algorithm works;

· Mandatory algorithmic audits (conducted by third parties or agencies) after product release;

· A move to harmonize US law with European Union AI rules or other global AI regulatory regimes;

· An international regulatory body of some sort to oversee global AI development, perhaps through the United Nations;

· Other amorphous prior restraints and limitations on the use of various algorithmic innovations.

Taken together, this represents the beginning of a “Mother, May I?” permission slip-based regulatory regime for computational technology that will slam the brakes on AI innovation and simultaneously open the door to a digital information control regime. It will decimate America’s innovation culture and put life-enriching innovation at risk by treating AI developers as guilty until proven innocent under a legal standard of “unlawfulness by default” for algorithmic applications.

It is essential that the United States be a leader in AI to ensure our continued global competitive standing and geopolitical security. We must avoid overly burdensome technology regulations that could undermine AI's benefits to the public and the nation as a whole; there is a compelling public interest in ensuring that algorithmic innovations are developed and made widely available to society.

Yet, everything about today’s Senate AI hearing suggests that America is ready to ignore all that and head in precisely the opposite direction with a costly, convoluted, top-down regulatory regime for one of the most important technologies of our lifetime.

The hearing included references to the atom bomb and various dystopian scenarios, featuring an endless array of worst-case claims about job loss, consumer manipulation, election fraud, and other “disinformation” issues. At one point, Sen. John Kennedy suggested that these discussions should begin with the assumption that AI wants to kill us. The many potential benefits associated with AI, machine learning, and robotics were given only passing consideration by a couple of members.

Thus, it is clear that “AI [is] in Washington’s crosshairs” now and a lot more regulatory intervention is on the way. The Biden administration is simultaneously moving to assert greater control over AI and algorithmic systems through a variety of agencies and efforts. Importantly, today’s session happened against the backdrop of a much broader congressional effort to demonize the internet, social media, and digital technologies in general. Those attacks have softened the ground for a strike on AI and algorithms, which could constitute a backdoor way for Congress to regulate tech companies and digital platforms while trying to avoid First Amendment scrutiny. The clear implication of all this is that basically every member of Congress thinks the internet sucks and that, if we had to do it all over again, we should have just imposed the old broadcast media regulatory model on digital tech, complete with licenses and censorship of speech.

At today’s hearing, Sen. Chris Coons specifically drew the linkage between AI and social media, arguing that Congress “cannot afford to be as late to regulating generative AI” as it was with earlier digital technologies and platforms. He and other Senators implied that the days of internet freedom were numbered and that AI innovators would be brought to heel next.

Sadly, the witnesses generally went along with most of this, and no one bothered seriously standing up for the freedom to innovate — or documenting the amazing success that the U.S. has enjoyed in the digital economy. The New York Times noted that, while most tech congressional hearings “can best be described as antagonistic” — an understatement, to say the least — today’s AI hearing was practically a lovefest. That’s primarily because the witnesses, and especially OpenAI CEO Sam Altman, were quick to jump on the regulatory bandwagon. Indeed, in over 30 years of covering tech policy in Washington, I cannot recall seeing any technology executive throw his entire industry under the bus quite as fast as Sam Altman did at today’s AI hearing. He sketched out a plan for wide-reaching AI control that would decimate algorithmic innovation and digital competition. Unsurprisingly, Senators were falling all over themselves to praise him for his capitulation.

Sen. Dick Durbin referred to the session as “historic” in terms of the way tech firms were coming in and begging for regulation. Durbin said firms were telling him and other members, “Stop me before I innovate again!” — which gave him great joy — and the only thing that mattered now, he said, is “how we are going to achieve this.” He then also endorsed both a national and a global regulatory authority for AI, along with plenty of other regulations.

That’s a horrifying thing to be happy about, yet it represents the current state of thinking in Congress about digital technology matters. Technological stagnation is now welcomed or recommended by our national leaders.

Sen. Richard Blumenthal, who chairs the subcommittee, noted that this would be the first of many coming hearings on AI policy issues, and said he looked forward to exploring algorithmic regulation on many other fronts. Sen. Josh Hawley, the subcommittee’s ranking Republican, piled on and suggested that the best course of action was to open the floodgates of litigation and invite trial lawyers to file lots more lawsuits to intimidate and hobble AI developers. “The threat of litigation is a powerful tool,” he said with a smile.

One witness, Christina Montgomery of IBM, deserves some credit for pushing back a bit against some of this insanity. She highlighted how a more targeted, risk-based approach to AI governance is already developing, and that AI innovators have taken steps to bake in best practices by design. [Here’s a 20,000-word study on “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” in which I document all that activity as well as all the state capacity that already exists to address AI risks.] But the overall thrust of today’s session was too caught up in techno-panicky talk and extremist proposals to be concerned with such facts and more reasoned approaches to AI governance.

Perhaps the only bright spot during the hearing was the fact that several lawmakers and witnesses understood that regulatory capture, which occurs when special interests co-opt policymakers or political bodies to further their own ends, could become a real problem with a new AI regulatory body. Sen. Blumenthal was even willing to concede that there was a real “monopolization danger” associated with new agency regulations that could exclude new competition.

Of course, those aren’t the only downsides of a hypothetical “FDA for algorithms.” Lawmakers could have also identified the deleterious impact of preemptive, precautionary regulation on consumer choice and the creation of life-enriching goods and services. Over-regulation of AI will also have profound ramifications for America’s geopolitical competitiveness and security. China and other nations are hoping not to get left in the dust by America again in the next great technological revolution. It would be shocking if America shot itself in the foot as that race got underway.

If today’s hearing was any indication, however, that appears to be exactly where America is heading. Even worse than the idea of an FDA for algorithms is the proposed global regulatory authority to somehow monitor global AI innovations, possibly through the United Nations. A quick reminder: the UN recently allowed North Korea (the world’s biggest nuclear pariah state) to take over as head of the UN Conference on Disarmament, and then let Russia assume the presidency of the UN Security Council after it invaded Ukraine. So, how’s this new global regulatory body for AI going to play out once countries like those — plus China, Iran, and others — start making demands for “harmonizing” global algorithmic speech and commerce standards?

This is a truly terrible, destructive model for AI that must be rejected before we commit technological suicide as a nation and squander the amazing benefits that will accompany the Computational Revolution that awaits us. We have somehow forgotten that the greatest of all AI risks is shutting down AI altogether.
