Blumenthal-Hawley AI Regulatory Framework Escalates the War on Computation

Adam Thierer
11 min read · Sep 14, 2023


It seems that each week now brings more radical ideas for regulating artificial intelligence (AI) as America inches closer to an all-out government war on computation and automation. Four AI hearings are taking place in Washington this week alone, including a closed-door session that Senate Majority Leader Chuck Schumer convened today with top tech CEOs and Washington insiders to kick off his new series of “AI Insight Forums.” Two other AI hearings ran simultaneously in the Senate Commerce and Judiciary committees on Tuesday afternoon, with policymakers jockeying for turf and media attention in the escalating AI policy wars.

At these hearings and political events, some lip service is always paid to the benefits of AI and machine learning, but those niceties usually get pushed aside quickly as policymakers pivot to the familiar Chicken Little playbook and trot out a parade of hypothetical horribles about AI pulled right from the pages of dystopian sci-fi movie plots. This impulse is now culminating in concrete regulatory proposals for sweeping governmental controls on AI and computation.

At yesterday’s Judiciary Committee hearing, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the chair and ranking member of the Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new legislative framework for AI policy. Their effort would take America 50 years backward by essentially recreating Ma Bell-era regulatory controls for modern information and computing technologies. Their proposal includes a new federal AI agency, top-down licensing, and sweeping auditing and transparency mandates.

Just in case any AI innovation actually survives this onslaught of new bureaucratic micromanagement, the Blumenthal-Hawley approach also proposes expanded liability and new private rights of action after the fact, and it would deny AI innovators any Section 230 protections. These proposals would open the litigation floodgates and allow trial lawyers to harass algorithmic entrepreneurs with endless waves of frivolous lawsuits.

Vetocracy Comes to the World of AI

The Blumenthal-Hawley proposal has many other troubling regulatory components. For example, their new AI oversight body would be given the power to “monitor and report on technological developments and economic impacts of AI, such as effects on employment.” This is an open invitation for special interests and policymakers to protest any workplace automation they disfavor. Consider how Hawley used yesterday’s hearing to lament the market-oriented economic policies of the past few decades and then used that as a launching point for a diatribe against technological automation more generally. Hawley said he didn’t want AI to be an accelerant for job losses and suggested that he wouldn’t mind seeing government take steps to stop automation technologies from being used for even routine tasks, like fast-food ordering at drive-through restaurants. Hawley basically wants government to stop the clock on progress toward a more productive economy. Time to shut down bank ATMs and give those jobs back to real people! And lock up Siri and Alexa while we’re at it!

This sort of anti-automation thinking could eventually devolve into pure legislative Ludditism, and we could get more proposals along the lines of what former New York City mayor Bill de Blasio floated a few years back with his Federal Automation and Worker Protection Agency to “oversee automation and safeguard jobs and communities.” Or we could get the sort of “robot taxes” that de Blasio and Bernie Sanders have endorsed. The Blumenthal-Hawley plan opens the door to this sort of mischief and could create many new political veto points in the technological design and diffusion process.

The Blumenthal-Hawley framework contains other amorphous ideas that sound innocuous on paper (“safety brakes,” limits on kids’ use of generative AI, and public databases of AI model information) but that would lead to a host of unintended and quite deleterious consequences once translated into regulatory edicts. Some witnesses yesterday encouraged Blumenthal and Hawley to go even further with top-down mandates. Boston University law professor Woodrow Hartzog called for sweeping “full-measure” regulations that would have Blumenthal and Hawley’s AI agency preemptively dictate how systems are designed, with federal bureaucrats micromanaging AI design decisions up front.

That is a terrible idea and, as I noted in a recent essay, we need precisely the opposite principle to guide AI policy: focus on algorithmic outputs/outcomes, not on system inputs or design. “A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it,” I argued. This principle is the key to balancing AI innovation and safety.

By contrast, putting bureaucrats in charge of controlling all system inputs is a recipe for technological stagnation and politicization. It would introduce endless veto points in the algorithmic design process and basically turn us into Europe, where innovation goes to die under a mountain of similar precautionary principle-based mandates.

Undermining US Sovereignty and Security

Put simply, the Blumenthal-Hawley AI blueprint endorses comprehensive computational control by government. Their regulatory framework favors compliance over competition, litigation over innovation, and economic micromanagement over market dynamism. This represents the recentralization of information and communications technology and a return to the sort of control culture that America started moving away from several decades ago. It would also give China and other nations a chance to advance their AI capabilities at a time when America should be kicking into the next gear.

While Blumenthal and Hawley plot ways to limit America’s computational capabilities, China and the rest of the world aren’t sitting still. On the list of the 500 most powerful commercial computer systems in the world, “China continues to rise and now sits at 227, up from 219 six months ago. Meanwhile, the share of U.S.-based systems remains near its all-time low at 118.” Russia also recently announced major expansions of its AI supercomputing capabilities. And the UAE government just launched Falcon 180B, an expanded version of its open-source Falcon LLM that is 2.5 times larger than Meta’s LLaMA-2, a model that came out just two months ago yet has already been dethroned as the most powerful open-source AI model in the world.

(Chart: TOP500 system counts over time. Source: https://www.top500.org/statistics/overtime/)

While it is certainly possible for U.S. policymakers to tie the hands of our domestic AI developers, there is no way for them to control what happens in China, Russia, the UAE, or most other nations. If America gives up, those nations will race ahead, and no international “AI arms control” treaty is going to bind them in a meaningful way. That is wishful (and dangerous) thinking of the highest order, and it would leave America less safe in the long run. That is why the debate over computational control has important ramifications for America’s sovereignty and security.

If we hope to prosper economically and build a safer, more secure technological base that prepares the nation for the computational revolution, approaches like Blumenthal and Hawley’s innovation-killing AI blueprint will need to be rejected in favor of more moderate approaches.

Other Frameworks & Existing Regulatory Capacity

Luckily, there are better policy frameworks out there. Although it did not make much news, Senator Bill Cassidy (R-LA), ranking member of the Senate Health, Education, Labor, and Pensions Committee, just released a new white paper on a possible AI policy vision for Congress. It offers a more thoughtful and balanced analysis of the workforce, education, and health care issues surrounding AI. “A sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation,” he rightly notes. “Top-down, all-encompassing frameworks risk entrenching incumbent companies as the perpetual leaders in AI, imposing an artificial lid on the types of problems that dynamic innovators of the future could use AI to solve,” Cassidy said. “Instead, we need robust, flexible frameworks that protect against mission-critical risks and create pathways for new innovation to reach consumers.”

That’s a better approach to start with because it doesn’t begin with the presumption that AI innovators are guilty until proven innocent. Cassidy’s approach also takes into account the extensive array of existing regulatory laws and agencies already out there in our massive federal government, with its 2.1 million employees, 15 cabinet departments, 50 independent federal commissions, and 434 federal agencies. Many of these agencies are already extremely active in regulating algorithmic systems or considering new rules for AI in their areas. Before we add more and more layers of regulation and bureaucracy to the mix, we ought to give all that existing regulatory capacity a chance. It could be that, in some important ways, we are actually over-regulating AI already (as with drones and AI-enabled medical devices).

Meanwhile, in late June, Reps. Ted W. Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA) introduced the National AI Commission Act, a bipartisan effort to create a national commission focused on the question of regulating AI. And in late July, members of the Congressional Artificial Intelligence Caucus introduced the Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023 (CREATE AI Act), which would fund the National Artificial Intelligence Research Resource (NAIRR) as a shared national research infrastructure providing AI researchers and students “with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.”

These are more sensible places to begin debating AI policy compared with the “ready, fire, aim” approach envisioned in the Blumenthal-Hawley effort. To read more about other balanced approaches to AI policy, examine the frameworks I discussed in my big April white paper on “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” Also check out the final report of the Chamber of Commerce AI Commission, which I served on. Finally, this brand new AI policy framework from the Consumer Technology Association is also quite good. There are many other reasonable frameworks and policy ideas out there.

Will We Be a Nation of Takers or Makers?

In closing, let’s think about what sort of message the Blumenthal-Hawley AI policy framework sends to the next generation of citizens and technology innovators who are thinking about working or investing in AI, autonomous systems, robotics, quantum science, or other related computational and data science fields.

Walter Isaacson has a new biography out about Elon Musk that focuses on a crucial attribute of the entrepreneurial spirit that makes some individuals, organizations, and even nations succeed where others fail: a willingness to learn through trial and error. In a recent podcast conversation with Lex Fridman, Isaacson argued that, “I think we have more referees than we have risk takers. More lawyers and regulators and others saying ‘you can’t do that, that’s too risky’ than people willing to innovate.”

Isaacson is pinpointing a serious problem that J. Storrs Hall identified in his amazing recent book, Where Is My Flying Car? Hall explained how many brilliant young people in the U.S. today are opting for degrees in “critical studies,” law, and other soft sciences instead of going into engineering fields or similar “builder” sectors. Why? Because our nation too often stacks the deck against the latter by placing regulatory roadblocks in front of many makers, or threatening them with liability, especially in fields like transportation and energy. To change that, Hall argues that we need to push for “a world of makers instead of takers” if we want to prosper as a society. And that requires getting policy right by ensuring that public policy does not discourage creativity, entrepreneurialism, and risk-taking.

Consider, then, the sort of signal sent by the Blumenthal-Hawley framework and the escalating government war on computation more generally. The message is pretty clear: if you go into this field, you could get punished for it in the long run. For all the talk of pushing students to get more serious about STEM studies and to go out and change the world with science and engineering, our emerging policies for AI and computation send the exact opposite message: Stay away! You’ll likely get sued or regulated for your creative pursuits and efforts to improve society.

The Blumenthal-Hawley AI policy framework is one of the most dangerous regulatory proposals I’ve seen in over 30 years of covering emerging technology policy. If America walks down this path, we’ll be committing high-tech suicide by decimating our national technological base as we gradually move away from being a nation of AI makers.


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer