AI Concessions and Commitments in the Name of Democratic Accountability

Adam Thierer
Sep 28, 2023


There is a certain irony in how the crowd saying we need to “make artificial intelligence more democratic” appears to be the same group essentially tossing the actual democratic process straight out the window for AI policymaking. In the executive and legislative branches, we see strong-arming tactics and “voluntary concession / commitment” shakedowns becoming standard operating procedure for emerging tech policy.

Deputy National Security Advisor Anne Neuberger recently said that the Biden Administration’s forthcoming AI executive order “is incredibly comprehensive,” and called it “a bridge to regulation because it pushes the boundaries and is only within the boundaries of what’s permissible…by law.” Whether this latest AI executive order will be “a bridge to regulation” or actual regulation is a matter of semantics in a world where the White House has pushed “voluntary commitments” on leading AI developers in recent months. There is nothing all that voluntary about the process, of course. When the President of the United States calls major tech CEOs to the White House and leans on them to make concessions or promises on an issue, they tend to fall in line because they know there will be serious consequences for them if they don’t. Meanwhile, as I’ve documented here before, many agencies are gearing up to use their amorphous statutory authority to gradually become indirect AI super-regulators. They will do so by borrowing the White House’s shakedown tactics to demand “algorithmic fairness,” however they interpret it, to address their pet issues.

This process of concessions, extractions, and other executive branch actions and White House directives will largely fill the void left by legislative branch dysfunctionalism, especially as another government shutdown looms. At the same time, however, Senate Majority Leader Chuck Schumer will continue to use his “AI Insight Forums” to work around congressional committee heads and the formal hearing process to strong-arm concessions from innovators, even as a government shutdown is in effect.

It is certainly possible that some sensible AI best practices may emerge from all this jawboning and ongoing strong-arming by federal officials. But such efforts have many downsides due to the arbitrary, extra-constitutional nature of it all. The possibility of abuse of authority exists whenever policy is made behind closed doors or without the typical procedural safeguards that accompany more formal governance efforts.

For example, how do you feel about the White House using this process to force cloud companies to disclose their AI customers to the government? Because that’s probably about to happen due to White House pressure, perhaps through the forthcoming new AI executive order. Or how about the White House using the power of the bully pulpit to try to influence algorithmic content decisions on social media platforms? Oh, wait a minute, that process already got underway with Donald Trump’s White House “Social Media Summit” back in 2019! And do I need to tell you about all the pressure that tech companies came under in the early days of the Biden administration to censor certain types of COVID-related communications? “Disinformation” and “misinformation” have become a convenient excuse for both parties to jawbone and threaten private tech companies to fall in line with their policy priors. In the age of AI, we’ll see this process get super-charged and enforced indirectly without a single law ever passing.

Whatever one thinks about the entire “concessions and commitments” approach to technology governance, one thing should be abundantly clear: Policymakers who use it, or pundits who endorse it, cannot sell it to the citizenry as “democratic accountability for AI” when they are completely bypassing the actual democratic process of how laws are supposed to get made in this nation. Personally, I’d have a lot more respect for them if they would just be honest about the fact that they want to use these bullying tactics because they believe that the ends justify the means — and because they know those tactics often work where most other governance mechanisms fail. What these folks realize is that the sword of Damocles need not fall to be effective; it need only hang in the room just above the necks of those you want to influence. The implicit threat at each AI governance session today can be summarized thusly: “You’ve got a real nice algorithm there. It’d be a shame if anything happened to it.”

In the age of extreme legislative dysfunctionalism, extreme partisan rancor, and endless grievance politics, we can expect to see such mafioso tactics become the norm for most emerging tech policy matters, and especially for AI policy. But remember, kids… it’s all in the name of “making AI more democratic.”



Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer