AI Concessions and Commitments in the Name of Democratic Accountability
There is a certain irony in how the crowd insisting we need to “make artificial intelligence more democratic” appears to be the same group essentially tossing the actual democratic process straight out the window for AI policymaking. In the executive and legislative branches, we see strong-arming tactics and “voluntary concession / commitment” shakedowns becoming standard operating procedure for emerging tech policy.
Deputy National Security Advisor Anne Neuberger recently said that the Biden Administration’s forthcoming AI executive order “is incredibly comprehensive,” and called it “a bridge to regulation because it pushes the boundaries and is only within the boundaries of what’s permissible…by law.” Whether this latest AI executive order will be “a bridge to regulation” or actual regulation is a matter of semantics in a world where the White House has pushed “voluntary commitments” on leading AI developers in recent months. There is nothing all that voluntary about the process, of course. When the President of the United States calls major tech CEOs to the White House and leans on them to make concessions or promises on an issue, they tend to fall in line because they know there will be serious consequences for them if they don’t. Meanwhile, as I’ve documented here before, many agencies are gearing up to use their amorphous statutory authority to gradually become indirect AI super-regulators. They will do so by borrowing the White House’s shakedown tactics to demand “algorithmic fairness,” however they interpret it, to address their pet issues.
This process of concessions, extractions, and other executive branch actions and White House directives will largely fill the void left by legislative branch dysfunctionalism, especially as another government shutdown looms. At the same time, however, Senate Majority Leader Chuck Schumer will continue to use his “AI Insight Forums” to work around congressional committee heads and the formal hearing process to strong-arm concessions from innovators, even as a government shutdown is in effect.
It is certainly possible that some sensible AI best practices may emerge from all this jawboning and strong-arming by federal officials. But such efforts have many downsides due to the arbitrary, extra-constitutional nature of it all. The possibility of abuse of authority exists whenever policy is made behind closed doors or without the typical procedural safeguards that accompany more formal governance efforts.
For example, how do you feel about the White House using this process to force cloud companies to disclose their AI customers to the government? Because that’s probably about to happen due to White House pressure, perhaps through the forthcoming new AI executive order. Or how about the White House using the power of the bully pulpit to try to influence algorithmic content decisions on social media platforms? Oh, wait a minute, that process already got underway with Donald Trump’s White House “Social Media Summit” back in 2019! And do I need to tell you about all the pressure that tech companies came under in the early days of the Biden administration to censor certain types of COVID-related communications? “Disinformation” and “misinformation” have become a convenient excuse for both parties to jawbone and threaten private tech companies to fall in line with their policy priors. In the age of AI, we’ll see this process get super-charged and enforced indirectly without a single law ever passing.
Whatever one thinks about the entire “concessions and commitments” approach to technology governance, one thing should be abundantly clear: Policymakers who use it, or pundits who endorse it, cannot sell it to the citizenry as “democratic accountability for AI” when they are completely bypassing the actual democratic process of how laws are supposed to get made in this nation. Personally, I’d have a lot more respect for them if they would just be honest about the fact that they want to use these bullying tactics because they believe that the ends justify the means — and because they know those tactics often work where most other governance mechanisms fail. What these folks realize is that the sword of Damocles need not fall to be effective; it need only hang in the room just above the necks of those you want to influence. The implicit threat at each AI governance session today can be summarized thusly: “You’ve got a real nice algorithm there. It’d be a shame if anything happened to it.”
In the age of extreme legislative dysfunctionalism, extreme partisan rancor, and endless grievance politics, we can expect to see such mafioso tactics become the norm for most emerging tech policy matters, and especially for AI policy. But remember, kids… it’s all in the name of “making AI more democratic.”
Additional Reading:
- EVENT: Debating Frontier AI Regulation, Brookings, September 14, 2023. (My remarks begin at the 51-minute mark of the video.)
- Adam Thierer, “Blumenthal-Hawley AI Regulatory Framework Escalates the War on Computation,” Medium, September 13, 2023.
- Adam Thierer, Statement for the Record, Hearing on “The Need for Transparency in Artificial Intelligence,” September 12, 2023.
- Adam Thierer, “Will AI Policy Become a War on Open Source Following Meta’s Launch of LLaMA 2?” Medium, July 19, 2023.
- Adam Thierer, “The FTC Looks to Become the Federal AI Commission,” Medium, July 15, 2023.
- Adam Thierer, “Is Telecom Licensing a Good Model for Artificial Intelligence?” Medium, July 8, 2023.
- SLIDES: “AI Worldviews: Similarities & Differences” (July 2023).
- Adam Thierer, “The Schumer AI Framework and the Future of Emerging Tech Policymaking,” R Street Institute Real Solutions, June 27, 2023.
- Adam Thierer, “Is AI Really an Unregulated Wild West?” Technology Liberation Front, June 22, 2023.
- Adam Thierer, “The Most Important Principle for AI Regulation,” R Street Institute Real Solutions, June 21, 2023.
- INTERVIEW: “5 Quick Questions for AI policy analyst Adam Thierer,” interview for the Faster Please! newsletter with James Pethokoukis, June 12, 2023.
- PODCAST: “Who’s Afraid of Artificial Intelligence?” Tech Freedom TechPolicyPodcast, June 12, 2023.
- Adam Thierer, “Existential Risks & Global Governance Issues around AI & Robotics,” R Street Institute Policy Study №291 (June 2023).
- FILING: Comments of Adam Thierer, R Street Institute to the National Telecommunications and Information Administration (NTIA) on “AI Accountability Policy,” June 12, 2023.
- Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
- Adam Thierer, “The Many Ways Government Already Regulates Artificial Intelligence,” Medium, June 2, 2023.
- Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
- PODCAST: Neil Chilson & Adam Thierer, “The Future of AI Regulation: Examining Risks and Rewards,” Federalist Society Regulatory Transparency Project podcast, May 26, 2023.
- Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
- Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.
- PODCAST: “Should we regulate AI?” Adam Thierer and Matthew Lesh discuss artificial intelligence policy on the Institute for Economic Affairs podcast, May 6, 2023.
- Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
- Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
- Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study №283 (April 2023).
- Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.