The Battle over AI Regulation Will End in a Big Fight over Transparency & Audits

Adam Thierer
Apr 6, 2024 · 9 min read


For those of us who are fighting for the freedom to innovate with artificial intelligence (AI) and pushing back against the growing “war on computation,” it has been a rough couple of years. Radical regulatory proposals have proliferated faster than we could have ever imagined, leaving us scrambling to fend off an endless firehose of kooky ideas (pauses and bans, big new general-purpose agencies or international control systems, new licensing regimes, surveillance and tracking regimes, etc.).

That being said, I have always been confident that we’ll be able to beat back the craziest proposals for regulating AI, but then we will eventually be left with a more difficult fight over what is meant by algorithmic transparency or “explainability” and whether those things can or should be mandated by law.

We’ve been confronted with transparency-based regulations in many previous contexts and they are often the hardest things for innovation defenders to push back against. Transparency always sounds great, but the devil is very much in the details. Done improperly, transparency requirements can have many unintended consequences, especially if such mandates arrive in the form of full-blown algorithmic audits.

A Broken Law Becomes a Model for AI Policy

I wrote about algorithmic transparency and explainability issues in detail in my study, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (mostly around pgs. 27–33), and I cite a lot of the most relevant academic literature on the topic there. I excerpted those sections and added some more context in my essay, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments.”

NEPA stands for the National Environmental Policy Act, a 1969 law that requires formal environmental impact statements for major federal actions “significantly affecting the quality of the human environment.” Many state governments have their own versions of NEPA. The law was created with the best of intentions but is now widely seen as a major impediment to progress on several important fronts, including for many projects and programs that would actually have significant environmental benefits. Analysts have thoroughly documented the enormous paperwork costs and project delays associated with NEPA. I summarized some of those findings in my earlier report, noting how:

NEPA assessments were initially quite short (sometimes less than 10 pages), but, today, the average length of these statements exceeds 600 pages and can include appendices that push the total over 1,000 pages. Moreover, these assessments take an average of 4.5 years to complete; some have taken 17 years or longer. What this means in practice is that many important public projects are not completed, or they take much longer to complete at considerably higher expenditure than originally predicted. For example, NEPA has slowed many infrastructure projects and clean energy initiatives, and even Democratic presidential administrations have suggested the need to reform the assessment process due to its rising costs. [see study for sources]

Despite these problems, many tech policy scholars and policymakers are now calling for a NEPA-like model for algorithmic services, and various AI legislative and regulatory proposals are already being floated that would build on the NEPA framework. I summarized and critiqued those proposals in my filing to the National Telecommunications and Information Administration (NTIA) in the agency’s “AI Accountability Policy” proceeding last year. A major focus of that NTIA proceeding was how AI transparency / explainability might somehow be enforced through algorithmic impact assessments or audits.

The NTIA just wrapped up that proceeding and published a big report on March 27th. While the report is murky regarding how far the Administration can push such AI auditing mandates, the agency says on pg. 68: “We recommend that future federal AI policymaking not lean entirely on purely voluntary best practices. Rather, some AI accountability measures should be required, pegged to risk.” The agency goes on to say that “work needs to be done to implement regulatory requirements for audits in some situations,” and then outlines some ideas for doing so. This is in line with the Biden Administration’s ongoing push to encourage regulatory agencies to steadily expand their efforts to influence algorithmic innovation both directly and indirectly.

To understand what’s going on here, this new 77-page NTIA report must be read against the backdrop of President Biden’s earlier 100+ page AI executive order from 2023 and the Administration’s 73-page “AI Bill of Rights” from 2022. As I have noted many times, taken together these documents basically serve as a green light for regulatory agencies (especially the Federal Trade Commission) to expansively explore more controls for AI systems.

The Administration’s master plan on AI regulation is basically to just have agencies aggressively blaze their own trail and ignore Congress. The real wild card here is whether and how the Biden Administration will seek to convert the voluntary National Institute of Standards and Technology (NIST) risk management frameworks into more formal regulatory requirements, including AI audits or other ambiguous “accountability” requirements. Senator Mark Warner and others in Congress want to mandate that, and some state laws even encourage compliance with the NIST framework. But the Biden Admin isn’t waiting around for anyone to authorize anything; they’re just trying to do it all unilaterally.

The Coming “Army of AI Auditors” (and Endless Paperwork Hell)

It is worth noting that, before the new NTIA report launched, NTIA chief Alan Davidson called for “a system of AI auditing from the government” and suggested the need for “an army of auditors” to get the job done. We now appear well on our way to getting that bureaucratic AI army of auditors, and the ramifications of all that meddling could be quite profound if it undermines important innovations in AI and machine learning (ML).

What all this adds up to is a lot more compliance requirements and bureaucratic meddling — likely with a heavy dose of regular jawboning from regulators and other Administration officials — that will require algorithmic innovators to address any number of pet peeves people have before launching their AI/ML-enabled products. Much like the NEPA process, ‘vetocracy’ (veto checkpoints pushed by special interests and bureaucrats) and endless delay will become the new norm — and the enemy of progress. Everything will grind to a halt as innovators are forced to run the gauntlet of hearings, review boards, special interest pleadings, and most of all paperwork, paperwork, PAPERWORK! Again, just go take a hard look at the NEPA process in action to get a preview of what’s to come if all this gets mandated for AI in a top-down fashion.

I’ve made it clear in my writing that I am not necessarily opposed to AI audits or impact assessments so long as they are kept mostly in the realm of voluntary best practices driven by multistakeholder processes (like the NIST AI Risk Management Framework) and, most importantly, are kept very context-specific / sector-specific (instead of broad-brush general-purpose audits). Of course, that’s not going to be enough for the many regulatory advocates and government officials who want these things mandated in some fashion.

The NTIA’s new report starts pushing for audits and various types of algorithmic transparency but is somewhat vague on details. The document does float some ideas, however. “Government may also need to require other forms of information creation and distribution, including documentation and disclosure, in specific sectors and deployment contexts (beyond what it already does require),” the report concludes. This is basically a sketch for a federal AI auditing regime in all but name. And the report foreshadows the coming of Davidson’s “army of auditors” with amorphous recommendations about a national registry of disclosable AI system audits, international coordination on “alignment of inspection regimes,” and “pre-release review and certification for high-risk deployments and/or systems or models,” among other proposals.

We’d basically be importing the failed European model of regulation into America if the Biden Administration gets its way.

AI Audits Will Be Demanded in Exchange for Federal Preemption

Meanwhile, some legislative proposals at both the federal and state level would take NIST’s voluntary AI Risk Management Framework and give it enforcement teeth of some sort, including by making it the basis of ex ante or ex post impact assessments or audits (or both). The tech industry is very torn on these ideas, but many tech trade associations and major companies have made their peace with at least AI impact assessments, although they are sometimes cagey about what sort of mandates they can live with and who should enforce them.

Many algorithmic developers are rightly worried about a patchwork of state and local AI auditing or impact assessment requirements along the lines of what New York City has already required for automated hiring tools. Many other states (most notably California) are toying with similar requirements. This growing patchwork of algorithmic transparency / explainability regulations will force more and more AI developers to come to Washington begging for preemption, something that I have also argued is very much needed. But preempting state AI regulation is going to be quite challenging because even defining “AI” is a contentious matter. One really needs to go at it on a case-by-case or sector-by-sector basis to get preemption done right.

Regardless, when any effort is made at the federal level to advance preemption language, it will open the door for other types of regulatory mischief to be bundled into it as the price of getting it over the finish line. Recall that in the debate over the American Data Privacy and Protection Act of 2022, the comprehensive federal privacy proposal that would preempt state privacy regulations, regulatory advocates managed to get language included that would require large data holders to perform an annual algorithm impact assessment that includes a “detailed description” of “the design process and methodologies of the covered algorithm,” as well as the “steps the large data holder has taken or will take to mitigate potential harms from the covered algorithm.”

A baseline federal privacy bill still has not passed, partially because AI policy has sucked all the oxygen out of the committee rooms previously considering the issue. But the effort to craft one continues, and that quid pro quo could become the template for what happens if Congress gets serious about preempting state and local AI regulations. In other words, broad-based audits or impact assessments will become the price of getting such an AI preemption bill done. A lot of the largest tech companies and trade associations will be willing to make serious compromises to get preemption, even if it entails the enormous complexity and compliance costs associated with an EU-like regulatory regime for AI. Needless to say, smaller innovators won’t have much of a say in any of this, and they’ll be absolutely crushed by the compliance burdens associated with the paperwork hell to come.

Nobody Has Any Idea What These Terms Even Mean

Keep in mind that absolutely nobody has yet figured out exactly how to even define what is meant by algorithmic “explainability.” As I pointed out in my earlier work,

algorithmic auditing will always be an inexact science because of the inherent subjectivity of the values being considered. Auditing algorithms is not like auditing an accounting ledger, where the numbers either do or do not add up. When evaluating algorithms, there are no binary metrics that can quantify the scientifically correct amount of privacy, safety or security in a given system.

Meanwhile, legislatively mandated algorithmic auditing could give rise to the problem of significant political meddling in speech platforms powered by algorithms. This is the so-called “weaponized government” problem that we hear so much about today, and AI auditing by government bureaucrats will just escalate this into an even bigger political shitstorm.

There are also various intellectual property considerations that will complicate AI auditing and explainability efforts more generally. If government forces AI innovators to open their algorithms up for some sort of public inspection, it could undermine the primary source of value some of them have, because their code is central to their competitive advantage. Even if third-party auditors were conducting the AI audits pursuant to government mandates, that would still open the door somewhat wider not only to the theft of trade secrets, but also to cybersecurity vulnerabilities.

Regardless, AI transparency and auditing will eventually become the regulatory endgame in the United States. It’ll take us some time to get there, but you can bank on this being the real fight to come.

_______

Additional Reading:


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer