NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments

Adam Thierer
Apr 23, 2023



Proposals to regulate artificial intelligence (AI) and machine learning (ML) technologies are multiplying rapidly. Many of the leading measures, whether merely floated or already advancing, propose some variant of algorithmic auditing and AI impact assessments as the primary regulatory mechanism. As I’ll note below, audits and impact assessments have been used in many other contexts, but one particularly relevant model can be found in the National Environmental Policy Act (NEPA), enacted more than 50 years ago. Many academics are now calling for a NEPA-like model for algorithmic services, and various AI legislative and regulatory proposals are already being floated that would build on the NEPA framework.

At the federal level, the U.S. Department of Commerce recently launched a proceeding on “AI accountability,” and in Congress, Senate Majority Leader Chuck Schumer (D-N.Y.) is rumored to be readying a new law to legislate “responsible AI.” Meanwhile, previous proposals (like the big baseline privacy bill and the “Algorithmic Accountability Act”) included algorithmic impact assessment requirements.

At the state level, state and local bills focus on addressing potential algorithmic bias in automated hiring, among other things. (See, for example, this proposed California bill and these new AI regulations from New York City.) These efforts all generally seek to advance AI transparency, explainability, and fairness, although those terms are almost never defined. Importantly, these efforts also generally require some sort of auditing or impact assessment mechanism to achieve those amorphous objectives. Meanwhile, through its forthcoming AI Act, the European Union is advancing a more aggressive type of ex ante auditing regime in the form of “prior conformity assessments,” which are like permission slips that algorithmic innovators will need before releasing new products.

The R Street Institute just released a major new report I authored on “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” which discusses the many AI safety efforts underway today and the wide variety of governance mechanisms already available to address concerns related to algorithmic risks. The 40-page report explains that what unifies almost all of these governance efforts is a desire to “align” algorithmic systems with human values by ensuring that they are as safe, secure, effective, and free of bias as possible. In practice, this means finding ways to: (1) “bake in” widely shared goals and values through a process of ethics-by-design; and (2) consider how to keep humans in the loop at critical stages of this process so that they can continue to guide and occasionally realign algorithmic systems as needed.

Inevitably, however, the critical issue that arises is exactly how these two objectives get translated into concrete governance guidelines and policy deliverables. Will the AI alignment process take on a highly regulatory character, or can it mostly remain in the realm of more decentralized governance approaches that involve best practices, private certifications, targeted ex post enforcement of existing policies, and various common law remedies? That question has already come to the fore with the many regulatory proposals being floated at the federal, state, and local level in the United States, and by many nations globally.

In my new R Street report, I unpack the various competing approaches and come down in favor of a more decentralized, iterative, and agile approach to algorithmic oversight. “The optimal governance approach for algorithmic systems should seek to establish certain best practices for development and use without foreclosing the important benefits associated with these technologies,” I argue. While I generally oppose making the Precautionary Principle the baseline default for AI innovations, it can nonetheless help guide the governance of these technologies in a broader sense — and without immediately defaulting to a highly regulatory, top-down, permission-slip based regime for all future algorithmic innovations.

What role, then, do algorithmic audits and AI impact assessments play in a more decentralized governance regime? That question is explored in my latest R Street Institute report beginning on page 27. I have reproduced that section in its entirety below. I have included all the footnotes, too, but please consult the full report for additional supporting documentation and analysis because this portion of the paper cross-references other parts of the study.

One thing I find astonishing, however, is that the scholars recommending a “NEPA for algorithms” spend almost no time reviewing the costs associated with the actual NEPA regulatory process itself. Perhaps they are unaware of that literature, or, more likely, they are simply choosing to ignore it.

Regardless, I’ll have more to say about AI audits and algorithmic impact assessments in future papers and an upcoming filing to the NTIA. In terms of mandates on this front, about the furthest I would be willing to go would be a soft mandate requiring that AI developers conduct occasional audits through independent (non-governmental) third parties and then make those reports publicly available (but without divulging important trade secrets or sensitive information). But there are lots of other caveats and considerations. Read on for more details. But make no mistake about it: AI audits and impact assessments will become the fundamental battleground in the world of artificial intelligence policy in coming months and years.


The following is excerpted from: Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study №283 (April 2023), pp. 27–33.

The Ins and Outs of Algorithmic Auditing and AI Impact Assessments

The professionalization of AI ethics could be further formalized through algorithmic auditing and AI impact assessments.[1] Other business sectors use audits and impact assessments to address safety practices, financial accountability, labor practices, human rights issues, supply chain practices and various environmental concerns. AI audits and impact assessments would require those who develop or deploy algorithmic systems to conduct reviews to evaluate how well aligned the systems were with various ethical values or other commitments.[2] These evaluations could be conducted before or after a system launch, or both. Governments, private companies and any other institution developing or deploying algorithmic systems could employ such audits or assessments.[3]

Many complexities exist, however. Algorithmic audits and impact assessments face the same sort of definitional challenges that pervade AI more generally. For example, what constitutes a risk or harm in any given context will often be a complicated and contentious matter. In some cases, the potential harm or impact on a group might be easier to assess, such as when so-called predictive policing algorithms are used by law enforcement officials or the courts to judge or sentence individuals from certain marginalized groups.[4] Governmental uses of algorithmic processes will always raise greater concern and require greater oversight because governments possess coercive powers that private actors do not.

The focus here, however, will be on how audits or assessments might be used to address private-sector uses of AI and ML that give rise to concerns about privacy, safety, security or bias. Many current academic proposals for algorithmic auditing regimes imagine that this must be a formal regulatory certification process, modeled after other existing regulatory regimes.[5] For example, some of the scholars advocating for these ideas want to use the National Environmental Policy Act (NEPA) as a model.[6] Passed in 1969, NEPA requires formal environmental impact statements for major federal actions “significantly affecting the quality of the human environment.”[7] Many states have adopted similar requirements.

U.S. policymakers are already floating bills that would mandate algorithmic auditing and impact assessments. One such measure, the Algorithmic Accountability Act of 2022, proposed that developers perform impact assessments and file them with the Federal Trade Commission (FTC); it would also create a new Bureau of Technology inside the FTC to oversee the process. The bill would further “require each covered entity to attempt to eliminate or mitigate, in a timely manner, any impact made by an augmented critical decision process that demonstrates a likely material negative impact that has legal or similarly significant effects on a consumer’s life.”[8] Similar algorithmic auditing requirements were also included in the American Data Privacy and Protection Act of 2022, a comprehensive federal privacy proposal that attracted widespread bipartisan support.[9] The proposed law would require large data holders to perform an annual algorithm impact assessment that includes a “detailed description” of both “the design process and methodologies of the covered algorithm” and the “steps the large data holder has taken or will take to mitigate potential harms from the covered algorithm.”[10]

The full scope of this sort of mandate remains to be seen. If enforced through a rigid regulatory regime, compliance with algorithmic auditing mandates would likely become a time-consuming, convoluted, bureaucratic process that significantly slows the pace of AI development. Unfortunately, most of the academic literature on algorithmic auditing fails to discuss the potential costs of the paperwork burdens and compliance delays such a regime would entail. Advocates of auditing mandates insist that “increasingly robust regulatory requirements” will mean that “the public will have greater confidence in using highly automated systems,” but they typically fail to consider whether those systems will even be developed if they are preemptively suffocated by layers of red-tape requirements and lengthy approval timetables.[11]

Consider the complexities of NEPA. Although well-intentioned, NEPA environmental impact statements create significant compliance costs and project delays.[12] NEPA assessments were initially quite short (sometimes less than 10 pages), but, today, the average length of these statements exceeds 600 pages and can include appendices that push the total over 1,000 pages.[13] Moreover, these assessments take an average of 4.5 years to complete; some have taken 17 years or longer.[14] What this means in practice is that many important public projects are not completed, or they take much longer to complete at considerably higher expenditure than originally predicted. For example, NEPA has slowed many infrastructure projects and clean energy initiatives, and even Democratic presidential administrations have suggested the need to reform the assessment process due to its rising costs.[15]

The author of Construction Physics referred to NEPA as an “anti-law” in the sense that it largely accomplishes the exact opposite of what the underlying statute intended.[16] Instead of creating predictability, the law “greatly reduces predictability and increases coordination cost and risk, because it’s so unclear what’s needed to meet NEPA requirements,” he says.[17] Politicization is also a serious problem because NEPA “seems easily captured by small groups with strongly held opinions” who stand ready to block almost all progress on important projects and, therefore, “is effectively a bias towards the status quo.”[18] Sadly, it is not clear that the law does anything to improve environmental outcomes because it makes it so difficult for many important initiatives to be completed in a timely or effective manner — assuming they are allowed to move forward at all. “The NEPA process is effectively a tax on any major government action, and like any tax, we’d expect it to result in less of what it taxes.”[19] NEPA’s laboriously complicated and slow permitting processes — and the failure of policymakers to address them — have led to questions about whether some in the environmental movement are concerned more about the process itself rather than concrete results. An Atlantic reporter suggested that “many people within the environmentalist movement are undermining the nation’s emissions goals in the name of localism and community input.”[20]

For similar reasons, applying the NEPA model to algorithmic systems would likely grind AI innovation to a halt in the face of lengthy delays, paperwork burdens and significant compliance costs.[21] Converting audits into a formal regulatory process would also create several veto points that opponents of AI could use to slow progress in the field. Many scholars today decry the United States’ growing culture of “vetocracy,” which describes the many veto points within modern political systems that hold back innovation, development and economic opportunity.[22] This endless accumulation of potential veto points in the policy process in the form of mandates and restrictions can greatly curtail innovation opportunities. NEPA-like algorithmic auditing mandates would create many such veto points within the product development process.

Algorithmic systems evolve at a very rapid pace and undergo constant iteration, with some systems being updated on a weekly or even daily basis. One AI analyst observed that “algorithms can be fearsomely complex entities to audit” because of the combination of their daunting size, complexity and obscurity.[23] Society cannot wait years or even months for bureaucracies to eventually get around to formally signing off on audits or assessments, many of which would be obsolete before they were completed. Many AI developers would likely look to innovate elsewhere if auditing or impact assessments became a bureaucratic and highly convoluted compliance nightmare.

Additionally, algorithmic auditing will always be an inexact science because of the inherent subjectivity of the values being considered. Auditing algorithms is not like auditing an accounting ledger, where the numbers either do or do not add up. When evaluating algorithms, there are no binary metrics that can quantify the scientifically correct amount of privacy, safety or security in a given system.
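The absence of binary metrics can be shown with a small, entirely hypothetical sketch. The groups, decisions, and qualification labels below are invented for illustration, and real audits involve far more context; still, two widely discussed fairness metrics, demographic parity (equal approval rates) and equal opportunity (equal true-positive rates), can disagree on the very same set of decisions:

```python
# Hypothetical audit sketch: two common fairness metrics applied to the same
# model decisions can point in opposite directions, illustrating why there is
# no single "correct" number an algorithmic audit can certify.
# All data below is invented purely for illustration.

def selection_rate(decisions):
    """Fraction of applicants approved (1 = approved)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, labels):
    """Among genuinely qualified applicants (label 1), fraction approved."""
    approved_if_qualified = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(approved_if_qualified) / len(approved_if_qualified)

# Invented outcomes for two applicant groups.
group_a_decisions = [1, 1, 0, 0, 1, 0]
group_a_labels    = [1, 1, 0, 1, 1, 0]
group_b_decisions = [1, 0, 1, 0, 1, 0]
group_b_labels    = [1, 1, 1, 1, 0, 0]

# Metric 1: demographic parity gap (difference in approval rates).
dp_gap = abs(selection_rate(group_a_decisions) - selection_rate(group_b_decisions))

# Metric 2: equal-opportunity gap (difference in true-positive rates).
tpr_gap = abs(true_positive_rate(group_a_decisions, group_a_labels)
              - true_positive_rate(group_b_decisions, group_b_labels))

print(f"demographic parity gap: {dp_gap:.2f}")  # identical approval rates
print(f"true-positive rate gap: {tpr_gap:.2f}")  # unequal error rates
```

On these invented numbers, the demographic parity gap is 0.00 while the true-positive-rate gap is 0.25, so an audit keyed to one criterion would pass the system and an audit keyed to the other would flag it. Which gap “matters” is a value judgment, not an arithmetic one.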

Legislatively mandated algorithmic auditing could give rise to the problem of significant political meddling in speech platforms powered by algorithms. In recent years, both Republican and Democratic lawmakers have accused digital technology companies of manipulating algorithms to censor their views. For example, during a heated 2022 debate over a bill to regulate algorithmic content moderation, lawmakers from both parties accused social media companies of censoring them or their favored content.[24] Aside from the fact that both sides cannot be right, the fact that they all want to use government leverage to influence private content management decisions illustrates the danger of mandatory algorithmic auditing. Whichever party is in power at any given time could use the auditing process to politicize terms like “safety,” “security” and “nondiscrimination” to nudge or even force private AI developers to alter their algorithms to satisfy political desires.

Political shenanigans of this sort happened at the FCC when the agency abused its ambiguous authority to regulate “in the public interest” and indirectly censored broadcasters through intimidation.[25] The agency would send radio and television broadcasters letters of inquiry (LOIs) asking about programming decisions and not-so-subtly suggesting how the stations might want to reconsider what they put on the air. This tactic was used frequently enough that it came to be known in policy circles as “regulation by raised eyebrow,” or “regulatory threats that cajole industry members into slight modifications” of their programming content.[26] This became an effective way for the FCC to avoid the First Amendment battles that would have ensued in the courts if the agency had taken formal steps to revoke a broadcaster’s license. The agency used the LOIs in combination with jawboning tactics and other threats in speeches and public statements to shape industry speech decisions. Congressional lawmakers also used these same jawboning tactics in hearings and public statements to influence private content choices.[27] These tactics were used in other ways during merger reviews or other regulatory processes when policymakers realized that they possessed leverage to extract demands from private parties.[28]

It is not a stretch to imagine how regulators or lawmakers could use mandated algorithmic audits or impact statements to unduly influence AI decision-making in similar ways. We have already witnessed intense debates over what constitutes online “disinformation” following a short-lived Biden administration effort to create a Disinformation Governance Board within the Department of Homeland Security.[29] If a new algorithmic oversight law or agency were created, similar fights would ensue. While not explored here, there are potentially profound First Amendment issues at play with the regulation of algorithms. These considerations could become a major part of AI regulatory efforts going forward if the AI auditing process were mandated and then became politicized in this fashion.[30]

Algorithmic Auditing Done Right

Despite these problems, algorithmic auditing and AI impact assessments can still be part of a more decentralized, polycentric governance framework, helping innovators by “ensuring that programs are not inadvertently ‘learning’ the wrong lessons from the information entered into the systems.”[31] Algorithmic audits can help developers continually improve their systems and avoid damaging market losses or liability threats.

Even in the absence of any sort of hard-law mandates, algorithmic auditing and impact reviews represent a sensible way to help formalize the ethical frameworks and best practices already formulated by professional associations such as the IEEE, ISO, ACM and others. Once again, the focus of those efforts is to get developers to think more seriously about how to bake in widely-shared goals and values and consider how to keep humans in the loop at critical stages of this process to ensure that they can continue to guide and occasionally realign those values as needed.

Such an auditing and impact assessment process can be rooted in the voluntary risk assessment frameworks that the OECD and NIST have been formulating. The OECD has developed a Framework for the Classification of AI Systems with the goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”[32]

NIST also recently released a comprehensive Artificial Intelligence Risk Management Framework, which is a voluntary, consensus-driven guidance document intended “to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”[33] The Framework builds on the ethical frameworks developed by the many different organizations mentioned earlier, such as the IEEE, ISO and ACM.

Many AI developers and business groups have endorsed the use of such audits and assessments. BSA|The Software Alliance has said that “[b]y establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.”[34] As noted below, developers can still be held accountable for violations of certain ethical norms and best practices through both private and formal sanctions by consumer protection agencies (like the FTC or comparable state offices) or by state attorneys general.

Independent AI auditing bodies are already developing and could play an important role in helping to professionalize AI ethics going forward. EqualAI is a group that works with lawyers, businesses and policy leaders to create and monitor ethical AI best practices. In collaboration with the WEF, EqualAI is creating a “Responsible AI Badge Certification” program.[35] The WEF has recently produced two major reports that can guide such efforts: “Empowering AI Leadership: AI C-Suite Toolkit” and “A Blueprint for Equity and Inclusion in Artificial Intelligence.”[36] Meanwhile, the WEF is also involved in a partnership with AI Global, a nonprofit organization focused on advancing the responsible and ethical adoption of AI, and the Schwartz Reisman Institute for Technology and Society at the University of Toronto to “create a globally recognized certification mark for the responsible and trusted use of AI systems.”[37]

According to The Institute of Internal Auditors (IIA), a widespread internal auditing profession already exists, with professional auditors “identifying the risks that could keep an organization from achieving its goals, making sure the organization’s leaders know about these risks, and proactively recommending improvements to help reduce the risks.” The IIA collectively represents these auditors, helps establish standards for the profession and awards a Certified Internal Auditor designation through rigorous examinations.[38] Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues to help avoid liability, negative publicity or other customer backlash.[39] “To win customer, regulator, and investor trust,” a journalist explained, “AI companies need to address these concerns proactively, rather than waiting for regulations.”[40]

Meanwhile, the field of algorithmic consulting continues to expand and will supplement these efforts with tailored expert oversight on technical, ethical and legal matters. For example, Cathy O’Neil, a leading AI social scientist, created O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) to help organizations manage and audit algorithmic risks — specifically those pertaining to fairness, bias and discrimination.[41] The legal profession will also expand its focus to assist potential clients on these matters. BNH.AI, launched in 2020, describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.”[42] Other specialized AI law firms like this are sure to develop in coming years.

Another benefit of voluntary AI auditing and impact assessments is that they can also have global reach when companies and trade associations adopt principles and frameworks like those described earlier. Finally, the governance mechanisms discussed here will continue to be supplemented by various hard-law legal remedies to hold developers to the promises they make to the public while also addressing more serious AI harms that emerge or prove too challenging for soft law to address.


Note: Please consult my longer R Street Institute study to examine the various other governance mechanisms I discuss at length there. After this section on audits and impact assessments, my report concludes with an explanation of the many existing ex-post legal mechanisms that can complement various AI soft-law governance approaches. As I noted in a recent Hill op-ed calling for “A balanced AI governance vision for America,” many government policies and bodies already exist to address algorithmic concerns:

The U.S. has 15 Cabinet agencies, 50 independent federal commissions, and over 430 federal departments altogether, many of which already consider how AI touches their field. Consumer protection agencies, like the Federal Trade Commission and comparable state offices, are also taking steps to oversee potentially unfair and deceptive algorithmic practices. Regulatory agencies like the National Highway Traffic Safety Administration, the Food and Drug Administration, and Consumer Product Safety Commission also have broad oversight and recall authority, allowing them to remove defective or unsafe products from the market.

We should first look to tap those many existing legal solutions before adding layers of paperwork-intensive regulation that would undermine important algorithmic innovations with the potential to boost human flourishing along multiple dimensions.



[1] Rich Ehisen, “Could Algorithm Audits Curb AI Bias?” State Net Insights, Feb. 18, 2022; Ilana Golbin, “Algorithmic impact assessments: What are they and why do you need them?” PwC, Oct. 28, 2021.

[2] Jacob Metcalf et al., “Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021), pp. 735–746.

[3] Dillon Reisman et al., “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” AI Now, April 2018.

[4] Jamie Grierson, “Predictive policing poses discrimination risk, thinktank warns,” The Guardian, Sept. 15, 2019.

[5] Andrew D. Selbst, “An Institutional View Of Algorithmic Impact Assessments,” Harvard Journal of Law & Technology 35 (Fall 2021), pp. 117–191.

[6] Emanuel Moss et al., “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest,” Data & Society (June 29, 2021).

[7] “National Environmental Policy Act,” United States Environmental Protection Agency, July 6, 2022.

[8] H.R.6580, “Algorithmic Accountability Act of 2022,” 117th Congress.

[9] H.R.8152, “American Data Privacy and Protection Act,” 117th Congress.

[10] American Data Privacy and Protection Act, § 207(c)(1).

[11] Gregory Falco et al., “Governing AI safety through independent audits,” Nature Machine Intelligence 3 (2021), p. 570.

[12] Eli Dourado, “Why are we so slow today?,” The Center for Growth and Opportunity, Mar. 12, 2020.

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Brian Potter, “How NEPA works,” Construction Physics, Aug. 19, 2022.

[17] Ibid.

[18] Ibid.

[19] Ibid.

[20] Jerusalem Demsas, “Not Everyone Should Have a Say,” The Atlantic, Oct. 19, 2022.

[21] Philip Rossetti, “Addressing NEPA-Related Infrastructure Delays,” R Street Policy Study №234 (July 2021); Jeremiah Johnson, “The Case for Abolishing the National Environmental Policy Act,” Liberal Currents, Sept. 6, 2022.

[22] William Rinehart, “Vetocracy, the costs of vetos and inaction,” The Center for Growth and Opportunity at Utah State University, March 24, 2022; Adam Thierer, “Red tape reform is the key to building again,” The Hill, April 28, 2022.

[23] James Kobielus, “How We’ll Conduct Algorithmic Audits in the New Economy,” InformationWeek, Mar. 4, 2021.

[24] Adam Thierer, “Left and right take aim at Big Tech — and the First Amendment,” The Hill, Dec. 8, 2021.

[25] Randolph J. May, “The Public Interest Standard: Is It Too Indeterminate to Be Constitutional?,” Federal Communications Law Journal 53:3 (May 2011), pp. 427–468.

[26] Thomas Streeter, Selling the Air: A Critique of the Policy of Commercial Broadcasting in the United States (The University of Chicago Press, 1996), p. 189.

[27] Jerry Brito, “’Agency Threats’ and the Rule of Law: An Offer You Can’t Refuse,” Harvard Journal of Law & Public Policy 37:2 (2014), p. 553.

[28] Thierer, “Soft Law in ICT Sectors: Four Case Studies,” pp. 94–96.

[29] Adam Thierer and Patricia Patnode, “Disinformation About the Real Source of the Problem,” Real Clear Policy, May 23, 2022.

[30] Stuart Minor Benjamin, “The First Amendment and Algorithms,” in Woodrow Barfield, ed, The Cambridge Handbook of the Law of Algorithms (Cambridge University Press, 2021), pp. 606–631.

[31] Keith E. Sonderling et al., “The Promise and The Peril: Artificial Intelligence and Employment Discrimination,” University of Miami Law Review 77:1 (2022), p. 80.

[32] “OECD AI Principles overview,” OECD.AI, last accessed March 3, 2023; “OECD Framework for the Classification of AI Systems,” OECD, Feb. 22, 2022, p. 6.

[33] “NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence,” NIST, Jan. 26, 2023, p. 2.

[34] “Enhancing Innovation and Promoting Trust: BSA’s Artificial Intelligence Policy Agenda,” BSA | The Software Alliance, 2022, p. 2.

[35] Kay Firth-Butterfield and Miriam Vogel, “5 ways to avoid artificial intelligence bias with ‘responsible AI,’” World Economic Forum, July 5, 2022.

[36] “Empowering AI Leadership: AI C-Suite Toolkit,” World Economic Forum, Jan. 12, 2022; “A Blueprint for Equity and Inclusion in Artificial Intelligence,” World Economic Forum, June 29, 2022.

[37] Jovana Jankovic, “U of T’s Schwartz Reisman Institute and AI Global to develop global certification mark for trustworthy AI,” Dec. 1, 2020.

[38] “All in a Day’s Work: A Look at the Varied Responsibilities of Internal Auditors,” The Institute of Internal Auditors, last accessed March 3, 2023.

[39] Jeff Bleich and Bradley J. Strawser, “Tool or Trouble: Aligning Artificial Intelligence with Human Rights,” Harvard Advanced Leadership Initiative, April 25, 2022.

[40] Karen Hao, “Worried about your firm’s AI ethics? These startups are here to help,” MIT Technology Review, Jan. 15, 2021.

[41] “It’s the Age of the Algorithm and We Have Arrived Unprepared,” ORCAA, last accessed March 3, 2023.

[42] “Why BNH,” BNH.AI, last accessed March 3, 2023; Seth Colaner, “BNH.AI is a new law firm focused only on AI,” VentureBeat, Mar. 19, 2020.


