Raising the Bar for Algorithmic Impact Assessment

There is an urgent need to develop methods for assessing algorithmic systems that center the public interest.

Jacob Metcalf
Data & Society: Points
March 17, 2022


By Jacob Metcalf and Emanuel Moss

(Illustration: Lars Madsen.)

Access to basic social and economic goods is increasingly controlled through opaque, proprietary mathematical formulas. Whatever practical benefits algorithmic systems might produce, we, as a society, have no reliable means of understanding how these tools work, or don’t, or of making democratic decisions about when, how, and why they should be deployed. Globally, very few jurisdictions empower any type of democratic controls over these opaque tools. The situation is unacceptable.

Yet if we want to exert democratic control, where do we begin? We see an urgent need to develop methods for assessing algorithmic systems that center the public interest.

In pursuing ways of making algorithmic tools more accountable to the public interest, community advocates, scholars, developers, and regulators are increasingly looking to “algorithmic impact assessment” as a body of practices that can help minimize and mitigate the risks and harms of such tools. Calling for the impacts of algorithmic tools to be assessed is imperative, but currently there is no consensus about how to do so. Proposals for Algorithmic Impact Assessments are inspired by impact assessment regimes from other domains, the most familiar being environmental impact assessments and human rights impact assessments. But innovative methods for assessing the impacts of algorithmic tools specifically are sorely needed.

We’re not alone in recognizing the need. The Ada Lovelace Institute, Algorithmic Justice League, Mozilla Foundation and Open Source Audit Tooling Project, DAIR Institute, and others are doing crucial work to study and model effective assessment of algorithmic systems. At Data & Society, the AI on the Ground Initiative (AIGI) is learning from and building partnerships with community advocates, technical and qualitative researchers, and developers to incubate the algorithmic impact assessment methodologies needed to produce accountability for algorithmic systems. As we get started, we want to share a snapshot of our current thinking and where we’re headed.

–Jacob Metcalf, AIGI Director, and Emanuel Moss, Postdoctoral Scholar

The momentum for Algorithmic Impact Assessments (AIAs) has grown in recent months. In February 2022, Sen. Ron Wyden (along with many co-sponsors) introduced the revised Algorithmic Accountability Act of 2022 in the US Congress. This bill would require impact assessments when companies use automated systems to make critical decisions; reports would be submitted to the Federal Trade Commission, which would then release key information via a publicly accessible repository. Across the Atlantic, the EU’s AI Act is currently being revised within the European Parliament. While it does not include an explicit obligation to conduct an “algorithmic impact assessment,” the AI Act does require “conformity assessments” that would entail many of the same practices as the AIAs proposed in the US Congress. Together, these proposals would radically change tech regulation by requiring developers, for the first time, to assess and report on the consequences of their “artificial intelligence” products and services, providing new opportunities for exerting democratic control over the algorithms that increasingly govern social and economic opportunities and access.

There remains a major problem: Even if bills calling for AIAs passed tomorrow, there isn’t widespread agreement yet on how to do the assessment, how to report the impacts of that assessment in ways that lead to democratic governance of algorithmic systems, or how to effectively and fairly integrate the interests of impacted communities in such practices. While there have been some promising pilot studies and a preliminary AIA tool out of Canada, they are oriented toward documenting the fact that an algorithmic system is in use and has certain properties. They fall short of providing the basis for the public to understand how such systems are intended to work or how they might impact individuals, communities, and the environment.

How do we empirically justify what goes into an impact assessment? What are the best methods for providing that empirical basis? Once we decide what should count as an algorithmic impact, what is the best way to conduct assessments so that the public interest, the interests of the most affected communities, and the best available expertise go into studying likely impacts?

As our previous research has shown, impact assessment in other domains has often taken an organic path, integrating input from regulatory agencies, advocacy groups, impacted communities, litigation and judicial decisions, and industries. These competing interests negotiate an evolving set of standards about what adequate and complete assessments consist of.

In no domain are impact assessments the be-all and end-all of regulatory action. For example, environmentally destructive projects are still approved after an environmental impact report is submitted. But the transparency and documentation structures of impact assessments should enable the public to know and contest the outcomes of proposed deployments. This would be a major step forward for algorithmic systems, where the status quo affords the public almost no democratic leverage. And impact assessment regimes have an important secondary effect: they incentivize knowing and tracking much more about the behavior of these systems than we currently do.

Given the complexity of social life and the role of technology in shaping it, assessing how these systems actually impact people and communities will require new methods:

  • research methods for better understanding the technical and societal dimensions of algorithmic impacts;
  • collaborative methods for bringing together the concerns of assessors, developers, stakeholder groups, and communities around common or overlapping concerns;
  • documentation methods for reporting out the impacts of algorithmic systems in ways that make tradeoffs between benefits and risks visible; and
  • legal methods that protect the interests of stakeholders, providing a foothold for contesting where and how algorithmic systems are used, and that make algorithmic impact statements functional artifacts within existing and future accountability frameworks.

To answer the call for new methods for AIAs, we are in the process of developing a multidisciplinary research effort that invites partners from across civil society to help incubate impact assessment methods.

This effort will be dedicated to researching, testing, and propagating viable methods for measuring algorithmic impact in ways that produce public accountability. This is distinct from strictly technical auditing, which measures the performance of a system on its own terms, asking, “Does it do what the developers claim?” Rather, we will be asking, “What are the consequences of building and deploying this system?” This facilitates revisions to the system, development of alternative systems, and forms of redress for impacts, ultimately building footholds for public interest accountability regarding the impacts these systems have on individuals and communities.

Our focus will be on adapting inclusive, participatory research approaches to develop workable, practical, and useful methods for understanding algorithmic harms in context, centered on the interests of impacted communities and on their own terms. These methods will need to bring together community advocates, technical auditors, social scientists, and tech companies, along with the forms of expertise (and interests) they hold. This effort will also need to incorporate the knowledge of legal scholars, agency administrators, and policy experts to ensure that methods for impact assessment are compatible with existing (and proposed) legal and administrative frameworks.

One major feature of AIAs included in the Algorithmic Accountability Act — the direct inclusion of impacted communities — promises to be a potent means of ensuring real accountability. That power of inclusion can only be brought to bear with a robust process.

A central mission of this effort will be developing methods for foregrounding the lived expertise of directly affected communities in the set of impacts for which algorithmic systems might be assessed.

By setting a high bar for what a maximally robust impact assessment process ought to be — with full involvement of community groups, open-ended exploration of the types of algorithmic harms that can be assessed in an AIA process, and practical reporting that serves the public interest — our work will serve as a counterbalance to the tendency of regulation to allow industry actors to grade their own homework.

Ultimately, we plan to test the methods we develop by deploying them. By undertaking several trial algorithmic impact assessments, conducted in partnership with developers of algorithmic systems, we can “pressure test” the methods we develop and showcase how robust impact assessment in the public interest can work.

Regardless of how regulatory approaches come to use formal AIAs to govern technological development, methods for assessing algorithmic systems will be a crucial component among the array of measures needed for AI governance and accountability broadly. Our goal is to help ensure those methods serve the public interest.

Please stay tuned. To follow AI on the Ground Initiative efforts around Algorithmic Impact Assessment methods, subscribe to the Data & Society newsletter and follow @datasociety.
