Accountable Algorithmic Futures

Building empirical research into the future of the Algorithmic Accountability Act

Andrew Selbst
Data & Society: Points
5 min read · Apr 19, 2019


By Data & Society Postdoctoral Scholar Andrew Selbst and Research Leads Madeleine Clare Elish and Mark Latonero

Last week, in an important development in the push for accountability around algorithmic decision making, Senators Cory Booker and Ron Wyden and Representative Yvette Clarke introduced the Algorithmic Accountability Act of 2019. The Act aims to address algorithmic and privacy harms through an “automated decision systems impact assessment” (ADSIA). More specifically, it authorizes and commands the Federal Trade Commission to issue regulations, which in turn, will require larger companies to conduct these impact assessments.

We’re excited for this step towards accountability. Impact assessments have been around as a policy tool since the National Environmental Policy Act (NEPA) took effect in 1970. The tool is designed to be a “systematic, interdisciplinary approach” to using the sciences to inform policy. Since then, impact assessments have been used in a wide variety of contexts (e.g. privacy, data protection, sentencing, legislative budgeting) and at all levels of government. The bill applies a well-established regulatory model to the new problem of algorithmic decision-making.

We also believe empirical research will be crucial to implementing effective regulation. At Data & Society, the AI on the Ground Initiative is focused on centering ethnographic methods of empirical inquiry in AI research, with the goal of informing AI policy.


The ADSIA requires, at a minimum, “a detailed description of the automated decision system, its design, its training data, and its purpose.” The rest of the bill’s requirements are left open, described as “assessments” of costs, benefits, and risks, as well as descriptions of the measures taken by companies to address those risks. This is a form of legislation very common in the administrative state: create a standard, include some baseline requirements, and leave the remaining details to a more nimble regulator (here, the FTC) to figure out when it issues regulations. In fact, NEPA itself followed this model, setting some baselines, establishing the White House Council on Environmental Quality, and then directing that body to come up with the specifics we know today.

This bill is therefore a great launching point for accountability, while still leaving a lot open for future regulators to flesh out. For example, what will count as sufficient detail? Who decides whether “consultation with external third parties” is “reasonably possible?” What do companies have to do to “reasonably address … the results of the impact assessments?” If this bill passes, the details will wholly determine its effectiveness, so the next step will be to start thinking about how it will play out in practice.

This is where social science comes in. To effectively implement the regulations, we believe that engagement with empirical inquiry is critical. But unlike the environmental model, we argue that social sciences should be the primary source of information about impact. Ethnographic methods are key to getting the kind of contextual detail, also known as “thick description,” necessary to understand these dimensions of effective regulation.


At the moment, we are thinking about two types of questions that will be important to the regulations.

First, we are interested in questions about the general effectiveness of impact assessments as a tool. Environmental impact statements in particular have had their proponents and detractors over the years. Critics argue that impact statements are costly, that they allow opponents to tie up projects in litigation for years, or that, because of how the regulations are structured, the requirement to perform impact statements is too easily avoided. Proponents counter that the transparency itself is valuable, and that a primary goal is to get bureaucrats and project managers to consider concrete risks earlier in a project. So even when impact assessments are avoided by changing the project, they may still be succeeding. In the case of algorithmic impact assessments, it will be beneficial to understand the ways in which existing forms of impact assessments achieve their policy aims, and where they may fall short.

By investigating how government actors and advocates have used, and continue to use and think about, impact assessments, we can better understand what does or does not make them effective as a tool, and tailor the new regulations in a way that learns from the past.


Another dimension we need to understand more clearly is the everyday practices and needs of the different organizational actors involved in building algorithmic decision-making systems. We need to understand how people within the relevant companies conceptualize, design, and implement them. What many policymakers and regulators often fail to appreciate is how much the cooperation of the subjects of regulation matters to making the regulation work. For instance, legal scholar and sociologist Lauren Edelman has articulated the concept of “legal endogeneity” to describe how, when regulations are flexible and the regulated company decides how to comply, companies will often undermine the purpose of the regulation, turning it into a meaningless checklist. If the regulations require designers or operators of automated decision systems to conceptualize or document their work in ways that do not align with how they make sense of their jobs, they may resist by failing to comply or by complying in minimalistic, unhelpful ways. A sociological lens can help illuminate appropriate points of institutional intervention, or types of considerations that need to be reflected in the specific requirements of a given regulation.

As Congress and the FTC consider regulation, we encourage them to engage with the growing community of scholars studying how to evaluate and best plan for algorithmic decision making. Specifically, they should not only rely on legal and policy analysis, but also take into consideration the sociological dimensions of putting good policy into practice.

Authors’ note: Data & Society Fellow Mutale Nkonde was among those who briefed Rep. Clarke in relation to the Algorithmic Accountability Act. Read her post about public interest technologists.

While she is a colleague of the authors, “Accountable Algorithmic Futures” reflects the views of the authors alone.

Andrew Selbst

Postdoc at Data & Society & future law prof at UCLA. Researching law and tech, with a current focus on AI/ML and regulation/liability.