A New Executive Order Ties Equity in AI to a Broader Civil Rights Agenda

Here’s what should happen next.

Janet Haven
Data & Society: Points
6 min read · Feb 23, 2023


In an executive order (EO) issued last week, President Biden made protecting the public against algorithmic discrimination central to his administration’s equity agenda. This EO, directing the federal government to do more to advance racial equity, builds on one the president issued on his first day in office, ordering his administration to center equity across all of its work. Last week’s EO was heartening for anyone who cares about equity in America — and perhaps especially so for those of us who have been paying attention to the ways artificial intelligence and algorithmic decision-making systems increasingly shape our everyday lives. That’s because it draws a direct connection between racial inequality, civil rights, and automated decision-making systems and AI, including newer threats like algorithmic discrimination. Understanding and acting on that connection is vital to advancing racial equity in America.

Government use of artificial intelligence and data-centric, automated systems has profound implications for individuals and society. From access to housing, healthcare, and public benefits, to the use of algorithmic risk-assessment scoring in the criminal legal system (which informs whether a defendant is offered bail or returned to jail to await trial), to decisions about the path of a refugee through the US immigration system, the government’s use of opaque and often unaccountable algorithmic systems has been shown to produce biased results, violate privacy, and increase surveillance, particularly for historically marginalized groups. An executive order that understands artificial intelligence as a central part of the work to advance equity in America is a true victory, one that is a product of ongoing work by scholars and activists who have raised the alarm again and again, as well as by thoughtful government officials who have incorporated this evidence into policy-making.

The executive order puts forth specific directives related to artificial intelligence and equity, including a definition of “algorithmic discrimination” taken directly from the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, and a clear statement that “when designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so …in a manner that advances equity.” The EO directs agencies to consult their civil rights offices “on decisions regarding the design, development, acquisition, and use of artificial intelligence and automated systems.” Further, it directs agencies to affirmatively advance civil rights, including “by protecting the public from algorithmic discrimination,” one of the five protections the Blueprint lays out.

Yet my enthusiasm is tempered with a dose of reality. There are some major challenges standing in the way of this EO’s ability to truly reshape the government’s approach to using AI.

First, we face a critical gap between skills and needs. Federal agencies are understaffed and overworked, and largely lack the interdisciplinary expertise required to make equity in AI a success. This work is often perceived as belonging to the domain of technology, and we hear calls to bring more technologists into government. But ensuring that AI is used equitably in government should not be the exclusive domain of technologists — what we are facing is a societal issue, not a technological one. What we need in federal agencies are people and teams who bring together legal, technical, sociological, and lived expertise across a range of contexts, from access to public benefits via algorithms to navigating a criminal legal system in which algorithms determine an individual’s future. As a nation, we’ve only begun experimenting with creating leadership roles that foreground this expertise. Dr. Alondra Nelson, who recently left the Office of Science and Technology Policy after two years of distinguished service, was our nation’s first senior official in the role of deputy director for science and society. Given her standing as a distinguished scholar of science and technology, it is no coincidence that it was under her leadership that the nation was introduced to the Blueprint for an AI Bill of Rights, the first statement from the federal government outlining an equity-focused, human-centered framework for the governance of AI.

Second, we face a resource deficit. Professor Daniel Ho of Stanford’s Human-Centered Artificial Intelligence Institute, together with his students Christie Lawrence and Isaac Cui, looked at the outcomes of the AI in Government Act (2020) and two executive orders that followed (“AI Leadership” and “AI in Government”). They found that “implementation has been lacking,” pointing to a lack of capacity and resourcing to fulfill even basic transparency requirements. Last week’s EO tells us that the Biden administration does not believe that equitable and rights-respecting AI will happen without intentional action; it also will not happen without adequate resourcing to allow agencies to meet the president’s directives. Congress needs to act to allocate resources to meet these ambitious but critical plans, to bring in the kinds of new skillsets and teams mentioned above, and to deepen and expand action across all agencies.

Third, we face a methodology gap. The EO directs agencies to design, develop, acquire, and use AI and automated systems in a manner that “advances equity.” But what does it mean for agencies to advance equity? What would it look like? What tools, approaches, benchmarks, and assessments should agencies have in place? The good news here is that we are not starting from scratch. The Blueprint for an AI Bill of Rights offered practical approaches to applying the principles it named, and work to explore protections against harms from algorithmic and AI systems has already begun in a number of agencies. NIST’s recent AI Risk Management Framework offers additional pathways and models for process. Additionally, a robust field of scholarship has emerged around designing methodologies and practices for algorithmic accountability, from audits of algorithmic systems, which can assess whether a system does what it purports to do, to algorithmic impact assessments, which assess the often unseen impacts of a system on individuals and groups. Combining these types of tools with a human rights framework and clear benchmarks can provide a baseline for how agencies assess whether and how they are advancing equity, and what kinds of trade-offs they are making in adopting automation to address dynamic societal challenges and needs. But much more work needs to be done to develop these methodologies fully, and then to build them into policy, practice, and law with rigorous enforcement behind them.

The EO further directs agencies to affirmatively advance civil rights, in part by increasing “coordination, communication, and engagement with community-based organizations and civil rights organizations.” Again, this is a powerful statement from the president about the importance of participation — particularly of historically vulnerable and marginalized groups — to advancing equity, including in the governance of artificial intelligence. But community participation in algorithmic governance is often an afterthought, dismissed on the assumption that technical expertise is required, or out of fear about “scale,” given how widespread the use of highly impactful automated decision-making systems is. Other sectors — notably the environmental justice movement — have shown us that meaningful participation is possible and should be built in from the beginning; as with accountability methodologies, a field of scholarship and practice has emerged around public participation in algorithmic governance that can provide guidance to the federal government.

This executive order represents a powerful commitment from this administration, tying equity in the use of AI to an overall equity agenda. But ultimately, EOs will not be enough. Congress needs to act to protect the American public from algorithmic discrimination, unsafe and ineffective systems, abusive data practices, and the negative impacts of AI by companies as well as the federal government. It could start by passing the American Data Privacy and Protection Act and the Algorithmic Accountability Act of 2022. It could also examine the steps the EU is taking in its landmark AI Act to enshrine critical protections and accountability measures in law, and work toward American legislation with similar goals.

AI is a tool that can advance the public interest if used carefully and judiciously, but can too easily bring about real harms — foreclosure of opportunity, loss of access to justice, and even loss of life — in the absence of that care and caution. President Biden’s executive order recognizes this, and gestures toward what rights-respecting technology policy could look like in practice. The key will be implementation and enforcement.
