Addressing Algorithmic Bias On All Fronts

by Koustubh “K.J.” Bagchi & Livia Luan

Artificial intelligence has transformed the way we make decisions. We listen to Spotify playlists recommended to us based on our listening history, and we purchase products on Amazon without realizing that algorithmic recommendations influence more than one third of our choices on the platform.

Although these algorithms simplify our lives, certain forms of advanced algorithmic decision-making perpetuate bias and create real-world harm. Risk assessment tools, for example, evaluate a criminal defendant’s background in order to produce a recidivism score that a judge can consider while making a sentencing decision. Since these tools often use algorithms trained on historical crime data shaped by the disproportionate targeting of low-income communities and communities of color, they generate patterns that reflect racial disparities in the criminal justice system. According to the MIT Technology Review, the potential for real-world harm becomes tangible when these correlations are interpreted as causation, resulting in high recidivism scores for members of these communities and consequently “[perpetuating] embedded biases and [generating] even more bias-tainted data to feed a vicious cycle.”

The root of this problem is algorithmic bias, which can develop at several stages of the deep learning process. The MIT Technology Review has identified three key stages: problem-framing, data collection, and data preparation. First, as computer scientists determine what a deep learning model should achieve, they might select a goal without considering whether it produces discrimination as a byproduct. Second, they may use biased training data, in that the selected data set is unrepresentative of the U.S. population and/or reflects longstanding prejudices. Finally, in deciding which attributes the algorithm should consider or ignore, such as age, gender, or income, computer scientists influence not only the model’s prediction accuracy but also how its errors and outcomes are distributed across different groups of people.
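To make the data collection and data preparation stages concrete, the brief Python sketch below uses entirely synthetic data and hypothetical feature names to show how a model trained on historically skewed decisions can reproduce those disparities. It is a minimal illustration of the dynamic described above, not a depiction of any real system or vendor tool.

```python
# Minimal sketch: a model trained on historically skewed decisions
# reproduces the disparity. Data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: group membership is unrelated to the underlying
# qualification score, but past decisions (the training labels) favored group 0.
group = rng.integers(0, 2, size=n)
score = rng.normal(0.0, 1.0, size=n)
past_decision = (score + 0.8 * (group == 0) + rng.normal(0.0, 0.5, size=n)) > 0.5

# Data preparation: the modeler chooses to include the group attribute as a feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, past_decision)
pred = model.predict(X)

# The model learns the historical disparity even though "score" is group-neutral.
for g in (0, 1):
    print(f"group {g}: positive-prediction rate = {pred[group == g].mean():.2f}")
```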

In an effort to address algorithmic bias, lawmakers at all levels of government are working on legislation that seeks to promote transparency and accountability. Sponsored by Senators Cory Booker and Ron Wyden in the Senate and by Representative Yvette Clarke in the House of Representatives, the Algorithmic Accountability Act would require companies to determine if the algorithms powering their tools are biased or discriminatory and if they threaten consumers’ privacy or security. At the local level, the New York City Council in 2017 passed a bill establishing a task force, which has since examined the city’s existing technologies and released a checklist for determining whether a tool or system is an “automated decision system” as defined by local law. Moreover, at the state level, lawmakers in the Washington House of Representatives held a hearing in February 2019 on a bill that would establish guidelines for the procurement and use of automated decision systems in government and, if passed, would represent the first statewide regulation of algorithms in the country.

While legislative activity is certainly a sign of progress, digital-based companies should take proactive steps to tackle algorithmic bias at each stage of the algorithmic development process by adopting self-regulatory best practices for reducing consumer harm, such as those outlined by experts at the Brookings Institution. In the problem-framing stage, operators of algorithms should draft a bias impact statement, in which they identify the algorithm’s purpose, process, and production, as well as any potential negative or unintended consequences that may result and for whom. Operators should also adopt inclusive design principles by considering the role of diversity within their work teams and training data, and the level of cultural sensitivity within their decision-making processes. In addition, operators should establish a regular process of auditing algorithms for bias and invite feedback from developers, civil society organizations, and impacted parties. Finally, the creation of cross-functional work teams that bring together experts from different departments and disciplines could facilitate a more thorough review of algorithms.
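As one illustration of what a recurring audit might examine, the sketch below reports selection rates and true-positive rates separately for each group on a hypothetical audit set. The metric choices, function name, and toy data are assumptions made for illustration; they are not prescribed by the Brookings recommendations or any particular law.

```python
# Minimal sketch of one piece of a recurring bias audit, assuming the operator
# has predictions, true outcomes, and a group attribute for a held-out audit set.
import numpy as np

def audit_by_group(y_true, y_pred, group):
    """Report the selection rate and true-positive rate for each group."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()
        positives = mask & (y_true == 1)
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "true_positive_rate": tpr}
    return report

# Toy example with hypothetical labels and two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(audit_by_group(y_true, y_pred, group))
```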

As researchers continue to study the nuances of algorithmic bias, some are even designing algorithms and processes to mitigate it. One research paper argues that “data collection is often a means to reduce discrimination without sacrificing accuracy” and develops a procedure for analyzing different sources that contribute to discrimination. Another proposes a “novel, tunable algorithm for mitigating the hidden, and potentially unknown, biases within training data.” Additionally, the Gender Shades project, created by researcher Joy Buolamwini, offers a process for evaluating the accuracy of gender classification products powered by artificial intelligence.
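In the spirit of that evaluation process, the sketch below reports a classifier’s accuracy separately for each intersectional subgroup rather than as a single aggregate figure. The subgroup labels and toy data are hypothetical and do not come from the Gender Shades benchmark itself.

```python
# Minimal sketch of disaggregated evaluation: report accuracy per intersectional
# subgroup instead of one overall number. Subgroup labels and data are hypothetical.
import numpy as np

def accuracy_by_subgroup(y_true, y_pred, subgroups):
    """Return classification accuracy computed separately for each subgroup."""
    return {sg: (y_true[subgroups == sg] == y_pred[subgroups == sg]).mean()
            for sg in np.unique(subgroups)}

# Toy example: "gender x skin-type" subgroup label for each test image.
subgroups = np.array(["F-darker", "F-darker", "F-lighter", "M-darker",
                      "M-lighter", "F-lighter", "M-darker", "M-lighter"])
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 0])

print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")
for sg, acc in accuracy_by_subgroup(y_true, y_pred, subgroups).items():
    print(f"{sg}: accuracy = {acc:.2f}")
```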

Beyond the ongoing work of lawmakers and researchers, civil society organizations and everyday consumers can also take steps to address algorithmic bias. In addition to urging digital-based companies to set high standards for their employees by implementing the aforementioned policies, civil society organizations and consumers should regularly engage these companies in conversations about the importance of incorporating diversity and equity into their work. By raising these issues, civil society can help programmers and other employees at technology companies become more attuned to the significance of algorithmic bias and how it manifests in the algorithmic development process, and, ideally, more willing to address it. These measures are critical for ensuring that artificial intelligence serves as a tool for social progress.

Koustubh “K.J.” Bagchi is the Senior Counsel for Telecommunications, Technology, and Media at Asian Americans Advancing Justice | AAJC. Livia Luan is the Programs Associate and Executive Assistant at Advancing Justice | AAJC, where she supports the telecommunications, technology, and media program on rapidly evolving issues such as digital privacy, digital equity, and facial recognition technology.
