Image from http://interactioninstitute.org/illustrating-equality-vs-equity/

Putting the J(ustice) in FAT

Ben Green
Berkman Klein Center Collection
Feb 26, 2018

--

On Day 2 of the Conference on Fairness, Accountability, and Transparency (FAT*), keynote speaker Deborah Hellman opened with an anecdote from philosopher Sidney Morgenbesser. Reflecting on his experience with the police during the Columbia sit-ins in the 1960s, Morgenbesser explained, “It was unjust that the police beat me up, but it wasn’t unfair because they beat everyone up.”

Hellman used this story to motivate a thoughtful discussion of fairness and justice, evaluating the relationship between the two. She emphasized that fairness is a comparative claim — it depends on how other people are treated — while justice is a non-comparative claim — it depends solely on how you are treated. Walking the audience through several thought experiments, Hellman pointed out that many concerns about accuracy and fairness are actually rooted in deeper concerns that relate to justice.

But the talk lacked the critical punchline: because much of what we intuitively conceive of as “unfair” is actually “unjust,” FAT researchers need to more deeply consider social justice as an essential component of their work. Put another way, many of the concerns about biased algorithms are actually concerns about the impacts of predictions and the systems that those algorithms enhance. A predictive policing algorithm might perfectly forecast where crime will occur (hence satisfying notions of fairness), but if those predictions are used by police departments to harass and oppress communities of color, then the algorithm is unjust.

Notably, the conference’s two most powerful sessions were ones that framed the discussion of algorithmic systems within a broad context of social justice. First, Chelsea Barabas gave a powerful talk about how risk predictions naturalize structural oppression and perpetuate the negative impacts of the criminal justice system. Shortly after that, Kristian Lum and Elizabeth Bender provided a tutorial on pre-trial detention and the many injustices within the criminal justice system. Unlike the rest of FAT*, which was largely defined by analyzing technical systems in a vacuum, this tutorial directly grappled with the social and political context of pre-trial risk assessments. The stories told by Terrence Wilkerson about his experience of being twice falsely accused of robbery highlighted the trauma imposed by the criminal justice system. As Lum put it, we cannot claim an algorithm is “fair” if we do not understand the consequences of predictions on the lives of the people about whom recommendations are made.

This raises a key point that far too many in the FAT community overlook: mathematical specifications of fairness do not guarantee socially just outcomes. As algorithms are increasingly deployed in sociotechnical environments, they cannot be fully defined in terms of technical specifications. Instead, algorithms are political artifacts that need to be critiqued and developed as such. This leaves the world of computer and data science at a crucial turning point: we need a new approach that does not confine itself to the narrow bounds of superficial, technical neutrality. Our sociotechnical understanding of algorithms must catch up with our technical expertise, providing a framework that relates computation to social justice.
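To make this concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers) of the kind of check a “technically fair” system can pass. It verifies that a toy classifier flags two groups at similar rates — a statistical-parity-style test — while saying nothing about what being flagged does to a person’s life:

```python
# Hypothetical illustration: a classifier can pass a statistical-parity check
# while the system deploying its predictions remains unjust.
# All names and numbers below are invented for the sake of the example.

def positive_rate(predictions):
    """Fraction of cases that receive a positive (e.g., 'flagged') prediction."""
    return sum(predictions) / len(predictions)

# Made-up model outputs (1 = flagged, 0 = not flagged) for two groups.
group_a_preds = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]
group_b_preds = [0, 1, 1, 0, 1, 0, 1, 0, 1, 0]

# A narrow "fairness" test: are the two groups flagged at similar rates?
gap = abs(positive_rate(group_a_preds) - positive_rate(group_b_preds))
print(f"Statistical parity gap: {gap:.2f}")  # 0.00 here, so the check "passes"

# Nothing in this check asks what being flagged means for a person:
# whether it triggers surveillance, detention, or loss of services.
# That question lives outside the metric, in the deployment context.
```

The metric passes, but every question that matters for justice — who gets surveilled, detained, or denied services as a result — sits entirely outside it.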

Of course, a language of social justice is foreign to many computer scientists. Their curricula and research are technical, purporting to be neutral and scientific, and so they typically lack a vocabulary for talking about justice. Christian Sandvig rightly notes that a justice framework might put off or scare away many. Instead, he asserts, conflating “fairness” with “justice” may provide the benefit of allowing justice-oriented work to creep in without appearing overly political.

But focusing on fairness in the hope that justice will slip in under its cloak has its own, more significant flaws. Most obviously, it leads computer scientists down a rabbit hole, trying to solve the wrong problem. As fairness has become a hot topic in computer science and researchers look for novel angles, many recent papers detect bias or improve fairness in technical systems that have little ultimate social impact — in other words, fairness has become divorced from an underlying conception of justice. We can spend years striving to ensure that every algorithm satisfies technical notions of fairness, but doing so will not meaningfully affect deeper social issues. As Lily Hu quipped, “people have a moral claim to liberty and a lack of oppression, not optimal Netflix recommendations.”

More importantly, highlighting fairness at the expense of justice reinforces the dangerous presumption that technical systems are politically neutral tools. While this view is appealing to computer scientists, it denies the well-established fact that algorithms exist in a social, political, and legal context in which technical systems can bolster power, alter perceptions, and shroud behavior. Neutrality is itself a political position, and adopting it typically results, whether intentionally or not, in reinforcing the status quo. Thus we see a plethora of computational systems designed to optimize traditional power structures and forms of oppression — predictive policing, recidivism prediction, and worker surveillance — but almost none that question or alter the underlying politics. In this light, it is remarkably irresponsible for computer scientists to wade into complex and contentious social and political environments without truly understanding the tools they wield or the impacts they may have.

This tension between technical and social notions of impact was most clearly on display in “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions,” which won the conference’s award for Best Technical and Interdisciplinary Contribution. The paper represents an admirable effort to use data science to prevent child abuse and maltreatment, and it involved significant reflection on the algorithm’s fairness and ethics. Yet the research seemed not to grapple with the actual impact of Child Services on people’s lives. As Virginia Eubanks describes in Automating Inequality, Allegheny County Child Services has a history of over-policing and criminalizing the poor, reducing their autonomy and their ability to provide for their children.

That the technical characteristics of an algorithm are insufficient to describe its impact seemed not to register with the research team, however: when asked whether she had worked with communities and families in developing this algorithm (thank you to Kathy Pham for asking this important question), the lead author responded, “The team has spared the community from me, as I am the statistician.” Having worked as a data scientist on several projects with city governments, I found this response baffling. The most important task in such a role is to engage with the community and other stakeholders to determine how your work will actually affect them. Data often fails to reflect the underlying social reality; the real problem lurks beneath the surface, revealed only through fieldwork. As the bail tutorial so powerfully captured, the human impacts of algorithmic decision-making cannot be captured by data alone.

From my own conversations at the conference, it became clear that there is a critical mass of researchers — spanning computer science, social science, law, and philosophy — eager to engage deeply with the relationship between algorithms and social justice. Much of this work is already underway in places like Data & Society, AI Now, and the Berkman Klein Center for Internet & Society, not to mention Science, Technology, and Society (STS) and similar departments around the world. Given a political moment in which inequality is at a 100-year high and right-wing white nationalist organizations are on the rise, and a technological moment in which online platforms thrive on pervasive surveillance and a lack of accountability, there is right now a crucial need for a sustained research agenda that takes a deeply critical approach to technology.

There is much work for computer scientists to contribute to such a movement, some of it already underway. This includes studying the ways that humans interpret and respond to algorithmic systems, working with policymakers to ensure effective use and regulation of algorithms, developing machine learning models that incorporate causality, and evaluating the relationships between machine learning and the law. In addition, there is great potential for developing algorithms that actively fight discrimination and advance the cause of social justice. Over the past several years, machine learning has been deployed to aid social services in preventing gun violence, identify those in need of mental health resources, predict which police officers will be involved in adverse events, and identify bias in police practices and behavior. Such work applies algorithms not to optimize existing systems, but rather to shift the politics within those systems toward social justice.

The FAT* Conference has the potential to be the home for this important movement. It already hosts a significant body of work along these lines, and much of the community is eager to engage more explicitly with the social justice implications of sociotechnical systems. Explicitly framing justice — in addition to fairness — as a core component of the community and its research agenda will enable FAT to become a truly multidisciplinary conference that yields a deeper understanding of technology’s social impacts and shapes technology to provide a more just future.
