Up Next For FAT*: From Ethical Values To Ethical Practices

Feb 9, 2019

A tutorial translating concepts from value-sensitive design and science and technology studies

By Roel Dobbe (@roeldobbe) and Morgan Ames (@morgangames).

Figure borrowed from brodzinski.com

The second Conference on Fairness, Accountability and Transparency in Socio-Technical Systems (FAT*), held in Atlanta last week, celebrated a growing community of scholars and professionals researching, developing, or otherwise engaging with the integration of computing into societal systems, services, and high-stakes decision-making.

Keynote speaker Deirdre Mulligan opened day 2 with a wake-up call. She pointed out that inattention to democratic values tends to be the norm in the procurement and development of algorithms for the public sector. The many modeling choices and assumptions that go into building computational systems have real-world consequences. Mulligan argued that we should be deeply concerned that these human value judgments tend to go unreported, hidden away behind trade secrecy agreements, and shielded from any form of meaningful democratic deliberation.

With her call to attention, Mulligan underlined a concern felt more broadly among the FAT* audience. Inattention to values and sociopolitical context was a critique also leveled at the work presented at the conference itself. Although some efforts were well grounded in practical context, there was a growing awareness that technologists tend to make assumptions about histories and contexts they are not familiar with, without including those who have relevant experience and expertise, or engaging with the constraints of policy and politics.

Some of the presented papers suggested ways to make value choices and assumptions more explicit when building systems. The paper on Model Cards for Model Reporting by Google provides a template for model builders to report the values behind their modeling choices. In his translation tutorial, Jacob Metcalf defined “(un)fairness” as a human judgment and “bias” as a statistical property, arguing that the former cannot be resolved by fixing the latter alone: “Resolving unfairness requires engaging with questions of values.”

In the first session, on “Framing and Abstraction,” Suresh Venkatasubramanian argued that the desire to abstract away the social context of fairness problems in machine learning can lead to various traps, “rendering technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems.” Lastly, a historical account by Ben Hutchinson showed how fairness metrics developed in the 1960s and ’70s failed to have a lasting, meaningful impact, urging the community to connect future work to “human values.”

Despite these positive contributions, the dominant narrative in most conversations still tends to frame computer scientists, and tech professionals more broadly, as the locus of ethical agency, and ultimately as those who should make the call about how to build systems that are fair, accountable, or transparent. Why should a group of professionals with this specific training be in the position to make these decisions in the first place? And how much agency do tech professionals really have if decisions about policies or contracts are made in the boardroom or the sales department? What set of values is at the root of these narratives and of the political reality we operate in?

To start unpacking these questions of values and sociopolitical context, we are publishing our translation tutorial on “Values, Engagement and Reflection in Automated Decision Systems,” which we held on the first day of the conference. Published as a slide deck with detailed notes, the tutorial covers three areas.

In the first section, we motivate why the community, and the engineering and computer science disciplines more broadly, should care about grounding their work in social and political context. The second section introduces methodology from value-sensitive design as a starting point for engaging with the broad set of stakeholders affected by or otherwise engaged in the design of a system. In the third section, we reckon with the inherent limitations of value-sensitive design: it is no panacea for all questions of ethics and values. To help develop literacy and fluency in situated understandings of values, we introduce a selection of works from science and technology studies, a field that has long focused on these challenges.

This tutorial is by no means exhaustive and is part of an ongoing effort to translate across disciplines. We welcome your comments, questions, and suggestions for pushing this work forward and sharing it with a broad audience!

Access the tutorial here.

This work was supported by the Center for Technology, Society & Policy at UC Berkeley. Roel Dobbe is a Postdoctoral Researcher at the AI Now Institute at NYU. Morgan Ames is the Interim Associate Director of Research at the Center for Science, Technology, Medicine & Society and a Lecturer at UC Berkeley.
