The Social Life of Algorithmic Harms

This series of essays seeks to expand our vocabulary of algorithmic harms — and with it, our capacity to defend ourselves against them.

Data & Society: Points · Feb 15, 2023

By Jacob Metcalf, Emanuel Moss, and Ranjit Singh

Image: Gloria Mendoza

With artificial intelligence — computational systems that rely on powerful algorithms and vast, interconnected datasets — poised to affect every aspect of our lives, its governance ought to cast an equally wide net. Yet our vocabulary of algorithmic harms covers only a small proportion of the ways these systems negatively impact individuals, communities, societies, and ecosystems: surveillance, bias, and opacity have become watchwords for the harms we expect effective AI governance to protect us from. By extension, we have only a modest set of potential interventions to address these harms. Our capacity to defend ourselves against algorithmic harms is constrained by our collective ability to articulate what they look and feel like.

To expand our vocabulary of algorithmic harms, in 2022 Data & Society convened a workshop that asked researchers and advocates from around the world to consider novel algorithmic harms that are underappreciated by current approaches to AI governance, as well as methods that are emerging to better understand, evaluate, and assess these harms. Rather than start from the problems for which developers can most readily identify technical solutions — like privacy and unfairness, which have received the lion’s share of attention in the world of AI governance — this workshop began by looking at the social life of algorithmic harms.

In mapping the social life of algorithmic harms, workshop participants challenged the notion that algorithmic harms are merely technical problems in need of technical solutions. In his introduction to The Social Life of Things, Arjun Appadurai invites anthropologists to “follow the things themselves, for their meanings are inscribed in their forms, their uses, their trajectories… [that manifest in] human transactions and calculations that enliven things.” Along similar lines, the workshop’s participants followed algorithmic systems themselves to see how their meanings manifest in social exchanges and lived consequences. Among the myriad meanings they traced, they identified poignant instances that offered a new window into algorithmic harms that have gone largely unnoticed by technologists and other AI professionals — harms inflicted at the intersection of medicine and technology, through applications in child protective services, and as impacts on the natural environment.

Now, we are pleased to present a collection of short essays written by the workshop participants, highlighting novel forms of algorithmic harms and how their implications change as they move through social exchanges. In some of these essays, participants reflect on their own deeply personal experiences with algorithmic systems. These essays highlight that the work of deepening our vocabulary to articulate algorithmic harms often begins with the personal and moves into the social. But being able to talk about harms is only the beginning. This series will also offer insights into developing methods for assessing and measuring these harms.

We hope that the series will serve as a repository for those who wish to assess algorithmic systems for potential harms, as a model for those motivated to contribute their own experiences with algorithmic harm to our expanded understanding — and as a demonstration that the work of our contributors in mapping the social life of algorithmic systems and their harms is crucial in the journey toward governing AI in the public interest.

Explore the series

- The language of harms and wrongs helps us understand what constitutes algorithmic harm, and how to address it.
- By reinforcing high-carbon practices, algorithmic information systems contribute to climate change.
- The AI system that is keeping me alive is ruining my life.
- Bridging critical gerontology and data and information scholarship is a win-win.
- Algorithms have added more barriers to the evidence-based decision-making that children in the care of child protective agencies so badly need.
- These tools illustrate the kinds of insidious algorithmic harms that rarely make headlines.
- Without meaningful transparency, enforcement of any civil or consumer rights is nearly impossible.
- Reflections on terms of service, gender equity, and chatbots.

Data & Society is an independent nonprofit research institute that advances public understanding of the social implications of data-centric technologies and automation.