Shaping AI Systems By Shifting Power

Why we need participatory methods in AI, and what this looks like at Data & Society

Data & Society: Points · Oct 18, 2023 · 7 min read

By Meg Young, Iretiolu Akinrinade, Ania Calderon, Rigoberto Lara, Eryn Loeb, and Tunika Onnekikami

Art: Gloria Mendoza

New technologies can feel like swimming in a wave pool, where things keep happening to us rather than being instigated by or for us: data is collected without our consent, systems are installed in our workplaces and communities, and services are automated in ways that shape our lives. Increasingly, changes like these arrive with little warning or explanation: Overnight, the public launch of ChatGPT upended the once-straightforward act of assigning homework to students. And generative AI tools now threaten to displace professional artists, contributing to the historic strikes of Hollywood writers and actors. Despite the high stakes these tools carry for educators, students, artists, and writers, large technology companies did not invite unions or advocates for these professions to shape the ways systems like ChatGPT were designed or deployed.

While this feeling has been especially palpable over the last year, it reflects a long-standing dynamic: neither technology companies nor governments have strong processes for creating technologies with the people who will be impacted by them. The tech industry typically relies on product design processes that lack avenues for public input, even when that input would help software developers better understand a problem or refine their approach. When people are consulted by industry, it is most often for product usability research, to share feedback on whether a product is easy to use — not whether it is compatible with their values. And with algorithmic systems, the most impacted people are often not those who directly use the product, as we see with policing tech. This is especially true in the case of AI, where user input is rarely brought into product development until it’s too late to substantively change models or their underlying datasets.

Governments, too, rarely consult with the public before making high-stakes decisions about technology adoption. On city streets, your face or license plate might be recorded by data collection technologies used by law enforcement or transportation agencies. Many government services are being overhauled with algorithmic systems that are meant to promote efficiency, but which put new intermediaries between you and your local government. This is also happening in workplaces; depending on your job, your employer might be monitoring your productivity or your personal computer. Without the participation of the people who will be affected by these technologies — whether that means soliciting their insight or heeding their refusal — AI is more likely to be based on flawed assumptions that result in harm, like people not getting paid, not receiving benefits, or not having access to equal opportunities.

Participatory methods in AI

Given all this, it makes sense that this moment is crystallizing new interest in public participation in technology design, deployment, and oversight. In the AI field, this interest has built to an inflection point — a “participatory turn.” Participatory methods are being applied across AI development: experts are asked to label data, datasets and machine learning models are co-designed, and individual input is elicited through voting, consultation with civil society organizations, bias and bug bounty programs, red-teaming, focus groups, citizens’ juries, community organizing, and more. To be sure, this turn provokes as many questions as it answers. Who is considered “the public”? What counts as participation? And for what purpose?

Participatory design was first developed in the 1970s in Scandinavia, where trade unions worked with researchers to shape working conditions inside factories. Another forebear, known as participatory action research, coalesced even earlier in multiple countries as an approach to research that followed impacted communities’ lead in identifying problems, collaborating to solve them, and building local decision-making power. Since then, participatory research has grown into a vast array of approaches that vary widely in the degree to which the people invited to participate have any meaningful say over the final outcome.

This power asymmetry gives many people pause. Essential reading by policy analyst Sherry Arnstein warns us that not all participatory methods confer meaningful decision-making power, and that “participation” processes are all too likely to be overdetermined by those with the most power in a given engagement. More recently, work by Alex Ahmed warns that participation processes can defuse dissent and organizing by offering symbolic — rather than actual — power. Mona Sloane et al. offer a framework for thinking about different types of participation: some forms are extractive, in that they rely on participants’ labor to improve existing AI systems; other forms are more consultative, but are too often not longitudinal or well-designed enough to meaningfully empower participants. They argue that for participatory methods to constitute a meaningful step toward justice, they must emphasize long-term relationships and community power over decision-making.

Both Ahmed and Sloane et al. caution against “participation washing,” warning that it’s possible for participation to be nothing more than a veneer on the harmful conditions that preceded it. Many researchers in participatory AI are contending with this body of work and seeking to apply its lessons. Yet timescales, resource constraints, and power dynamics make doing so difficult in practice.

Participatory methods continue to hold promise in AI research, development, and governance because they can offer tools for anticipating, identifying, and averting harm. As systems are developed and deployed by disproportionately white, male, elite teams — often with limited awareness of or regard for their potential harms — working with experiential experts who are closest to the contexts in which systems will be deployed can help surface those harms during development or diagnose them after they have occurred. For example, work on community-driven data labeling at the Invisible Institute helped to repurpose poorly structured police violence records into a resource for institutional accountability. Some forms of participation can also improve community self-determination in the systems used by and for them, such as asking communities to formulate the problems systems set out to address. This is especially important with respect to high-stakes decisions — that is, those related to a person’s civil liberties, health, employment, housing, or basic needs. Notably, AI systems are most likely to be deployed in marginalized communities, where they not only cause harm when they fail, but can also cause harm when they work as intended, such as when they are used to rationalize services.

Our own participatory turn

At Data & Society, the value commitments that animate this conversation are also propelling our own work forward. For example, our recently launched Algorithmic Impact Methods Lab (AIMLab) is founded on the idea that any evaluation of AI systems should be driven by the real-world questions and concerns of people who will be affected. We have committed to incorporating participation into our research from the earliest stages of planning; we are also devising and assessing methods for algorithmic governance that broaden participation to include a diverse set of impacted groups, and increasing engagement with technology and civil rights advocacy organizations to ensure that affected groups are directly shaping tech policy.

We also see this happening inside the culture of our organization. Activist Adrienne Maree Brown considers such internal change to be necessary for the work, and akin to a fractal: “as we are at the small scale” — interpersonally, inside the organization — “we are at the large scale” in our relationships with colleagues, peers, network, and the huge array of advocacy groups, grassroots organizations, firms, and others in the ecosystem working on these problems. Our commitment to participation, too, must be fractal. After ten years of holding independence as a core value, we have increasingly recognized our interdependence with those we consult and collaborate with in our work: across background conversations, formal interviews, community and peer review, and listening sessions, as well as with the community of people who co-host, facilitate, and attend our events. Strengthening just practices within our organization is essential to grounding our work with those outside it, and we acknowledge that people must feel valued and able to see themselves in the work we produce together. Balancing this desire with constraints of funding, time, and capacity can be difficult, but we believe establishing meaningful relationships rooted in trust and reciprocity is worth the effort.

Launching our participation exploration

With this in mind, as we approach Data & Society’s tenth anniversary in 2024, we are kicking off a slate of programming on public participation — what it is, why to do it, what can go wrong, and how to do better. Through these programs, we hope to understand the state of the field in this moment of evolution and learning, to share successes and challenges, and to foster our relationships with peer organizations as well as our peers’ work with others. Here’s just some of what we have planned over the next several months:

  • On October 19, legal scholar and D&S affiliate Michele Gilman will join us to discuss her latest policy brief, Democratizing AI: Principles for Meaningful Public Participation, with computer scientist Harini Suresh and human rights lawyer Richard Wingfield. Learn more and RSVP.
  • A soon-to-be-announced workshop on trustworthy infrastructures will ask: Who has the power to decide what is considered trustworthy, and for whom?
  • A workshop on participatory methods in AI will offer opportunities for those working in this space to reflect and share lessons.

Stay tuned for more, and please get in touch with us at participation@datasociety.net if you are doing work in this space!


Published in Data & Society: Points

Points is the blog of the Data & Society Research Institute.

Written by Data & Society

An independent nonprofit research institute that advances public understanding of the social implications of data-centric technologies and automation.
