Trustworthy Infrastructures: Exploring Emerging Approaches to Building Trust Online

What might be possible if trust were not merely a value claimed by powerful institutions, but understood and explored as a risky, imperfect practice?

Data & Society: Points
Oct 19, 2022

By Sareeta Amrute

Trust is not a one-size-fits-all concept. It is constructed and produced in ways that serve some communities and exclude others. Many of our current technological systems are designed in ways that inspire mistrust. When it comes to social media companies, for example, ad-driven business models, inadequate policies, and algorithms optimized for engagement combine with social and political dynamics to erode trust in institutions.

The ways people come to trust one another, trust numbers and data, and trust democratic processes are shaped by both social and technical systems. In other words, trust is a sociotechnical phenomenon. How are trust and safety encoded in our sociotechnical systems? And who has the power to decide? As data-driven and algorithmic systems continue to be essential, if often unseen, underpinnings of our social, economic, and political lives, why is so much effort being put into producing untrustworthy technical infrastructures? And how have minoritized communities produced their own forms of contingent trust?

Technology companies have created “Trust and Safety” teams and “trusted flagger” programs to complement their security processes, hoping to convince users and non-users alike that they can be trusted, and, in turn, that the infrastructure they’ve developed can be relied upon to enable trusted relationships with others. But it’s clear that we cannot rely solely on private technology companies to define trust and safety. While global platforms favor universal solutions, we believe that making more trustworthy systems requires the expertise of those whom existing systems have failed, now and historically.

To create trustworthy infrastructures, we believe it’s critical to approach this work from the perspective of marginalized communities that are disproportionately harmed by the status quo. That’s what our new Trustworthy Infrastructures program will do. Over the past half-decade, many skilled and thoughtful research groups and individuals have documented what online spaces look and feel like when they are not trustworthy: mis- and disinformation spread; marginalized communities are targeted with abuse and harassment; privacy is compromised; data is insecure. But as a research field, we know far less about how communities create economic, social, and political life in these untrustworthy spaces. What might be possible if trust were not merely a value claimed by powerful institutions, but understood and explored as a risky, imperfect practice?

Under this new program, we seek to gain a clearer picture of emerging approaches to building trust online, and the possibilities they set in motion. The program will center the work of people of color and members of vulnerable communities in producing new empirical research to shape the development of trustworthy infrastructures and the policies and regulations that govern them. In this, we build on Data & Society’s early work on mis- and disinformation (including our six-year program on Media Manipulation and Disinformation and our experimental, award-winning Disinformation Action Lab), and on a body of work by scholars including Chaz Arnett, Ruha Benjamin, danah boyd, Anita Say Chan, Sasha Costanza-Chock, Meredith Clark, Joan Donovan, Marisa Duarte, Alice Marwick, Tamara Nopper, Jonathan Corpus Ong, Murali Shanmugavelan, Gabriella Spears-Rico, Catherine Knight Steele, Kim TallBear, and Ethan Zuckerman. We are also guided by the critical work of organizations including Data for Black Lives, the Distributed AI Research Institute, Tierra Común, Baraza Media Lab, and Khabar Lahariya.

We will ask: What are minoritized communities doing to uphold their own structures of communication? For example, what can be learned from the experience of building cultures of trust in Black, Indigenous, and Latinx communities in the US around Covid vaccination? What do digital communications infrastructures look like from the perspective of those who have been relegated to the “waiting room of history”: women, gender non-conforming people, caste-oppressed groups, Indigenous communities, and communities of color who have been repeatedly told that they are not qualified to be full participants in decision-making? How are they repurposing these infrastructures, including communications, data collection, and automated decision-making, to serve their own needs? How does trust get established within digital infrastructures, and how do social media platforms themselves contribute to or undermine trustworthiness? For example, how might we draw upon restorative justice practices and diverse practices of anti-caste activism to redress online harms and start to build back trust? What might these platforms look like if those most disproportionately harmed shaped their design? What would systems of redress and accountability look like?

The answers to these questions have profound implications for democratic participation. Grounded in Black feminist, Indigenous, anti-caste, and anti-colonial theory (and recognizing communities of color as pioneers in creating safer spaces online), our research will advance current thinking on democratic participation, on the one hand, and disinformation, on the other.

Under this program, our postdoctoral scholar Tiara Roxanne is exploring how trust in infrastructures is built in Indigenous communities. Senior researcher Robyn Caplan is considering how platform companies are using “verification” to differentiate the trustworthiness or “authenticity” of users and of the goods available over their networks. Research analyst Joan Mukogosi is examining how Black experts and communities navigate anti-Black information landscapes in order to advocate for their health and wellbeing. In an effort to build community-led approaches to safety online, my own research seeks to learn from activists in the Indian diaspora, and explores how cybersecurity can be co-constructed through community tools and collective assessments of risk and safety.

A trustworthy infrastructure doesn’t come in a box; it’s not something any one company can provide, nor a problem any one company can solve. Establishing trust is a process, one that’s driven by people. We aim to produce first-hand, empirical knowledge about how non-dominant groups are building information infrastructures, sometimes within and alongside existing platforms. We will use that research to draw attention to the kinds of solutions communities actually need tech companies and policymakers to enact.
