Q&A: Sarah Gold on new models of collective consent

People + AI Research @ Google
Aug 17, 2020
Sarah Gold, illustrated by Shannon May for Google

This is the first in a series of Q&As with design and UX practitioners sharing perspectives on participatory machine learning.

Sarah Gold is a designer, founder and CEO of Projects by IF (IF), a technology studio that specializes in practical and ethical uses of data and AI. In March this year IF launched the manifesto for Society-Centered Design, a new methodology to put society, not individuals, at the center of products and services. Sarah spoke with PAIR lead Jess Holbrook about the ideas in the manifesto. This post has been collaboratively edited with former PAIR writer-in-residence David Weinberger.

Jess: What led you to the idea of collective models of consent?

Sarah: At Projects by IF we work with organizations that hold, or are about to hold, lots of data. It’s clear to us that more public value could be derived from commercially held data. But right now that’s hard to do. A big blocker is the way consent works, or rather, doesn’t work. It’s individualized, opaque and tends to be done to people rather than with them.

This is part and parcel of how individualized the world around us, including our technology, has become. Individualism has been the major design principle since the 1970s. It’s even seeped into our data protection frameworks. For example, the European Union’s GDPR [General Data Protection Regulation] provides individualistic rights around data. But data rarely represents just one person.

Jess: Where in technology do you see this individualism expressed?

Sarah: To begin with, the purpose of a company is usually to create revenue, and a great way to make money is to make products and services that individuals want to use. We design amazing products for individuals, whether it’s the mini super-computers in our pockets or the knowledge that you can press a button to order a taxi that will come directly to where you are. Those experiences are fantastic for some individuals. They give some people superpowers. But by focusing only on individual needs, we might end up serving the needs of certain groups while missing the bigger social perspective. It’s also exclusionary to groups who are deemed to have less purchasing power.

Jess: So technology’s focus on the individual goes deeper than just personalization …

Sarah: Personalization is where the focus on individuals has taken us, and that may not necessarily be a bad thing. But with society-centered design we’re calling for awareness of the bigger picture, and a more equitable approach. We’re asking for a new value system, one that proactively dismantles existing biases, is equitable, and demonstrates care for people.

The opportunity and need is to think more about the social context and the social effects of the technology we’re building and using. The methodologies behind the personalized products we use and enjoy, such as human-centered design, jobs-to-be-done, and design thinking, have encouraged teams to focus on user needs, which tend to be customer needs. Who’s left out? Who are we not thinking about?

Jess: And design centered on society requires a different idea of consent, right?

Sarah: Yes. Data often represents many people. For example, your DNA represents not just you, but also your family members.

Jess: Great example. There’s nothing more personal than your DNA. And in the world of tech?

Sarah: If you’re a parent, your location data might not be just your location but also your kids’. After all, we’re not fundamentally individuals. We live through relationships with others. So, it shouldn’t surprise us if data about ourselves often represents others too.

Here’s another example: smart thermostats. It’s great that they encourage us to live in greener ways; in this case, gathering information that may drive behavior change in households seems like a good thing.

Jess: But …

Sarah: …one person is often responsible for managing the thermostat, energy data, and bills, but the energy data represents the activities of a whole household. It’s conceivable that one person can see the activities of everyone else. Just think about the issues that could cause for someone who’s in an abusive relationship. Their abuser could see when they’re in the house or what they’re doing.

Jess: Not to mention trying to untangle your social networking data so that it doesn’t include any information about anyone else.

Sarah: Yes. The idea that privacy can be individualized is a bit of a myth. Privacy is what I decide other people aren’t allowed to know about me, so you need other people for even the concept of privacy to make any sense.

And even the individualistic ways in which we currently manage data are quite difficult. Terms and conditions are inadequate ways for individuals to understand how data that represents them will be used. The way that we’ve designed consent right now is more about theater than it is about agency.

Jess: And that’s where collective models of consent come in?

Sarah: Yes. Even when data is collected with individual consent, much of its value lies in how it can be used for public benefit. Collective consent is a way to give the communities affected by those decisions greater participation in how the combined data collected about them is used. We think it makes sense for teams working on AI too. Individual models of consent that ask for just-in-time consent aren’t adequate for teams developing machine learning models. Collective consent offers a way to continually innovate with machine learning models while being confident your customers and communities approve of what you’re doing.

In health care they talk about “social licenses” entered into with a community, which clearly state what data can be collected, what it can be used for, and why.

This makes the giving of consent a much more deliberative process. Consent would no longer be about an individual facing an “I agree to the terms and conditions” button, but would instead be a conversation about your values, what’s important to you, and what benefit you and others will get from collecting that information in that way. It brings the voices of the community into that process.

Jess: What might the mechanism be for coming to this collective consent?

Sarah: Sometimes an elected official might make that decision. But there’s a huge space between elected officials and the company creating the software — a huge space for innovation. We need to be thinking about more democratic ways to garner consent, and ways to listen to communities too rarely heard. We need to take a more agile approach to what could work.

Jess: For example…

Sarah: Participation up until now has largely been about round-table discussions, such as citizen juries. Citizen juries are a tool for engaging citizens on different issues: typically, randomly selected citizens come together over a number of days to discuss an issue and reach consensus on it. In other situations, consent may require a religious leader to agree that this is a route they should take; permission from the imam might be really important. These participation models need to be culturally competent.

Cultural competence is really important, because you’re more likely to get a diverse, representative cohort if you design with that in mind. Representative cohorts are crucial to designing racism, sexism and ableism out of technology.

At IF we’ve been asking what it would look like for a non-profit group to serve as an intermediary between communities and the organizations that are using their data. The non-profit might receive a privacy-preserving summary of data from a commercial organization. It could share that summary with the collective, which could then comment on that data. That could happen through in-app features or in-person activities, such as citizen juries, that make decisions about how that data can be used.

These are all nascent ideas and need testing.

Jess: And you’ve called these “bespoke” models of collective consent because they might be specific to a community or a data-based project?

Sarah: Exactly. I believe we need to explore using technology to enable a differential style of consent, where consent is tailored to the communities it represents and the data being permissioned.

Jess: Digitization is making obvious just how many data contracts we’ve entered into without even being aware of them. It’s a really challenging management problem, not only because of the volume of these contracts, but because they are often at different levels of collectivity, from the truly individual to the massively collective. It feels like there’s an accelerating need for navigating all those layers.

Sarah: There’s an urgent need to address this. We know how to design products that gather the consent of many individuals. It becomes far more challenging when you’re looking at how to understand data about you that gets aggregated with the data of other people, especially when, at scale, that data can provide insights, and possibilities of control, that otherwise wouldn’t be possible. Making decisions at that level is a gray space.

And it can be highly political. Many of these data sets are held by private companies. There are important questions we should be asking ourselves about how we rationalize the types of data we need. As more of our experiences become digitized, this line gets blurrier. Where does your agency as a person actually lie? How does the agency that democracy grants you work in the digital space, especially when we know much of the data is held by private corporations? How do we hold private corporations to account?

Jess: Great questions! And the answers are…?

Sarah: Well, we’re working on it. Truthfully, there need to be many answers to this problem. Other work I’d recommend reading is by Anouk Ruhaak, Sean McDonald and Inioluwa Deborah Raji. Watch this space!

Opinions in PAIR Q&As are those of the interviewees, and not necessarily those of Google. In the spirit of participatory ML research, we seek to share a variety of points of view on the topic.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.