Organizers: Kristen Vaccaro, Karrie Karahalios, Tad Hirsch, Deirdre Mulligan, and Daniel Kluttz
When Facebook decided to suspend Lance Brown Eye’s account, he found he could open an account as “Lance Brown,” but not with his real name.
“They had no issue with me changing my name to a white man’s name, but harassed me and others, forcing us to prove our identity while other people kept whatever they had.”
Although Native American users could often re-open accounts by showing government identification, Facebook kept making decisions the same way.
It was only through a campaign of media attention and significant pressure that Native Americans were able to influence Facebook to change how it implemented its real name policy.
These kinds of systematic problems in decision making can exist in many algorithmic systems, online and off.
To help design systems that can address these issues, we organized a workshop on designing for contestability in algorithmic systems.
Read on to see what came out of the workshop, and read all the way to the end for the workshop’s mini film festival of Contestability in Pop Culture!
The workshop was held at the ACM Computer Supported Cooperative Work and Social Computing (CSCW) conference in November 2019. Participants, with backgrounds including law, computer science, and art and design, came from as far afield as KAIST in South Korea, Delft University of Technology in the Netherlands, and Austin, Texas to take part in the conversation.
The workshop comprised brief lectures, open-ended discussions, and a number of group activities. Out of these activities, a number of findings, and even more questions, emerged.
1. Identifying Contestability in Current Systems
One goal for the workshop was to identify where contestability might play an important role. Participants shared a wide range of examples of where and how they thought contestability might contribute.
This sample shows that contestability can play an important role in a huge range of systems — from constantly updating online systems (like facial recognition) to more stable systems (like healthcare predictions) to physical infrastructure (like urban design).
Recent work found racial bias in an existing, deployed healthcare system that offered additional support services to patients predicted to have worse outcomes. The discussion highlighted that, to an end user, this system looked like either a welcome offer of more health care or nothing at all: the people who did not get an offer do not know what they are missing. So part of designing for contestability may involve revealing the existence of the algorithmic system in the first place.
Charging stations for electric vehicles are a current challenge in urban design. Charging is not uniformly distributed: some neighborhoods have many more electric cars, and cars are often charged at similar times. So some are experimenting with algorithmic systems to allocate power to vehicles. But these systems do not treat all cars the same all the time: everything from battery capacity to whether the sun is out to whether the driver participates in carpooling could play a role. Ensuring that everyone understands the system poses important challenges for transparency and consensus building.
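To make the allocation idea above concrete, here is a toy sketch of what such a policy might look like. Every factor, weight, and function name is an invented assumption for illustration, not a description of any real charging system:

```python
# Toy sketch of algorithmic charging allocation. All factors and weights
# below are hypothetical, chosen only to mirror the examples in the text.

def charging_priority(battery_level, solar_available, carpools):
    """Score a vehicle's claim on limited charging capacity (higher = sooner)."""
    score = 1.0 - battery_level  # emptier batteries get priority
    if solar_available:
        score += 0.2             # surplus solar power favors charging now
    if carpools:
        score += 0.3             # reward shared use of the vehicle
    return score

def allocate(vehicles, slots):
    """Grant charging to the highest-priority vehicles for the slots available."""
    ranked = sorted(vehicles,
                    key=lambda v: charging_priority(**v["factors"]),
                    reverse=True)
    return [v["id"] for v in ranked[:slots]]

fleet = [
    {"id": "car-a", "factors": {"battery_level": 0.9, "solar_available": False, "carpools": False}},
    {"id": "car-b", "factors": {"battery_level": 0.2, "solar_available": False, "carpools": True}},
    {"id": "car-c", "factors": {"battery_level": 0.5, "solar_available": False, "carpools": False}},
]
print(allocate(fleet, 2))  # → ['car-b', 'car-c']
```

Even in this tiny sketch, the contestability questions from the workshop apply: a driver denied a slot may never learn which factor (or weight) was decisive, and the choice of factors itself encodes values that drivers may want to dispute.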
Facial recognition systems have recently come under intense scrutiny. They make errors, propagate social biases, and constrain complex human identities, like ethnicity and gender, into simple schemas. Just as important, they often collect and use data in ways that preclude user knowledge and consent. While there are opportunities for users to shape the decision making at many points, from data annotation to user-facing deployment, these require conscious engagement and a willingness to accept drops in performance.
2. Connecting Contestability to Values
One theme that emerged from the workshop was connecting contestability to, and distinguishing it from, other values that have been highlighted in design in recent years, like trust, transparency, accountability, and explainability.
- Users need to understand decision making systems before they can shape and influence them — connecting to work on explainability.
- Users feeling that their voice has been heard in decision making systems can help develop trust.
- Transparency can help users understand enough to contest decision making processes, but as others have argued before, transparency can also be used to confuse, obscure or even endanger other users.
The workshop also highlighted the importance of making embedded values explicit in designing for contestability. As people seek to influence and shape decision making systems, it is important for them to understand why the system was designed as it was.
For example, in designing social media platforms, values designed into the system might include equality and free expression, but also maximizing attention. Being explicit about these goals may make some ways of changing decision making easier and others, which conflict with those values, much more difficult.
3. Identifying Types of Contestation
Finally, discussions identified several types of contestation people might want, or different ways that people might want to weigh in on the decision making process.
- I don’t like how I’m being measured
- I don’t like that I’m being measured
- I don’t agree with this specific prediction
- I don’t agree with the target outcomes
And for those with greater understanding of the decision making process, additional types might include:
- I don’t agree with the data selected for training
- I don’t agree with the features being used
- I don’t agree with the model chosen
More extreme versions may also exist: for example, believing that all training data is biased by multiple, often interconnected, historical, social, political, and cultural factors, and that these factors must be accounted for.
When designing for contestability, deciding which opportunities to provide, and for whom, is important. It is likely to influence how useful, convenient, and productive people’s engagement is.
What should we aim for?
One of the first issues that arose in the workshop was the definition of contestability. In the past, researchers have distinguished contestability from other forms of contestation, like appeals, by arguing that contestability requires building mechanisms for users to shape and influence decision making into the system itself.
However, some participants argued that alternative aims are preferable.
Designing so that users can change or influence a system often also means they can game it.
To avoid this, can contestability be viewed as part of a broader ecosystem rather than a single technology or platform? For example, a system could provide data, allowing third parties to build their own tools without incorporating them into the system itself.
One of the biggest challenges we identified is the question of who participates in contestable systems.
Often the question boils down to “who has standing” (in a legal setting) or which stakeholders’ voices should be heard. This alone poses challenges: how can designers identify all of the stakeholders? Currently, many designers and developers make assumptions that do not capture the full set of stakeholders.
In addition, stakeholders’ goals and values often conflict. How do designers address the very different goals and experiences of these stakeholders? How do they design for stakeholders to establish internal consensus?
But choosing to contest decisions takes time, energy, expertise, and an expectation of being listened to. So not everyone participates equally.
For example, many users whose social media accounts were suspended chose to open new accounts with “white” last names, rather than continue to try to persuade Facebook to change their approach.
Prior education, socioeconomic status, current job, and many other factors can influence who can and will get involved. For example, higher-income parents are more likely to contest test scores and grading in schools.
Similar issues connected to structural inequalities are likely to arise in contesting decisions in other algorithmic systems, so designing to address these issues is important.
How do we develop community and processes?
One common theme that emerged was that successful contestability practices are likely to involve existing communities and established processes. One example that was shared was that of Wikipedia.
The Spanish Wikipedia community collectively shut down a bot that was designed to undo edits that would ‘damage’ a page. In doing so, they organized a public audit of how the bot was performing and used a consensus-building discussion to make that decision.
Participants did not need prior expertise in machine learning evaluation to take part; even without it, they could discuss “how, when, why, and under what conditions” the bot should be used.
This consensus-building discussion worked so well because Wikipedia has an existing community of editors and established processes for discussion.
Many other domains do not have these existing communities and processes. So in part, designing for contestability may require designing to create them.
The workshop also included a mini film festival of short clips exploring how contestability has been explored in pop culture.