Happy Hacker Summer Camp Season! A CRASH Project update, from the team at the Algorithmic Justice League

July 30, 2021

By Joy Buolamwini, Camille François, & Sasha Costanza-Chock

An illustration of people reporting, disclosing, and organizing around algorithmic harms; from AJL's CRASH project harms reporting platform mockups. Credits: Vijay Verma, blush.design, and Isaac Durazo, bocoup.com.

It’s a long, hot, hacker summer! Grab a cool beverage and settle in for this short update on what we’re learning from bug bounty platforms about how to fight algorithmic harms.

In July 2020, the Algorithmic Justice League (AJL) formally launched the Algorithmic Vulnerability Bounty Project (AVBP), since renamed the Community Reporting of Algorithmic Harms (CRASH) project. With the support of the Rockefeller, Sloan, and Mozilla Foundations, we set out to assess the applicability of bug bounty programs to algorithmic harms, and we dove deep into security vulnerability disclosure programs in order to see what we could learn about how to develop more equitable and accountable AI systems.

We immersed ourselves in the short but colorful history of bug bounties. We read everything we could find on the topic; interviewed scholars and practitioners; organized a virtual workshop with community-based organizations that serve people who are harmed by algorithmic systems; and analyzed over 100 reports of algorithmic bias and harm that people have shared with the Algorithmic Justice League through our “Bias In the Wild” form. We’re now in the last stages of preparing our findings for publication.

We’ll be releasing our full report this fall, in which we will summarize what we’ve learned about the applicability of bug bounties and other vulnerability disclosure mechanisms to new fields (in particular, algorithmic harms).

We’ve also incorporated key lessons into the development of a prototype harms reporting platform that we will soft-launch in the coming months.

If you’d like to be notified when the report and platform launch, head over to crash.ajl.org and click on the big red ‘Sign Me Up’ button. In the meantime, read on for more background about the project and for a teaser of some of our takeaways!

Background

The origins of this project stretch back to the 2019 Bellagio Center Residency Program, where AJL’s founder, Joy Buolamwini (AKA the Poet of Code), met CRASH co-lead Camille François. Joy was building on her previous field-shifting work to document and expose bias and harms in AI systems, and Camille had previously undertaken similar work at Google, where she advocated for the company to expand its bug bounty programs to accept reports of algorithmic bias. Joy and Camille were struck by parallels between the emerging field of algorithmic harms research and early developments in the history of infosec. They were both eager to explore what mechanisms could be established to create trust between parties, support researchers, and systematize the discovery and exposure of serious and harmful flaws in algorithmic systems.

In order to more fully explore what collaborative and participatory mechanisms for harms discovery would be appropriate, and what could be learned from the diversity of bug bounty programs, Joy and Camille assembled a crack team of researchers to review the bug bounty and algorithmic harms literature, interview key practitioners, analyze existing programs, platforms, and harms reports, and convene and learn from organizations that work with those most harmed by AI systems. Dr. Sasha Costanza-Chock (at the time a faculty member at MIT, now AJL’s Director of Research & Design) came on board to co-lead this work, enabling us to ground our analysis of current programs and our platform development process in a design justice approach. We brought on two Research Fellows who joined forces across fields: Deborah Raji brought her deep knowledge of how algorithmic harms discovery and exposure currently unfolds, and Josh Kenway brought experience with different approaches and issues related to security vulnerability disclosure. Thus the nucleus of the CRASH project was formed. (Many others have contributed since then, too many to list in this short post; see the full credits when the report drops!)

Our Research Questions, In a Nutshell

From the outset of the CRASH project, three key research questions guided our analysis of bug bounties, as well as of mechanisms that we believe to be inherently related, such as vulnerability disclosure programs and penetration testing:

1. How do these programs contribute to community-building?

Bug bounties and related mechanisms have helped foster community amongst security researchers by creating opportunities for collaboration and exploration. We were keen to learn about how we might structure programs to support the development of a truly diverse and inclusive community focused on algorithmic harms, from weekend enthusiasts to the next generation of researchers in this space.

2. How do these programs advance the state of knowledge and the field?

Cybersecurity bug bounties and related mechanisms are also tied to the creation of learning materials, the development of relevant tooling, and the maturation of security practices across organizations that receive vulnerability reports. Recently, a series of fundamental methods papers has been published in the algorithmic harms space. We wanted to identify how bug bounty-like programs might be able to further advance the state of the field, and to make this body of work accessible to more people.

3. How do these programs enhance or impede transparency and accountability?

Questions around transparency and accountability have played a central role in the development of vulnerability disclosure, and still loom large in the bug bounty space today. Understanding known obstacles to transparency and accountability in cybersecurity provides valuable lessons for the algorithmic harms space, where researchers who expose bias and harms have often been met with adversarial reactions from AI system vendors and operators. This question also seems crucial at a moment when regulators around the world have come to realize that market mechanisms alone cannot hold the perpetrators of socio-technical harm accountable, and are developing and rolling out new controls at many different levels.

Throughout the last year, we learned a lot about various disclosure mechanisms, and we also came to realize that a few key design levers are responsible for shaping the overall structure of bug bounties, vulnerability disclosure programs, and penetration testing.

Key Design Levers

AJL’s overarching goal with the CRASH project is to help build participatory disclosure mechanisms that encourage field building: we want to help foster the creation of an inclusive community of researchers, practitioners, and everyday people who have been harmed by AI systems, who can together take action to help prevent, report, and redress algorithmic harms. Effective disclosure of bias and harm is an important part of the growing movement to demand, and build, more equitable and accountable AI systems. Based on our research, we believe that anyone organizing effective disclosure of AI bias and harm should consider the following design levers in light of the tradeoffs they present and the different objectives they accomplish (a rough sketch of how these levers might fit together follows the list):

  • Reporting Model: Does a particular program or platform solicit reports only for issues that affect organizations that have signed up to receive reports, or can reports be submitted even for non-participating organizations?
  • Compensation Model: How are contributors compensated for their work and expertise?
  • Disclosure Model: Under what terms and on what timeframe are contributors authorized to disclose their findings publicly (e.g., in blog posts or academic research)?
  • Participation Model: To what extent are mechanisms intended for widespread, public participation versus permitting only a limited number of selected contributors?
  • Program Management: To what extent are the responsibilities of program management handled directly by organizations versus by third-party platforms? For example, who hosts program terms, receives reports, validates submissions, triages reports, and verifies patches pre- and post-release?
  • Program Duration: Are programs intended to be temporary or long-lived?
  • Program Scope: What kinds of infrastructure and issues are covered or targeted under a given program?
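To make these levers concrete, here is a minimal, purely illustrative sketch of how a disclosure program’s configuration might encode them. The names and options below are our own hypothetical shorthand for the levers above, not the data model of any existing bounty platform or of our prototype.

```python
# Purely illustrative sketch: one hypothetical way to encode the design levers
# above as a program configuration. Names and options are assumptions, not any
# real platform's schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ReportingModel(Enum):
    PARTICIPATING_ORGS_ONLY = "participating_orgs_only"  # org must opt in to receive reports
    ANY_ORG = "any_org"                                   # reports accepted even for non-participants


class CompensationModel(Enum):
    NONE = "none"            # purely voluntary / reputational
    RECOGNITION = "recognition"  # non-monetary rewards (swag, points, leaderboards)
    BOUNTY = "bounty"        # monetary rewards per valid report


class DisclosureModel(Enum):
    NON_DISCLOSURE = "non_disclosure"  # findings stay private
    COORDINATED = "coordinated"        # public after a fix or an agreed deadline
    FULL_PUBLIC = "full_public"        # contributor may publish freely


class ParticipationModel(Enum):
    PUBLIC = "public"            # anyone may submit
    INVITE_ONLY = "invite_only"  # a limited number of selected contributors


@dataclass
class ProgramConfig:
    """Hypothetical configuration capturing the design levers of a disclosure program."""
    reporting: ReportingModel
    compensation: CompensationModel
    disclosure: DisclosureModel
    participation: ParticipationModel
    managed_by_third_party: bool              # program management: in-house vs. platform-run
    duration_days: Optional[int] = None       # None means a long-lived, ongoing program
    scope: List[str] = field(default_factory=list)  # systems and issue types in scope


# Example: a hypothetical long-lived, public, coordinated-disclosure bias bounty.
example = ProgramConfig(
    reporting=ReportingModel.PARTICIPATING_ORGS_ONLY,
    compensation=CompensationModel.BOUNTY,
    disclosure=DisclosureModel.COORDINATED,
    participation=ParticipationModel.PUBLIC,
    managed_by_third_party=True,
    duration_days=None,
    scope=["image cropping model", "automated content moderation"],
)
```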

Towards socio-technical bounties

Over the past few years, the infosec field has had to reckon with an expanding set of socio-technical challenges, with many teams and defenders adapting to tackle issues ranging from data abuse to disinformation operations. We believe that bug bounty programs, and vulnerability disclosure processes in general, will also have to adapt to a wider set of socio-technical issues and encompass non-traditional vulnerabilities, such as those that lead to algorithmic harms.

Many changes are already underway. Our research team looked into the handful of data abuse bounties that emerged in the wake of the Cambridge Analytica scandal, and into promising innovations hidden in the fine print of program policies. In 2018, Rockstar Games set a fascinating precedent for algorithmic harms bounties in the video game field by adding a reward mechanism to its overall bug bounty program for research that could demonstrate “a reproducible incorrect ban in GTA Online or Red Dead Online … to ensure that [its] anti-cheat system does not ban anyone who is playing the game normally and consistently with [their] terms of service.” (Thank you to Alex Rice for pointing us to this one!) Twitter’s recent announcement of an algorithmic bias bounty competition at DEFCON this year continues this trend, and we’re encouraged to see the Twitter META team go in this direction.

This is an exciting moment to re-evaluate traditional infosec programs and disclosure mechanisms to meet new socio-technical challenges. On the algorithmic harms front, it is apparent to us that a community is emerging, maturing, and ready to engage in organized disclosure processes. To date, such disclosures have often unfolded on social media, with participatory public audits and harms reports building on one another, sometimes snowballing into campaigns that carefully document bias and harms at scale. For example, starting in October of 2020, Twitter users publicly demonstrated that the platform’s image cropping algorithm was biased against women and people with darker skin, findings that inspired the Twitter META team to conduct and publish their own in-depth bias audit. We’ve also observed this in the growing wave of students, professors, community organizers, and independent researchers pushing back against biased technology and the harmful use of e-proctoring software such as Proctorio.

We expect to see more companies and platforms announce algorithmic bias and harms bounties in the coming year, as well as expand the scope of their programs to broader socio-technical issues. We hope that the lessons from our research will be used across the whole ecosystem, whether by those designing new programs, by those adding to their organization’s existing programs, or by hackers expanding the scope of the systems and issues that they explore.

What’s next: our full report, and a prototype harms reporting platform

Awareness of algorithmic bias and harm is growing, both in the research community and in public conversation; to take one prominent example, the film Coded Bias has recently been trending on Netflix. Yet algorithmic harms continue to proliferate across every domain of life, and most people have no way to report harms and no way to seek redress. We took the findings from our bug bounty research, synthesized them into a design brief, and used that brief to inform the development of a prototype platform where people will be able to report their experiences of algorithmic harm in a structured way. We hope these structured reports will be useful to multiple stakeholders, including harmed individuals, community-based organizations, legal teams, journalists, and companies that are developing AI systems, as well as lawmakers and regulators.
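To give a sense of what “a structured way” could mean in practice, here is a purely hypothetical sketch of the kind of fields such a report might capture. The field names below are illustrative assumptions, not the actual schema of the prototype platform.

```python
# Hypothetical sketch of a structured algorithmic harm report.
# Field names are illustrative assumptions, not the schema of AJL's prototype platform.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class HarmReport:
    """One illustrative way to structure a report of algorithmic harm."""
    system_name: str                 # the AI system or product involved
    vendor_or_operator: str          # who builds or deploys the system
    harm_description: str            # what happened, in the reporter's own words
    harm_categories: List[str]       # e.g. "discriminatory output", "wrongful denial of service"
    date_observed: date
    affected_parties: str            # individual, group, or community affected
    evidence_links: List[str] = field(default_factory=list)  # screenshots, posts, documents
    reporter_contact: Optional[str] = None  # optional, to allow follow-up
    consent_to_publish: bool = False        # whether the report may be shared publicly


# Example: a hypothetical report about a biased image-cropping feature.
report = HarmReport(
    system_name="photo cropping feature",
    vendor_or_operator="example social platform",
    harm_description="The automatic crop consistently centered lighter-skinned faces.",
    harm_categories=["discriminatory output"],
    date_observed=date(2020, 10, 1),
    affected_parties="users with darker skin",
    evidence_links=["https://example.com/screenshot"],
    consent_to_publish=True,
)
```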

We will also release our research findings on current bug bounty programs and disclosure mechanisms, in the hope that the design lessons we’ve synthesized will help others who are developing similar programs and mechanisms with an eye towards tackling socio-technical harms.

… And a note of gratitude.

Finally, we also want to thank the infosec community for their great generosity with time and insights as we sought to learn what has worked, and what hasn’t, with current systems. While their willingness to share their expertise shouldn’t be taken as an endorsement of this post or of our findings in their totality, we want to unequivocally thank Alex Rice, Amit Elazari Bar On, Dino Dai Zovi, Jack Cable, Katie Moussouris, Lisa Wiswell-Coe, Marcia Hofmann, Mårten Mickos, Rayna Stamboliyska, Ryan Ellis, and Yuan Stevens.

If you have strong feelings about the applicability of bug bounties to other fields, or relevant research you want to ensure we consider in our work, please don’t hesitate to reach out to our research team with your enquiries: contact Camille François, cfrancois@cyber.harvard.edu.

Stay tuned for more, and don’t forget to sign up for notification of the report drop and prototype launch over at crash.ajl.org!


Algorithmic Justice League

The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence.