Slowing Down, Asking Questions, Looking Ahead

How Mozilla Will Drive Change on Internet Health in 2019 and Beyond

By Ashley Boyd

Online life moves quickly. Even after just a short time away from my screen, my inbox is filled with new emails, and a flurry of tweets, texts and instant messages have appeared. We expect responses and decisions nearly instantaneously. (And hey, Google, your ‘smart reply’ AI for email with its short, substance-free responses isn’t helping.)

Slowing down to think carefully about ideas, options and decisions feels radical. And yet, careful deliberation is precisely what is needed to improve our lives online. The philosophy of “move fast and break things” is breaking us.

To do our part to repair the internet, we at Mozilla have spent several months asking ourselves and others, “What concrete improvements to the health of the internet do we want to make over the next 3–5 years?”

The process has been refreshingly deliberate and detailed. In September, we started to look at four areas where we could focus our internet health work: the ad economy; respect online; digital bodies; and better machine decision making. The aim was to pick a high-impact goal that we could bet up to 60% of our program resources on over the coming years — and one that we could work on with others across the movement of organizations working on internet health.

We talked to over 50 experts as well as dozens of people at MozFest to ask how and why we might focus our efforts on one of these topics. We had five criteria in mind as we had these conversations:

  • Ambition: importance and scale
  • Winnability: likelihood of success
  • Momentum: some of our allies are already pursuing this goal
  • Resonance: current and potential resonance with the public
  • Fit: leverages Mozilla’s brand, expertise and programs

As we did our analysis, we went back and forth in extensive debate, with different options moving in and out of the ‘favorite’ position.

Our Path Forward

After this extensive and open process, and with unanimous support from our board, the Mozilla Foundation has decided to focus on better machine decision making as a place to put significant program resources over the next few years.

We’ll seek to increase public awareness about when and how machines are making decisions and identify ways to fix the mistakes that machines make. In the last week alone, we’ve seen several cases where algorithms have been scrapped once their biased outcomes were uncovered. After a front-page article in The Washington Post detailed how the Predictim service uses an algorithm to generate a ‘risk rating’ for babysitters, Twitter blocked the company’s access to the platform’s data. In another case this week, Google announced it would stop using gendered pronouns in its email AI tool (the same one I complained about above). The change came after a Google researcher discovered the algorithm automatically identified an ‘investor’ as male.

As these examples illustrate, there are countless opportunities to raise awareness about machine decision making, improve outcomes and create accountability. Our approach assumes that the use of AI will dramatically increase worldwide and that machine decision making can be used to benefit people and society. However, we also believe that an ethical approach to AI won’t happen spontaneously. We’ll need an ethical alternative; a vocal and opinionated public demanding accountability in this space; and guidelines that set norms about when and how machine decisions are made, and for whose benefit.

We know that there are many organizations already making progress in AI and machine decision making. And we heard from experts around the world that Mozilla’s expertise, perspective and tools are still needed. In talking with nearly 50 experts during this process, we heard some powerful reasons for Mozilla to tackle this work, including:

  • Most people don’t know how often machines make decisions for them — and worse, don’t know that those machines can sometimes make mistakes or make decisions that harm them. This is a big issue and is only going to get more prominent in the coming years. It’s something we can tackle through convenings, fellowships, education and campaigns.
  • There are a lot of groups in this space already — AI Now, The Partnership on AI and Algorithm Watch to name a few. We either already work with these groups or are talking about how we can start to. But there are significant gaps too — and many opportunities to form new alliances.
  • We heard a lot about how this goal was forward looking and a place where groups would be excited to work with us more.

The moment I had an inkling that this might be the right path came when a leading AI researcher and funder shared, “Everyone is talking about it now — that automated decisions should serve the public interest. How you actually get this done is another question. How do you hold companies to account for that and ensure they aren’t just green washing? This is a nice sweet spot in terms of automated decisions.”

Mozilla has already invested heavily in this area, work that we can build upon and deepen going forward. Everything from the Responsible CS Challenge, to our fellows and campaigns focused on the Internet of Things, to our Creative Media Awards, to our Holiday Buyer’s Guide touches on machine decision making.

And we have additional choices to make as our investments increase. As one internet health leader offered, “There are already initiatives here but that doesn’t feel threatening, it feels complementary. You need to define your space and make sure it’s complementary.”

The Roads Not Taken

Our decision was not easy. The other leading options were each exciting, ambitious and appealing in their own ways. Personally, I changed my mind about my ‘top choice’ no fewer than 10 times, influenced by the news of the day or an exciting interview with an expert. However, we did a thorough analysis of all four leading options and ultimately the ‘right fit’ issue became clear when we reviewed the choices against our criteria.

Here’s a snapshot of the other options and reasons why we didn’t pick them:

  • Online Ad Economy: This was the leading favorite for many when we began, but as we had more and more conversations, this felt like the fight of yesterday instead of the fight of tomorrow. More importantly, many people we interviewed pointed out: we can’t make a major shift on this issue without having a real alternative to Google and Facebook’s ad platforms. MoFo’s movement building work is not likely to drive this particular change over the next few years. There is great work being done here by Mozilla fellows and others, and we are committed to supporting this work with smaller, targeted investments.
  • Respect Online: As we dug into the question of how to confront harassment and create safer spaces online, it became clear that the most likely places for intervention were design and tooling, along with improving measurement in the space. We could tackle these things, but they are narrow in scope and don’t fit with most of the work we are already doing today. However, it was clear from all the interviews that there is significant interest and opportunity in tackling gender issues through our work. We will integrate thinking around gender — and other aspects of inclusion — into the design of the machine decision making goal. The work here seems crucial and the path is clearer than before, but it is core work that we should tackle as part of the impact goal rather than treating it separately.
  • Digital Bodies: We heard that this issue was up and coming and exciting, but the surface area for impact seemed smaller than we anticipated. Work here could drive progress in terms of privacy, but it wouldn’t get us to many of the power and ethics issues that are clearly emerging around AI. Also, it was clear from the interviews that this is not an area where our existing fellows or campaigns are focused — we would need to take on this work from a standing start. Finally, there are other people already leading in this space — we can support them through our long tail internet health work.

I left this process convinced that each of these topics is timely, ambitious and critical in its own way. By investigating them deeply with our partners and allies, I understand better how Mozilla and the larger movement can have a positive impact in these areas, even at smaller investment levels.

What’s Next

This decision is an exciting milestone and starts a new cycle of questions and discovery. I’m equal parts excited and intimidated by the challenge before us.

Here are some important questions we’ll pursue immediately with staff, fellows and partners:

  • What do we want to pursue in the short, medium and long term?
  • Who else is working on this? How can we support, collaborate and learn from them?
  • How do we use our thought leadership, fellowships and campaigns to go after these pursuits?
  • How do we approach this issue outside of North America and Europe?
  • What’s the best way to measure our progress against this goal? How does this map to our Theory of Change?

We’ll be providing answers to these questions and others to our board in early 2019 as we prepare a budget that reflects our decision to dedicate 50–60% of our resources to the goal of ‘improving machine decision making.’

Going forward, I will continue to provide updates about our work and reflections on the process. I am excited to deepen our alliances around ‘improving machine decision making,’ and to support other organizations in finding their own pathways to impact.

We and our organizations deserve the experience and the focus that comes from thinking deeply and (relatively) slowly.