In April, two of the organizers of the Google Walkout, Meredith Whittaker and Claire Stapleton, came forward with the stories of the retaliation they’ve faced as a result of speaking out at the company. Claire left Google in June—yesterday was Meredith’s last day.
Here’s the note she shared internally:
July 10th was my 13-year Google anniversary, and today is my last day.
My experience at Google shaped who I am and the path I’m on. It’s hard to overstate how grateful I am for the teachers, mentors, and friends along the way, or how surreal this moment is. I still can’t imagine my badge not working.
The reasons I’m leaving aren’t a mystery. I’m committed to the AI Now Institute, to my AI ethics work, and to organizing for an accountable tech industry — and it’s clear Google isn’t a place where I can continue this work.
This has been hard to accept, since this work urgently needs doing. Google is one of the most powerful organizations on the planet; I’ve had the privilege to see it grow from a few thousand committed people to the behemoth it is today.
The company has emerged as a global leader in AI (the result of some combination of strategy, luck, timing, and massive centralized data and compute resources). This has helped propel Google’s entry into “new markets” — healthcare, fossil fuels, city development and governance, transportation, and beyond.
The result is that Google, in the conventional pursuit of quarterly earnings, is gaining significant and largely unchecked power to impact our world (including in profoundly dangerous ways, such as accelerating the extraction of fossil fuels and the deployment of surveillance technology). I’m certain many in leadership — who learned what Google was and why it was great over a decade ago — don’t truly understand the direction in which Google is growing. Nor are they incentivized to.
How this vast power is used — who benefits and who bears the risk — is one of the most urgent social and political (and yes, technical) questions of our time. And we have a lot of work to do. The AI field is overwhelmingly white and male, and as the Walkout highlighted, there are systems in place that are keeping it that way. This, while marginalized populations bear most of the risks of biased or harmful AI. The AI industry and the tools it creates are already widening inequality, enriching the powerful and disadvantaging those who are struggling.
Addressing these problems, and making sure AI is just, accountable, and safe, will require serious structural change to how technology is developed and how tech corporations are run. Ethical principles and in-house ethical reviews are a positive step, but we need a lot more.
I’ve had an amazing time here. I climbed my way from an entry level role at Google in 2006 to an established position as a researcher and public voice on AI issues. I marshalled and presented evidence in the service of more accountable technology. I’m proud of what I did, and grateful to work with amazing colleagues.
I have tried hard to offer evidence and pathways for positive structural change, but over time I realized that my presence “at the table” was more about the appearance of inclusive debate than about seriously contending with the problems in the company. In the meantime, the issues of AI, bias and inequity grew more urgent, and I became increasingly worried.
Part of my response was to co-found the AI Now Institute at NYU with Kate Crawford, establishing a home for rigorous research that could examine the social implications of AI, and communicate this to the public. This has been an unqualified success, and we’ve already had extraordinary impact across research and policy. The other part was to begin organizing: history shows that centralized power rarely concedes without collective action.
What began as an experiment — can we apply labor organizing to address tech’s ethical crisis? — became one of the most difficult and gratifying efforts I’ve ever been involved in. Organized tech workers — you! — have emerged as a force capable of making real change, pushing for public accountability, oversight, and meaningful equity. And this right when the world needs it most.
Leaving Google is deeply emotional for me, and I don’t know all of the ways I’ll miss it. I’m lucky because I get to continue my work at AI Now. And I’d be much sadder if I didn’t see many hundreds of Googlers establishing themselves as leaders, contributing their brilliance to organizing, and refusing to stand silent in the face of leadership’s dangerous complicity. Please, keep going!
The stakes are extremely high. The use of AI for social control and oppression is already emerging, even in the face of developers’ best intentions. We have a short window in which to act, to build in real guardrails for these systems, before AI is built into our infrastructure and it’s too late.
I offer my unwavering support and love to those of you who continue to do amazing work here, and who have taken risks to support others. In solidarity with all of you who will continue this essential work within Google, I’ll close by offering an incomplete map of where I see future tech organizing moving.
- Unionize — in a way that works
There are good unions and there are awful unions, but building structural power that will allow Google workers to hold leadership accountable is something worth doing. And generally, this is called a union. This doesn’t mean letting an outside union “organize” Google and dictate worker concerns (this would be a bad model, in my view). In many places it’s quite possible to DIY a union. It does mean continuing to build strong relationships with each other, and doing this in a way that recognizes both prior art and the significant, specific concerns plaguing the tech industry — including its outsized influence on all other sectors. And it means continuing to place equity concerns at the center of organizing, and including TVCs at the helm of decision-making — the company (and “the future of work”) is moving in a direction where soon everyone but upper management will be a TVC. In considering which structure best accomplishes these goals, I would advocate boldness, remembering that the labor protections we have were won through organizing and collective action, not the other way around.
- Protect conscientious objectors and whistleblowers
We’ve seen too many reports of retaliation and punishment against those who speak up about unethical projects and toxic workplace conditions. This serves to prevent necessary change and to make accountability impossible. Google needs worker-led structures that can ensure it’s safe to speak about the darker side of the company. These should include protections for whistleblowers who alert the public to dangerous or unethical projects that put them at risk. The public deserves to know how, and where, powerful technical systems are shaping their lives and opportunities.
- Demand to know what you’re working on, and how it’s used
Too often, those designing and developing technical systems don’t know how they’ll be used, or by whom (see: Maven, Dragonfly, etc.). The right to know what you’re working on, and how it’s applied, should be recognized as fundamental. And to uphold this right, Google’s infrastructures and processes need to adapt, providing a “chain of title” from design through to application. This is also a structural requirement for meaningful accountability and compliance. Such a demand should be at the core of ethical organizing, and could be extended to ensure that the public is aware of where specific technologies that impact their lives and communities are being applied, and by whom.
- Build solidarity with those beyond the company
The application of Google’s tech goes well beyond the relatively homogeneous Google campuses (“billions of users or none,” I’ve heard many an exec opine). As such, people living in contexts well outside of Google are often in the best position to speak to the true impacts of Google’s tech — whether it be the click workers producing training data for AI models, or the communities most impacted by YouTube’s engagement-driven algorithm. Holding Google accountable and ensuring a safe workplace will require that tech worker organizers form strong alliances with independent researchers, journalists, and communities on the front lines. This has the added benefit of building more powerful organizing structures.
 I use Google in place of Alphabet, as it’s more readable, and whatever the corporate structure, we’re talking about a single company that by and large relies on a shared set of centralized resources.
 I use the term “AI” loosely, to include machine learning and related technologies that rely on data and encoded assumptions to “understand” a given domain, topic area, etc., and work to apply this understanding to the interpretation and classification of novel data inputs.
 The sale of Cloud AI APIs means that the reach and implications of Google’s AI offerings spread well beyond the company’s official product offerings, and are largely obscure to Googlers and to those most affected by the use of these technologies.
 While I’m focusing on Google, for obvious reasons, this critique applies to a number of other large tech companies, including Amazon, Facebook, and Microsoft. Given the computational and data resources required to build AI at scale, only a handful of companies on the planet have the capacity to create it.
 See AI Now’s 2018 report for more on this: https://ainowinstitute.org/AI_Now_2018_Report.pdf
 My recent Congressional testimony expands on these points: https://republicans-science.house.gov/legislation/hearings/full-committee-hearing-artificial-intelligence-societal-and-ethical, as does AI Now’s Discriminating Systems report: https://ainowinstitute.org/discriminatingsystems.pdf