What’s up with Google’s ethics?

Maya
Published in ACM at UCLA
8 min read · May 19, 2021

Disclaimers: ACM at UCLA receives funding from the UCLA Computer Science department, which is in part funded by Google. This blog post also does not necessarily reflect the opinions of ACM at UCLA or the UCLA CS department.

[Image: Google building on a sunny day with bikes in front]

Back in late 2020, the firing of prominent AI ethics researcher Dr. Timnit Gebru roiled the ethical AI community. Gebru, who co-founded the Black in AI affinity group and has long been a champion of diversity in the tech industry, co-led Google’s AI Ethics team, a diverse group that produced groundbreaking work challenging many mainstream AI practices. She is well known for Gender Shades, the seminal study co-authored with AI ethics researcher Joy Buolamwini, which found facial recognition to be significantly less accurate (think: coin-flip accuracy) at identifying women and people of color, particularly dark-skinned women. Buolamwini’s written testimony was used during a Congressional hearing on the effects of facial recognition technology on civil rights and liberties, particularly in the hands of law enforcement (check out her awesome TED Talk here!).

The termination happened after Gebru co-authored a paper with six others, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (layman’s version here). The paper, which contained over 100 references and presented no new experiments, described how training a large natural language processing (NLP) model, such as this 1-trillion-parameter language model or Google Search’s BERT, just one time can produce as much carbon dioxide as five cars do over their lifetimes; models typically get trained many times during development. It also covered the dangers of training language models on sexist and racist language, as well as the ethics of using “well-trained” (i.e., accurate at recognizing and generating language) models to fool people, such as by spreading misinformation about elections or the COVID-19 pandemic.
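To give a sense of where such emissions claims come from, here is a minimal back-of-envelope sketch of the standard estimation approach: hardware energy use, scaled by datacenter overhead, multiplied by the grid’s carbon intensity. Every number below (accelerator count, power draw, training time, PUE, grid intensity) is an illustrative assumption, not a figure from the Stochastic Parrots paper or from Google.

```python
# Back-of-envelope CO2 estimate for a single training run of a large language
# model. Every number here is an illustrative assumption, not a figure from
# the Stochastic Parrots paper or from Google.

NUM_ACCELERATORS = 512   # GPUs/TPUs used in the run (assumed)
POWER_KW_EACH = 0.3      # average draw per accelerator, ~300 W (assumed)
TRAINING_DAYS = 14       # wall-clock training time (assumed)
PUE = 1.6                # datacenter power usage effectiveness (assumed)
KG_CO2_PER_KWH = 0.43    # grid carbon intensity (assumed)

energy_kwh = NUM_ACCELERATORS * POWER_KW_EACH * TRAINING_DAYS * 24 * PUE
co2_tonnes = energy_kwh * KG_CO2_PER_KWH / 1000  # kg -> metric tonnes

# Strubell et al. (2019), the study behind the "five cars" comparison, put an
# average car's lifetime emissions (fuel included) at roughly 57 tonnes.
car_lifetimes = co2_tonnes / 57

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"CO2 emitted: {co2_tonnes:,.1f} tonnes (~{car_lifetimes:.1f} car-lifetimes)")
```

And that is one run; multiply by the retrainings and hyperparameter sweeps a model goes through during development, and the totals climb quickly.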

Sidenote: one criticism from famed Google AI lead Jeff Dean, who oversaw Gebru’s firing, was that the paper didn’t present enough information on how to mitigate the aforementioned issues. Interestingly enough, in April 2021 Google researchers (including Jeff Dean) published a paper, “Carbon Emissions and Large Neural Network Training,” which cites the Stochastic Parrots paper (after being roasted on Twitter for initially not including the citation).

Additionally, Gebru’s termination was allegedly expedited after she sent an email to the internal group Google Brain Women and Allies expressing her frustration with the treatment of minority workers at Google and the inefficacy of equity, diversity, and inclusion work at the company. Both she and her AI Ethics co-lead, Dr. Margaret Mitchell, had also faced pushback from the company for flagging issues such as colleagues accused of sexual harassment and the gender disparity among Google employees. An online petition protesting Gebru’s firing and calling for increased transparency and reforms to Google’s research process was signed by over 2,500 Google employees and 4,000 other supporters.

The anger resurfaced when, after being locked out of her corporate accounts for five weeks, Margaret Mitchell (who founded the AI Ethics team) was fired on February 19, 2021. On the same day, Google announced that it had finished its internal investigation into the ousting of Gebru and was beginning to institute changes to its research, diversity, and employment practices, such as tying executive pay to diversity and inclusion goals, streamlining the process for publishing research, and enacting new procedures around sensitive employee exits (many of these changes had been suggested by Dr. Gebru during her time at Google). A happy ending? Not quite. Other members of the AI Ethics team, such as Dr. Alex Hanna, have been harassed online and via email by others in the company. In April 2021, another prominent AI researcher, Samy Bengio, left his management position at Google Brain; though he did not explicitly mention Gebru in his departure email, he lamented the difficulty of creating a diverse and inclusive team. Bengio had allegedly been stripped of his management duties over the AI Ethics team back in February.

“Don’t Be Evil”

Information about everything above can be found in the hundreds of articles that graced the Internet in December and again in February, and I encourage you to read more about it; there are many nuances about AI, the prominence of Dr. Gebru, and the research process that one Medium opinion piece cannot capture after a couple of hours of research. From afar, however, Google seems to have a pattern of firing and/or using scare tactics on employees whose work and values contradict its business practices: whether Dr. Gebru; Rebecca Rivers, who protested Google’s bidding on a contract with Trump’s Customs and Border Protection; Meredith Whittaker and Claire Stapleton, who led the famous 2018 walkouts protesting, among other issues, Google’s treatment of sexual harassment allegations; or former Head of International Relations Ross LaJeunesse, who was effectively removed for one too many ethics complaints.

Groundbreaking as Dr. Gebru’s work in AI ethics was, her termination and those of others raise questions about whether true institutional change and accountability can come from companies whose profits depend on the deployment and expansion of products like energy-hungry NLP models that can be used for misinformation or dangerous mistranslation. Corporate statements and pathos-laden, flowery language are often used to distract from moneymaking decisions like removing community captions, firing prominent researchers, and entering contracts with oil producer Aramco and the Saudi Arabian government, often at the expense of consumers or employees from marginalized communities.

Although Google has committed to doubling the size of its AI Ethics team, now led by Marian Croak, to 200 researchers, and CEO Sundar Pichai has promised to investigate the circumstances of the departures, faith in Google’s research practices will likely not be fully restored. The choice of one Black woman to replace another is not lost on those aware of tokenization practices at companies (yes, of course we want Black women in executive positions at tech companies, but when only 2.3% of Google’s workforce is made up of Black women, and said employee worked shoulder-to-shoulder with the guy sharing AT&T data with the NSA, for the company to assume substituting one Black woman for another will placate the masses is insulting). Internal investigations conducted by Google on Google (especially on a household name like Jeff Dean) will not carry much weight in the public eye; one can also hopefully see the problem with an AI Ethics team curated by Google executives rather than by people like Timnit Gebru and Margaret Mitchell, whose leadership produced one of the most diverse teams at Google. Until Google is more transparent about its termination processes, allows and encourages researchers to produce work that criticizes the company’s business practices, prioritizes marginalized individuals both in hiring pipelines AND within the company, and actually prioritizes ethics, it will not win back the trust that Sundar Pichai so desperately wants.

The ripple effect

Google may not like it, but Dr. Gebru’s firing resulted in extended criticism of and pushback against the company. For example, affinity groups such as Queer in AI, Black in AI (co-founded by Gebru, linked above), and Widening NLP have removed Google as a sponsor, in response both to Gebru’s firing and to the firing of recruiter April Christina Curley (yet another Black woman fired from a company that Black and Native American women leave at the highest rates). The sacrifice of these groups and their dedication to resistance are admirable; the voices of coalitions of marginalized people, especially when coming together in intersectional resistance, are strong and unyielding. It raises the question: what can you, fellow student, do?

Speak up. Start small! Following people like Dr. Gebru and Anima Anandkumar, and others they support, will inevitably bring you further into the burgeoning circle of activist tech workers. Eventually, you will have a plethora of issues to talk about, I promise. On the other hand, if you’re feeling confrontational, join groups like “AI & Deep Learning Memes for Back-propagated Poets” and call out the sexist posts. Your Facebook notifications will be popping for days.

Make people uncomfortable. Call them out on their behavior, their bad jokes, their casual use of slurs. During job interviews, ask what the company does regarding diversity, inclusion, and racial justice. If the response is “Nothing right now, but we’re working on it,” walk out (if you can afford to). At the least, maybe put on an exaggerated look of disappointment. (It’s important to note here that you shouldn’t do this at the expense of your own safety and wellbeing; this advice is directed towards people in positions of privilege who have opportunities to stand up for marginalized communities.)

Support your peers. Especially your Black, Latinx, Native American, and LGBTQIA+ peers, or anyone who might not feel supported in monochromatic engineering schools, work environments, or tech spaces in general. Attend diversity-oriented town halls, and ensure your big events don’t conflict with or take away from diversity-oriented initiatives or events. Give your local queer hackathon the love, support, and marketing it deserves.

Redefine success. Many prospective and current computer science students can likely remember how, during application season, the major flex of many schools was the number of graduates working at company XYZ. Contrary to popular belief, however, the be-all and end-all of a computer science degree is not a career at Facebook, Amazon, Apple, Microsoft, or Google, and engineering schools need to stop treating it as such (check out social-impact-oriented tech opportunities at Impact Labs and impactful.org!). How about the defining characteristic of a school being how many students got to take on an independent project? Or how happy they were when they graduated? Yes, these metrics don’t really fit our typical definition of success, but perhaps that is exactly why we need them to combat the bright-lights syndrome of a Big Tech career.

Realize that tech can’t solve everything. We must hold companies accountable for their ethical practices both from within and from outside, through community organizing and advocacy for legislation that is informed about, and tough on, tech business practices. Google’s features supporting Black-owned businesses (which actually led to a rise in racist fake reviews), while great for PR, cannot replace community networks, mutual aid, advocacy, or voting (especially in local elections!).

Just because our products are only 0s and 1s doesn’t mean they are devoid of socioeconomic impact. As tech, including artificial intelligence, continues its rise in popularity, ethics must be wholly integrated with the code we write. The outspoken nature, commitment to values, and, above all, groundbreaking research of the Gebru- and Mitchell-led AI Ethics team have dropped a stone in a slightly turbulent pond. Even small acts of resistance add up to something bigger, and as students, the future is in our hands. It is our responsibility to ensure that the ripples Gebru, Mitchell, and others started continue to spread.

Thank you to Arjun Subramonian for their contributions to this blog post!


Maya
ACM at UCLA

computer science major by day, English major by night