How Google Fired Its Employee Tells Us We Need to Regulate Artificial Intelligence

Polis: Center for Politics
4 min read · Jun 7, 2022


Jennifer Ouyang (PPS ’24)


Almost a year ago, Google fired the co-lead of its AI ethics team, Timnit Gebru, for co-authoring a research paper on bias in AI language models. While Google claimed the paper “didn’t meet our bar for publication,” it simply examined the dangers of natural language processing, a topic tech giants too often ignore in their pursuit of profit. The episode is a flagrant indication that we cannot rely on tech giants, given their monetizing incentives, to hold themselves accountable for the ethical development of AI and related products. We need governmental regulation of the development and implementation of AI to protect consumers from algorithmic discrimination and bias.

Data is the future, and artificial intelligence (AI) is one of the fastest-growing markets in the United States. The market is expected to reach $126 billion by 2025, touching every phone and internet user. In the heat of AI development, many scholars are raising questions about the ethical challenges facing AI and its immense potential to discriminate against millions of consumers on the basis of race, gender, and religious belief. If left unchecked, algorithmic bias could be our future.

Timnit Gebru is a prominent researcher in the field of AI ethics. In the paper that cost her her job at Google, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, she and her co-authors examined the dangers of AI and language models. From the initial selection of training datasets to the end product, the process of developing AI is fraught with the risk of producing violent, racist, and discriminatory outcomes.

The paper points to a problem in the way these models are trained: on massive but unfiltered datasets that normalize racist and discriminatory language. Such a model intrinsically favors a Western-centric, homogeneous narrative, that of the majority that leaves the largest “linguistic footprint” online. This is the case for GPT-3, a language generator that has produced hateful and discriminatory comments when prompted with sensitive keywords. For all the intelligence of large language models and the ease with which they mimic human language, it is shocking to see how insensitive they are to racism and violence.

The language and words we use change in close correlation with ongoing social movements and societal shifts. This is especially problematic when considering the scalability of AI and computers. As both the main source of information in our daily lives and the platforms where we socialize online, giant tech companies hold immense control over the trajectory of our vernacular. This means the power to influence public opinion rests in the hands of a few powerful individuals incentivized purely to maximize their capital gains.

Countries in Europe already recognize the danger of unrestrained AI development. On April 21, 2021, the European Union proposed a regulation on AI that would require a series of conformity assessments for high-risk AI systems before they can be offered on the market. The proposal also frames risk management as a continuous process, with a monitoring system that evaluates an AI system’s performance even in the post-market phase.

Current policy in the United States fails to address algorithmic bias. In fact, there is still no government regulation of, or oversight over, big tech’s deployment of AI models. The federal Algorithmic Accountability Act was introduced in Congress in 2019, but it was never enacted.

The government and the Federal Trade Commission (FTC) should work toward a regulatory framework for AI that protects consumers’ rights and their freedom from discrimination. We need to follow in the footsteps of, if not surpass, the EU. To ensure algorithmic transparency, we need to monitor algorithms closely, from their training datasets to a series of assessments of the end product. The FTC has jurisdiction over commercial entities through a complex patchwork of laws. With newly appointed FTC Chair Lina Khan, a prominent voice calling for aggressive regulation of big tech monopolies, we can hopefully anticipate a future in which the AI market is more heavily and fairly regulated.

We need to recognize and applaud the courage of Timnit Gebru in standing up against the malpractice of tech giants. If preserving individuals’ basic right to equality is still in the government’s interest, then we need regulations that keep these tech giants in check.

Jennifer Ouyang (PPS ‘24) is a Public Policy Undergraduate at Duke University’s Sanford School of Public Policy. This piece was submitted as an op-ed in the Spring ’22 PUBPOL 301 course. This content does not represent the official or unofficial views of the Sanford School, Polis, Duke University, or any entity or individual other than the author.
