Talking about a ‘schism’ is ahistorical

Emily M. Bender
Jul 5, 2023


In two recent conversations with very thoughtful journalists, I was asked about the apparent ‘schism’ between those making a lot of noise about fears inspired by fantasies of all-powerful ‘AIs’ going rogue and destroying humanity, and those seeking to illuminate and address actual harms being done in the name of ‘AI’ now and the risks that we see following from increased use of this kind of automation. Commentators framing these positions as some kind of a debate or dialectic refer to the former as ‘AI Safety’ and the latter as ‘AI ethics’.

In both of those conversations, I objected strongly to the framing and tried to explain how it was ahistorical. I want to try to reproduce those comments in blog form here.

Photo of Split Apple Rock off the coast of Aotearoa/New Zealand https://en.wikipedia.org/wiki/Split_Apple_Rock The rock is large, round and grey and split in two. It is surrounded by blue water. The background has white clouds low down but clear sky up above. There is a bird sitting on the right-hand half of the rock.
Split Apple Rock CC BY-SA-2.0 Rosino

The problem with the ‘schism’ framing is that to talk about a ‘schism’ is to talk about something that once was a whole and now is broken apart — authors that use this metaphor thus imply that such a whole once existed. But this is emphatically not a story of a community that once shared concerns and now is broken into disagreeing camps. Rather, there are two separate threads — only one of which can properly be called a body of scholarship — that are being held up as in conversation or in competition with each other. I think this forced pairing comes in part from the media trying to fit the recent AI doomer PR pushes into a broader narrative and in part from the fact that there is competition for a limited resource: policymaker attention.

Scholarship and activism

The first thread, the one which can properly be said to include a body of scholarship, consists of work that comes from several positions: There are academics from a variety of fields looking at how the application of pattern matching at scale impacts people and social systems. There are researchers and others within industry seeking to improve things from the inside (and struggling upstream against the profit motive and the ‘move fast and break things’ modus operandi). There are investigative journalists working to document and expose the harms done in the development and deployment of this tech. There are community organizers and activists leading resistance to immediate harms such as the deployment of ‘AI’ in ever-expanding surveillance applications. And many individual people have moved across one or more of those roles.

I don’t want to imply that the folks I am describing here form one coherent community. It’s messy. There is emphatic contestation of framings. The unequal distribution of power (financial and otherwise) shapes what people can see, who gets listened to, and who is willing to cross what boundaries. What is in common across this group, however, is an engagement with actual harms to real people and our world.

To give just some highlights of this thread, I’d like to point to:

  • Latanya Sweeney’s 2013 article Discrimination in Online Ad Delivery, documenting how pay-per-click advertising with templatic generation of in-context ads (“Find out more about [NAME]” and “Has [NAME] ever been arrested?”) served up suggestions of criminal history much more frequently with African American-sounding names than with white-sounding names — regardless, of course, of actual arrest history.
  • Safiya Noble’s 2018 book Algorithms of Oppression, showing how advertising-driven information access systems (Google in particular) offered up whole identities (notably but not only “Black girls”) for sale, and then counterfactually portrayed the resulting system (“Black girls” as a search term leading to pornography) as “organizing the world’s information”, i.e., just reflecting how the world is.
  • Joy Buolamwini & Timnit Gebru’s 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, which exposed how computer classification of faces had differentially poorer performance for darker-skinned people — and Deb Raji & Joy Buolamwini’s 2019 follow-up Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, which explored the impact of doing such a public audit.
  • The work of Tawana Petty, Joy Buolamwini (through the Algorithmic Justice League) and others since then, shifting the focus from bias in face recognition technology to resisting (and working towards public policy prohibiting) its use in surveillance.
  • Cathy O’Neil’s 2016 book Weapons of Math Destruction and Virginia Eubanks’s 2018 book Automating Inequality, which show how statistical models used as decision-making systems entrench historical unfairness while displacing accountability. J. Khadijah Abdurahman’s 2022 essay Birthing Predictions of Premature Death gives a detailed example and is a stunningly beautiful and stunningly damning account of family policing, of the lack of protections against data collection in our country, & of the mindset of tech solutionism that attempts to remove ‘fallible’ human decision makers.
  • Karën Fort, Gilles Adda & K. Bretonnel Cohen’s 2011 paper Last Words: Amazon Mechanical Turk: Gold Mine or Coal Mine?, an early work pointing to the exploitative nature of crowdworking (microworking) platforms. This theme is further developed in Veena Dubal’s work, in Mary Gray & Siddharth Suri’s 2019 book Ghost Work, and in recent key investigative journalism by Karen Hao, Billy Perrigo and Josh Dzieza. Initiatives such as Turkopticon seek to help click workers build power and push back against such exploitation.
  • Ruha Benjamin’s 2019 book Race After Technology and Meredith Broussard’s 2023 book More Than a Glitch vividly locate the discrimination baked into these systems as a feature, given what they are created to do and by whom, rather than a ‘glitch’ or a ‘bug’. (See also Deb Raji’s 2020 essay How our data encodes systematic racism.)

There’s of course much more that could be recommended — and a much deeper history than is apparent from that list (informed as it is by my own entry into this field in around 2016 from computational linguistics), including the IBM Black Workers Alliance (1970–early 1990s), Computer People for Peace (1968–1974) and others.

My goal here is simply to give a sense of the depth of this work, how it is rooted in lived experience and in many cases actively engaged in addressing, not just documenting, problems, and the diversity of perspectives that it comes from. Mia Dand gives another overview in The AI Ethics Revolution — A Brief Timeline. The Radical AI Podcast and Tech Won’t Save Us are also excellent introductions to many of these topics.

Fantasies of white supremacy

Against the richness, the groundedness, and the urgency of the body of scholarship, journalism and activism, the other thread (called ‘AI Safety’) is thin, flighty, and untethered. It’s untethered from reality, untethered from lived experience, and untethered from scholarly tradition — in stark contrast to the multidisciplinary work described above, the bulk of the citations within ‘AI Safety’ writing are to a closed circle of mostly non-peer reviewed papers and blog posts.

On the one hand, it comes out of a vision of ‘artificial intelligence’ that takes ‘intelligence’ as a singular dimension along which humans can be ranked — alongside computers, ranked on the same dimension. This vision of ‘intelligence’ is rooted in the notably racist notions of IQ (and its associated race science).

For example, take the recent “Sparks of AGI” paper (non-peer-reviewed speculative fiction novella, uploaded to arXiv) from Microsoft Research. The first version of this paper took its definition of ‘intelligence’ from a 1997 WSJ editorial written in support of Herrnstein & Murray’s 1994 The Bell Curve. It appears that none of the authors on “Sparks of AGI” had actually read to the second page of the WSJ editorial, where the overtly racist claims that Black and Latinx people are (on average) less ‘intelligent’ than white people can be found. Once it was pointed out, the “Sparks” authors responded by editing their arXiv novella first to disavow the racism in the 1997 editorial and then to remove the citation altogether. This, of course, left “Sparks” without a definition of the thing it claimed to be seeking (and finding!) in GPT-4 output.

It gets worse, though. Those caught up in the fantasy of ‘AGI’ (‘artificial general intelligence’) and/or ‘ASI’ (‘artificial superhuman intelligence’) imagine that not only are they making machines that can join humans in their ranking on this scale of ‘intelligence’ but that they are creating godlike entities that will supersede humans. They imagine that this could be very good (e.g., Gary Marcus telling Ezra Klein that we need AI to ‘help’ us find a cure for Alzheimer’s or handle climate change) or very bad (AI going rogue and killing us all).

Geoffrey Hinton went on a media tour to ‘sound the alarm’ about AI after quitting Google earlier this year. When a CNN journalist asked if he wished he had stood behind previous whistleblowers such as Timnit Gebru when she was forced out of Google (over the Stochastic Parrots paper), he said that “their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Synthetic media creating non-consensual porn is existentially serious to its targets. Automated decision systems denying social benefits are existentially serious to those left without necessary supports. ShotSpotter and similar technologies that send police in with the idea that they are encountering a live shooter situation are existentially serious to the Black and brown people in the path of the police. False arrests mediated by face recognition system errors are existentially serious to the people arrested.

Hinton appears to be speaking from a mindset where violence is only worth speaking up over if it would kill off the entire human race. (Though elsewhere he suggests that climate change is an easier problem to solve, and therefore also not as serious?) It seems not irrelevant that Hinton holds enough privilege that he (and people like him) will be unlikely to experience harm from ‘AI’ unless everyone else on the planet does first.

This viewpoint — the focus only on ‘existential risk’, i.e., scenarios where all humans die — is threaded through a very bizarre ideology called ‘Longtermism’ which holds that humanity’s destined future is to live as simulations uploaded onto computers running on planets across the galaxy. Proponents of this ideology (who have obviously never worked in IT support) perform a utilitarian computation and argue that the happiness of those future beings (quantified, by pulling numbers out of their asses, as 10⁵⁸) outweighs whatever suffering could possibly befall today’s mere billions of people. All that matters, for them, is that this imagined future be brought about.

It is at this point that I have to reassure the audience that I am not making this up. It’s all nonsense, of course, but it is true that people out there (who furthermore control large amounts of money, though less post-FTX collapse) have articulated and continue to espouse these ideas. In other words, I’m not making this up, but they did.

Timnit Gebru and Émile Torres have done excellent work in tracing what they call the TESCREAL bundle of ideologies, connecting modern work on “AGI” with Longtermism and ultimately eugenics. For an accessible overview I highly recommend this talk presented by Gebru at SaTML earlier this year.

No point in building bridges

When I laid all of this out to one of the journalists mentioned at the top of this post, she asked me if it was unfair to paint everyone who identifies with the ideas of ‘AI Safety’ with the same brush, to call them all racist. She also asked if I saw any way to build bridges between the two groups outlined above.

Let’s pause for a moment on the idea of it being ‘unfair’ to call someone racist. Frequently in US (and probably other) discourse around race, people behave as though ‘accusations’ of racism are as harmful as racist acts (such as might be described in such ‘accusations’). On the one hand, this acknowledges that racism is bad! So that’s a first step. On the other hand, it completely misses the point. Someone who genuinely wanted to work against racism (and any other system of oppression), upon learning that something they did contributes to (builds on, reinforces) a system of oppression, would be well served to investigate that further and see what needs to be done to stop the harm.

Furthermore, especially if one is working in a tradition (like ‘AI’) with racism at its roots, it’s not enough to quietly go along to get along. I told the journalist I have yet to find an ‘AI Safety’ person who puts time and effort into lifting up the voices of those scholars and activists (including many, many brilliant Black women) who have been addressing the real harms happening to real people now, or seriously addresses the racism at the heart of the ideologies ‘AI Safety’ draws on.

Finally, just like talking about a ‘schism’ is ahistorical, talking about ‘building bridges’ suggests that there is a rift to be healed or crossed — and that there is something of value on both sides of the rift, such that people from each side would benefit from visiting the other.

I hope that this post has made clear why those metaphors are inappropriate in this context. ‘AI Safety’ might be attracting a lot of money and capturing the attention of policymakers and billionaires alike, but it brings nothing of value. The harms being perpetrated today in the name of ‘AI’, through surveillance, inappropriate automation, displacement of accountability, labor exploitation, and further concentration of power, are urgent and demand attention (both academic and political). Setting up the work of the scholars, journalists, and activists like those I point to above as somehow equivalent to the ‘AI Safety’/TESCREAL clown car both devalues the work of the former and facilitates the latter’s grab for policymaker attention.

With thanks to Timnit Gebru, Alex Hanna and Meg Mitchell for feedback & discussion.


Written by Emily M. Bender

Professor, Linguistics, University of Washington// Faculty Director, Professional MS Program in Computational Linguistics (CLMS) faculty.washington.edu/ebender
