I asked Meta.ai for a picture of technology doomsters looking worried; it presumed the white male.

Demote the Doomsters

The ultimate futility of guardrails on AI — or us

Jeff Jarvis
4 min read · May 21, 2024


This paper in Science on “managing extreme AI risks amid rapid progress,” with 25 co-authors (Harari?), is getting quick attention. The paper leans heavily toward AI doom, warning of “an irreversible loss of human control over AI systems” that “could autonomously deploy a variety of weapons, including biological ones,” leading if unchecked to “a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.”

Deep breath.

Such doomsaying is itself a perilous mix of technological determinism and moral panic. There are real, present-tense risks associated with AI — as the Stochastic Parrots paper by Timnit Gebru, Margaret Mitchell, Emily Bender, and Angelina McMillan-Major carefully laid out — involving bias in input and output, anthropomorphization and fraud (just listen to GPT-4o’s saccharine voice), harm to the human workers who clean data, and harm to the environment. The Science paper, on the other hand, glosses over those current concerns to cry doom.

That doomsaying makes many assumptions.

It concentrates on the technology over the human use of it. Have we learned nothing from the internet? Its problems have everything to do with human misuse.


Blogger & prof at CUNY’s Newmark J-school; author of Geeks Bearing Gifts, Public Parts, What Would Google Do?, Gutenberg the Geek