Policy makers: Please don’t fall for the distractions of #AIhype
Below is a lightly edited version of the tweet/toot thread I put together on the evening of Tuesday, March 28, in reaction to the open letter put out by the Future of Life Institute that same day.
Okay, so that AI letter signed by lots of AI researchers calling for a “Pause [on] Giant AI Experiments”? It’s just dripping with #AIhype. Here’s a quick rundown.
The letter can be found here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.
For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
So that already tells you something about where this is coming from. This is gonna be a hot mess.
There are a few things in the letter that I do agree with, and I’ll try to pull them out of the dreck as I go along. With that, into the #AIhype. It starts with “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research”.
Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But in that paper, we are not talking about hypothetical “AI systems with human-competitive intelligence”. We’re talking about large language models.
And as for the rest of that paragraph: Yes, AI labs are locked in an out-of-control race, but no one has developed a “digital mind” and they aren’t in the process of doing that.
Could the creators “reliably control” #ChatGPT et al? Yes, they could — by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.
Could folks “understand” these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we’d be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.
Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT-4. ROFLMAO.
On the “sparks” paper, see:
On the GPT-4 ad copy, see:
And on “generality” in so-called “AI” tasks, see: Raji et al. 2021. AI and the Everything in the Whole Wide World Benchmark from NeurIPS 2021 Track on Datasets and Benchmarks.
I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.
Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they’re really building AI will consider it framed like this?
Just sayin’: We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about “too powerful AI”.
Instead: They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).
They then say: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
Uh, accurate, transparent and interpretable make sense. “Safe”, depending on what they imagine is “unsafe”. “Aligned” is a codeword for weird AGI fantasies. And “loyal” conjures up autonomous, sentient entities. #AIhype
Some of these policy goals make sense:
Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you’ve encountered synthetic text, images, voices, etc.)
Yes, there should be liability — but that liability should clearly rest with people & corporations. “AI-caused harm” already makes it sound like there aren’t *people* deciding to deploy these things.
Yes, there should be robust public funding, but I’d prioritize non-CS fields that look at the impacts of these things over “technical AI safety research”.
Also: “the dramatic economic and political disruptions that AI will cause”. Uh, we don’t have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment).
Policymakers: Don’t waste your time on the fantasies of the techbros saying “Oh noes, we’re building something TOO powerful.” Listen instead to those who are studying how corporations (and governments) are using technology (and the narratives of “AI”) to concentrate and wield power.
Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Costanza-Chock and journalists like Karen Hao and Billy Perrigo.
Update 3/31/23: The listed authors of the Stochastic Parrots paper have put out a joint statement responding to the open letter.