Google CEO peddles #AIhype on CBS 60 Minutes
Another tweet thread turned into a blog post, to keep it all in one place, reacting to this tweet/clip from CBS 60 Minutes (as flagged by Melanie Mitchell):
This is so painful to watch. @60Minutes and @sundarpichai working in concert to heap on the #AIHype. Partial transcript (that I just typed up) and reactions from me follow:
Reporter: “Of the AI issues we talked about, the most mysterious is called ‘emergent properties’. Some AI systems are teaching themselves skills that they weren’t expected to have.” “Emergent properties” seems to be the respectable way of saying “AGI”. It’s still bullshit.
As @mmitchell_ai points out (read her whole thread; it’s great), if you create ignorance about the training data, of course system performance will be surprising.
Reporter: “How this happens is not well understood. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know.” Is there Bangla in the training data? Of course there is.
Unidentified interviewee: “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali.” What does “all of Bengali” actually mean? How was this tested?
Later in the clip @sundarpichai says: “There is an aspect of this which we call, all of us in the field, call it as a black box. You know, you don’t fully understand, and you can’t quite tell why it said this or why it got it wrong. […]”
Reporter: “You don’t fully understand how it works, and yet you’ve turned it loose on society?” Pichai: “Let me put it this way: I don’t think we fully understand how a human mind works, either.” Did you catch that rhetorical sleight of hand?
Why would our (presumably scientific) understanding of human psychology or neurobiology be relevant here? The reporter asked why a company would be releasing systems it doesn’t understand. Are humans something that companies “turn loose on” society? (Of course not.)
The rhetorical move @sundarpichai is making here invites the listener to imagine Bard as something like a person, whose behavior we have to live with or maybe patiently train to be better. IT. IS. NOT.
More generally, any time an AI booster makes this move (“we don’t understand humans either”), they’re either trying to evade accountability or trying to sell their system as some mysterious, magical, autonomous being. Reporters should recognize this and PUSH BACK.
Still later in the clip, regarding a short story that Bard produced, which the reporter found moving, the reporter asks: “How did it do all of those things if it’s just trying to figure out what the next word is?”
Pichai responds: “I’ve had those experiences talking with Bard as well. There are two views of this. You know, there are a set of people who view this as, ‘Look, these are just algorithms. It’s just repeating what it’s seen online.’ Then there’s the view where these algorithms are showing emergent properties: to be creative, to reason, to plan, and so on, right? And personally, I think we need to be, we need to approach this with humility.”
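For readers wondering what “trying to figure out what the next word is” refers to: it isn’t a metaphor, it’s literally the training objective. Here is a minimal, purely hypothetical sketch of next-word prediction, a toy bigram counter in Python. It is nothing like Bard’s actual architecture (real systems use transformer networks over subword tokens trained on vastly more text), but the objective has the same shape: given context, score candidate next words and emit one.

```python
# Toy sketch of "figuring out what the next word is" (a bigram model).
# Illustration of the objective's shape only; Bard/PaLM use transformer
# networks over subword tokens, not lookup tables like this.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word . "
    "the model repeats what it has seen . "
    "the reporter found the story moving ."
).split()

# Count how often each word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Return the continuation most often seen in training."""
    candidates = following.get(prev)
    if not candidates:
        return "."  # unseen context: just end the sentence
    return candidates.most_common(1)[0][0]

# "Generate" text by repeatedly asking for a likely next word.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # -> "the model predicts the model predicts ..."
```

Scale and clever sampling make the output fluent rather than this toy’s repetitive loop, but the mechanism remains “pick a plausible next word given the context.” Nothing in that objective requires creativity, reasoning, or planning, which is exactly why the “emergent properties” framing deserves scrutiny.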
You know what approaching this with humility would mean, @sundarpichai? It would mean not talking about “emergent properties”, which is really a dog whistle for “AGI”, which in turn is code for “we created a god!” (Or maybe “We are god, creating life”.)
Approaching this with humility would mean not putting out unscoped, untested systems (h/t @timnitGebru in her recent presentation with Émile Torres) and just expecting the world to deal. It would mean taking into consideration the needs and experiences of those your tech impacts.
Postscriptum: I’ve learned since from Talia Ringer (who watched the full episode) that the reporter uses the phrase “Artificial General Intelligence” — so we’re definitely in that territory.