Google Prepares For The Content Ouroboros

M.H. Williams
Into The Discourse

--

What happens when AI search summarizes content that was already written by AI? We’re going to find out soon, I’d hazard. According to The New York Times, Google is currently testing a product called Genesis, a tool for generating news content: feed it details about current events, and it writes news stories from them.

Google sees Genesis as more of a personal assistant for journalists, one that automates some tasks. “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide A.I.-enabled tools to help their journalists with their work,” said Google spokesperson Jenn Crider in a statement to the NYT. “Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

Which tasks? Google points to headline options and “other writing styles,” though I’d point out you can already do that with Bard or ChatGPT. Google has pitched Genesis to several high-profile news organizations, including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp. If those outlets decide to bring Genesis into their newsrooms, the implications of such a move loom large.

Google itself points to Genesis taking on headlines and alternate writing styles, tasks that would generally fall to editors and writers. I think that’s because it knows Genesis probably can’t handle things like research and actual writing.

In terms of research, it’s well known that generative AI is prone to “hallucinations,” the euphemistic term for chatbots outright making up facts and lying. AI has no understanding of context or fact. There’s a story in this week’s newsletter about AI-generated newsbots being unable to tell when they’re being faked out, a tactic that will only become more common.

We already have stories about the real-world implications of AI lies as well, whether that’s veteran lawyers using ChatGPT to file a legal brief full of fake cases, the professor who found himself accused of sexual assault during some research, or an Australian mayor who is suing for defamation because ChatGPT accused him of being part of a bribery scandal. AI lies, and it does so with great frequency and bravado.

What about actually writing? Let’s say you have three major news organizations all using the same tool for their standard news writing. All boilerplate daily news goes into Genesis, which spits out similar stories for each outlet. First, you lose any sort of differentiation in the news from outlet to outlet. An outlet’s voice is the combination of its editors and writers as a group, and here you’d be adding the same tool into the pipeline at each place. You’re diluting each site’s overall voice. That’s even before you get to the personnel implications: how are junior writers supposed to get to the point of being veterans and proper reporters if they aren’t getting a chance to actually write?

The final thought I want to leave you with is the one mentioned in the hed of this essay. Remember, Google and Microsoft are adding generative AI to Search and Bing. Google’s iteration of this is the Search Generative Experience (SGE), which looks to replace the classic “ten blue links” you generally see in a search. SGE scrapes publishers’ original articles and then generates an answer to a query. The issue is that the answer doesn’t forward users to those original sources, and it pushes the articles themselves down in the search results.

The “content ouroboros” is Google’s SGE scanning a bunch of news articles all written by Google’s Genesis and then developing a summary based on those articles. A machine summarizing a machine’s work. It’s processed gruel all the way down, without proper context, fact checking, or distinct style. What happens when internet news eats itself?
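To make that loop concrete, here’s a deliberately crude toy model in Python. This is my own illustration, not a description of how SGE or Genesis actually work: treat an article as an ordered list of facts, and let each pass of “summarization” keep only the most prominent half. A detail dropped in one round can never resurface in the next.

```python
# Toy model of the content ouroboros: each round, a summarizer keeps only
# the most prominent facts from the previous round's output. This is an
# illustration of lossy iterative summarization, not any real Google system.

def summarize(facts: list[str], keep_ratio: float = 0.5) -> list[str]:
    """Keep the top slice of facts, as a stand-in for a lossy summary."""
    keep = max(1, int(len(facts) * keep_ratio))
    return facts[:keep]

# Ten details a human reporter might put in an original story,
# ordered roughly by prominence.
article = [f"fact {i}" for i in range(1, 11)]

generation = article
for round_num in range(1, 5):
    generation = summarize(generation)
    print(f"round {round_num}: {len(generation)} facts survive")

# round 1: 5 facts survive
# round 2: 2 facts survive
# round 3: 1 facts survive
# round 4: 1 facts survive
```

After a few rounds, only the single most prominent fact remains, and no amount of further summarizing brings the rest back. Real systems are messier than this sketch, but the direction of travel is the same: summaries of summaries shed detail.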

I don’t have a problem with generative AI as a tool, per se. (I’ve played around with the AI tools in Google Docs, though I haven’t been impressed with the results.) As an add-on to journalism, AI can be genuinely useful: checking spelling, transcribing interviews, handling small rewrites here and there, or offering alternate headlines. But those are small additions to the process of research, writing, and editing that sits at the heart of journalism. Providing correct information, and the context surrounding that information, is the whole job, because news doesn’t exist without the context.

At the end of the day, I want a person who knows the subject matter telling me about the news. And if The New York Times, The Washington Post, and The Wall Street Journal do go this route, there’s space for smaller outlets to rise by providing bespoke, well-researched daily news. If AI is doing all the writing, unique differentiation, likely from a human writer, becomes a selling point.

This was the essay portion of my weekly newsletter, Stuff Worth Knowing. Every week, I round up the most important news across film, television, video gaming, and tech. If you just want the essays, I’ll be posting them here in the future, but if you want the full news round-up, you can subscribe to Stuff Worth Knowing for free! (Or chuck in a little money.)
