Glue on a pizza? Here is how Google explained Gemini's mess-up.
You've probably seen plenty of images like the one in the screenshot above.
Just to recap in case you missed it:
A couple of weeks ago, Google announced they'd bring AI Overviews to everyone in the US.
How it works: you type a query or question, and Google shows a generated text answer at the top of the page, before the regular SERP.
(Just like ChatGPT).
There was a lot of worry about it in the SEO community too, by the way.
However, it works differently from chatbots and other LLM products: it isn't simply generating output based on training data.
The model is integrated with Google's core web ranking systems and designed to perform traditional "search" tasks, like identifying relevant, high-quality results from the index.
That's why AI Overviews don't just provide text output, but also relevant links so people can learn more.
Accuracy is the best friend of search, and Google's AI Overviews are built to only show information backed up by top web results.
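To make the difference from a plain chatbot concrete, here is a minimal toy sketch of that retrieval-grounded idea: the "overview" is only allowed to draw on the top-ranked results, and it returns the source links alongside the text. This is an illustration under assumed names (`rank`, `ai_overview`, keyword-overlap scoring), not Google's actual system.

```python
import re

def tokens(text):
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def rank(query, docs):
    """Naive ranking: sort documents by keyword overlap with the query."""
    terms = tokens(query)
    return sorted(docs, key=lambda d: len(terms & tokens(d["text"])), reverse=True)

def ai_overview(query, docs, top_k=2):
    """Build a summary grounded ONLY in the top-k ranked results,
    and return the source links so readers can verify it."""
    top = rank(query, docs)[:top_k]
    summary = " ".join(d["text"] for d in top)
    links = [d["url"] for d in top]
    return summary, links

docs = [
    {"url": "https://example.com/pizza", "text": "Cheese sticks better when the sauce is not too watery."},
    {"url": "https://example.com/forum", "text": "Just add glue to the sauce."},  # the joke comment
    {"url": "https://example.com/cars",  "text": "Car maintenance tips."},
]

summary, links = ai_overview("why is cheese not sticking to my pizza sauce", docs)
print(links)  # the relevant pizza page ranks first, but the forum joke still makes the cut
```

The glue incident is exactly this failure mode: grounding guarantees the answer came from a top result, not that the top result was serious.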
But hereās what happened:
You've probably noticed that Reddit threads began to rank high on Google, exactly as Medium and LinkedIn articles did a few months ago.
Now itās Redditās turn.
Search for almost anything, and there is a high chance you will see a Reddit thread among the top 10 results on the SERP.
Thus, high-ranking content is more likely to get added to AI Overviews.
Long story short, one user typed the query "cheese not sticking to pizza", and AI Overview suggested adding some glue.
The funny part?
It turned out the source was an 11-year-old comment from a user with the nickname "F*cksmith".
We didn't have to wait long for Google's clarification. Here is what they said:
1. Gemini had some slip-ups in understanding online language and context. It didn't invent wild answers but sometimes misread the tone or sarcasm on websites.
2. Gemini works by linking with Google's main search systems to find relevant info but mistakenly treated advice like using glue on pizza or other bizarre suggestions as serious answers.
3. Some things were made up by users. They mentioned fake posts suggesting dangerous advice, like leaving dogs in hot cars, which were not true.
4. Google has already made changes to the system and identified errors. From now on, systems will limit their use of forum content that can mislead users.
5. They also developed a mechanism that should be able to recognize satire and irony, as well as identify "meaningless queries that shouldn't be shown".
So, let's see how these changes improve our search experience.
Your thoughts? Has AI impacted your searches lately?