The Perils of GenAI: Google’s Gemini Chatbot Spreads Super Bowl Falsehoods

Rakesh Sahani
Published in Predict
Feb 12, 2024


Google Gemini AI

Stranger things have happened: Google’s Gemini chatbot, formerly known as Bard, was caught spreading false information about the outcome of the 2024 Super Bowl.

The discovery underscores the intrinsic limitations of generative artificial intelligence (GenAI) and the risks of over-relying on it.

Reports from a Reddit thread claim that Gemini, powered by Google’s GenAI model of the same name, has been giving false answers to questions about Super Bowl LVIII, responding as though the game had already happened.

Even more worrisome are the fake statistics Gemini offers, such as wholly fictitious player performance breakdowns.

A Reddit user posted an example in which Gemini credited players with nearly impossible feats: Patrick Mahomes of the Kansas City Chiefs was said to have rushed for 286 yards with two touchdowns and an interception, while Brock Purdy supposedly rushed for 253 yards and one touchdown.

These flagrant errors not only undermine the accuracy of the information supplied but also cast doubt on the reliability of GenAI models as a whole. And Gemini is not alone in its foolishness: Microsoft’s Copilot chatbot has also been linked to the false narrative around the 2024 Super Bowl.

Copilot states that the game is over even though there is no reliable information to back up the claim, and it even cites fake final scores.

It’s worth remembering that Copilot, like Gemini, relies on a GenAI model similar to the GPT-4 model that powers OpenAI’s ChatGPT. That said, ChatGPT appears to exercise more restraint and accuracy than its peers, declining to repeat the Super Bowl myth.

Even if these occurrences seem insignificant at first glance, they are pointed reminders of the underlying limitations of modern GenAI technology. GenAI models, however remarkable, possess no true intelligence; they rely on probabilistic methods learned from large datasets. As a result, their outputs are prone to mistakes and outright fabrications.

The fundamental problem with GenAI is its probabilistic nature. These models are very good at producing intelligible text based on patterns discovered in training data, but they are far from perfect. Grammatically correct yet illogical outputs, like the fabricated Super Bowl statistics, highlight the danger of relying solely on AI-generated content.
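To make the point concrete, here is a deliberately tiny sketch of probabilistic text generation. The toy bigram model below is an illustrative assumption, not how Gemini or GPT-4 is actually built; production systems use vast neural networks, but the core mechanism is the same: each next word is sampled according to probabilities learned from data, with no check against real-world truth.

```python
import random
from collections import defaultdict

# Toy corpus: the model only ever "knows" these words.
corpus = (
    "the chiefs won the super bowl "
    "the niners won the super bowl "
    "the chiefs won the game"
).split()

# Count which words follow which: a crude stand-in for "training".
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation word by word from observed frequencies."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        # The next word is chosen by probability, not by truth.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Depending on the random draw, this sketch may print “the chiefs won the super bowl” or “the niners won the super bowl”. Both are equally grammatical and equally plausible to the model, and it has no mechanism for knowing which one actually happened, which is precisely the failure mode Gemini and Copilot exhibited.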

Furthermore, the potential consequences of GenAI misinformation reach far beyond harmless sports trivia. Unchecked GenAI outputs can have deep, far-reaching effects, from endorsing unethical practices like torture to reinforcing harmful stereotypes and conspiracy theories.

Google and Microsoft, like other GenAI vendors, openly admit that their AI applications are imperfect. Yet these disclaimers are often tucked away in fine print that users can easily miss. Customers should therefore proceed with care and skepticism when using GenAI-powered tools.

The Super Bowl misinformation may seem minor compared with more extreme instances of GenAI misconduct, but it is a sobering reminder of the need for caution in the digital era. As we weave AI into ever more facets of our lives, we must remain astute, critical consumers of information if we are to avoid falling victim to its drawbacks.
