Gemini: Google’s Gaffe

A terrible blunder for Google amidst the AI race

Krish Kawle
The Catalyst
4 min read · Mar 5, 2024


Introduction:

The advent of OpenAI’s ChatGPT has changed the way the public views AI. Its power and vast capabilities are slowly dawning on millions of Americans as new AI products roll out, causing paradigm shifts across industries. To keep pace with OpenAI, Google launched its own suite of AI products, such as Bard and Gemini.

Challenges:

With the rise of AI come new challenges. One example is recall bias. When a user prompts an AI to generate something, the result can reflect a skewed interpretation of the request. For example, if a user asks an AI model to depict a typical party, the model may show a group of people having a good time, yet every person in the picture may appear to be white, or no women may appear in the scene at all. The model, it seems, has absorbed the distorted idea that a party consists solely of white men enjoying themselves. Correcting such a blatant blunder requires a larger, more varied dataset that mirrors real-world scenarios. Finding that solution, however, can be challenging, and a misguided attempt can lead to disaster.

What Happened:

When Google first rolled out Gemini, users noticed major flaws in the model. The company was already in hot water over the product’s demo video, which seemed to mislead consumers about the product’s actual capabilities. Yet when people typed prompts into the AI, many were shocked by the results. One user asked the model to depict America’s Founding Fathers, and it replied with images of a Native American man, a Black Founding Father, and even an Asian Founding Father. Other users asked for an image of a Viking, and many of the resulting images showed the Vikings as Black. The most striking case came when users asked for an image of the pope, and the AI returned an African pope and a female pope.

What is the reason?

While AI models frequently encounter challenges like recall bias, the problem Google faces stems from something more complex. As Vox explains, there are two schools of thought on dealing with bias. A company could train its model to reflect certain inequalities of the world, such as depicting a typical engineer as a man rather than a woman because most engineers are male. Alternatively, it could train its model to take a more idealistic, progressive outlook; in that case, an image of an engineer might show both a man and a woman.

This tussle between the ideal and the realistic could explain Google’s snafu. Prabhakar Raghavan, a Google senior vice president, detailed the incident in a blog post. The team took steps to ensure the model did not fall into the traps mentioned previously, such as recall bias, and was careful to show adequate diversity in its responses. However, this may have been Google’s undoing. In the name of promoting inclusivity, the model produced images such as a female or a Black pope to avoid answers that fit the typical profile of a white, male pope. Ultimately, whether the company should train Gemini to be a realist or an idealist depends on public reaction, which can be difficult to gauge, especially amid increasing polarization in America.

Fallout:

After Gemini’s release, Google faced a massive uproar and had to pause the model’s image generation. Celebrities like Elon Musk expressed their distaste for Gemini’s performance. Google also had to apologize after Gemini made disparaging remarks about Indian Prime Minister Narendra Modi. Many observers saw Google as falling further behind in the AI race, and the company’s stock price tumbled for days. Sundar Pichai, Google’s CEO, deemed the blunder unacceptable, and some are even calling for his resignation.

Conclusion:

Technology is improving at an ever-accelerating rate. Like it or not, the world is sleepwalking into a new phase as AI continues its integration into daily life. To maximize AI’s benefits, people need to come together and decide which values should remain in place and which values AI should adopt. The time is now: the future, which once seemed distant, is looking a lot closer.
