Google and the drunken robot dilemma

Enrique Dans

May 20, 2024

IMAGE: A cartoon-style illustration of a comically drunk robot, depicted with a silly, exaggerated expression and slightly off-balance

Google is heading into troubled waters: at its latest developer conference, Google I/O, it announced the incorporation of results from its generative algorithm, Gemini, into its search results, using what it calls “AI Overviews.”

On May 14, many millions of the search engine’s users in the United States began receiving Gemini-generated answers as part of their results pages, and many millions more around the world will soon do so.

What at first appears to be a major innovation, the company’s response to generative AI, comes with a potentially very significant problem: while the answers that Google delivers in the form of links to other pages, where it is simply passing on or distributing information, are protected by the well-known (and controversial) Section 230 of the Communications Decency Act of 1996, the answers it produces with its own generative algorithm, Gemini, are not: they are created by a tool owned by the company itself. No disclaimer can fix this: even if the user who receives the answer were fine with it, the person mentioned in that same answer probably would not be.

The tendency of generative algorithms to routinely “get drunk” — producing vague, ridiculous or outright spurious correlations and generating what has incorrectly been called “hallucinations” — can end up triggering potential liabilities.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)