Don’t freak out when your machine hallucinates…

Enrique Dans
3 min read · Oct 3, 2023

IMAGE: Someone’s hand smoking a joint (Ahmed Zayan, Unsplash)

I enjoyed reading a study by Tidio, a Polish company, on how algorithms hallucinate, entitled “When machines dream: a dive in AI hallucinations” (pdf), which defines and categorizes the phenomenon and adds some background from users.

The problem of hallucinating algorithms has been well known for a long time, and it is due solely to problems arising from statistics, similar, on a completely different scale, to the multicollinearity that arises when working with variables that overlap heavily. When the number of variables grows to billions, the possibility of spurious correlations of all kinds grows with it, and algorithms will return answers ranging from the completely false, or even the defamatory, to things that seem straight out of a bad LSD trip.
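To see the statistical effect being pointed to here, consider a minimal sketch in Python (my own illustration, not anything from the Tidio report, and not how a language model is actually built): two almost identical predictors make an ordinary least-squares fit unstable, the textbook symptom of multicollinearity.

```python
# Illustrative only: multicollinearity makes regression coefficients unstable.
# All variable names and values are invented for this toy example.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-3, size=n)  # almost a copy of x1: heavy overlap
X = np.column_stack([x1, x2])

for trial in range(3):
    # The true signal depends only on x1; each trial adds fresh small noise.
    y = 3 * x1 + rng.normal(scale=0.1, size=n)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"trial {trial}: coefficients = {coef.round(2)}")
```

Run it and the two coefficients swing wildly from trial to trial, even though their sum stays near 3: the data simply cannot tell the overlapping variables apart. Scaled up to billions of overlapping variables, this kind of instability is the statistical noise from which, by the argument above, hallucinations emerge.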

I find it disturbing that 22% of people believe these issues are caused by governments or other entities with an agenda. Apart from revealing an inclination toward conspiracy theories, it shows that nearly a quarter of us have no grasp of basic statistics.

The only way to deal with hallucinations in generative algorithms, apart from improving their performance, is through a better understanding of how these systems work. When an algorithm hallucinates, it ceases to…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)