What “hallucinations” say about the parallels between generative algorithms and our brains

Enrique Dans
Nov 18, 2023

IMAGE: The page on which the Cambridge Dictionary announces “hallucinate” as its Word of the Year 2023

The Cambridge Dictionary has chosen hallucinate as its word of the year, highlighting the impact of generative algorithms on our lives and giving the word a new meaning: what these systems do when they make up answers to questions about which they lack sufficient information.

ChatGPT was tried out by at least one million users in the first five days after its launch, reaching a hundred million within two months, so the Cambridge Dictionary’s choice should come as little surprise. Huge numbers of people all over the world now ask ChatGPT, Claude, Perplexity and other algorithms almost anything, sometimes very reasonably and sometimes mistakenly, for example believing they can provide diagnoses for mental and physical ailments, or simply offer a sympathetic ear, only to realize, when these assistants start hallucinating, that they have no idea what they are being asked.

What I find interesting in all this is the parallel (not, obviously, an identity) between algorithms and the human brain when they learn from data.

Our brains function on the basis of statistics: from correlations of various kinds they learn to explain phenomena or make predictions. And as with algorithms, our brains sometimes hallucinate, not…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)