Generative algorithms and the sleep of reason

Enrique Dans

May 10, 2024

IMAGE: An illustration of a computer personified as daydreaming at a desk, with thought bubbles containing abstract symbols related to a generative algorithm’s process

The growing use of generative algorithms raises the question of who is responsible when they hallucinate (a term we really should stop using), particularly when this leads to legal problems.

Max Schrems, the Austrian lawyer and activist who has already made life difficult for some US companies over data-exchange agreements between Europe and the United States by pointing out that the National Security Agency has consistently failed to guarantee privacy rights, is now suing OpenAI for violating its users’ privacy by producing false information about people, a problem that OpenAI admits it cannot correct.

In short, the question we now face is this: is our privacy violated if someone makes something up about us? Can our privacy be violated by someone who holds no information about us at all? It could obviously be a case of defamation, as when a professor found that ChatGPT claimed he had been accused of sexual harassment, but was that an infringement of his privacy?

The answer requires some thought. ChatGPT, for example, has married me off to the wife of my company’s founder, while Perplexity certainly perplexed me when it said I met my wife through a Twitter message, and that I have three children, when in fact I have just one daughter. For me, these are mistakes of little consequence, ones that I use as…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)