Can you sue an algorithm for defamation?

Enrique Dans
Apr 8, 2023


IMAGE: Two chat bubbles between a person and a generative assistant (credit: Alexandra Koch, Pixabay)

Amid the frenzy of testing generative assistants built on large language models (LLMs), such as ChatGPT, with every question imaginable, some users have received unexpected and unwanted answers: a lawyer in the United States asked ChatGPT for a list of legal academics accused of sexual harassment. In response, the algorithm generated a list that wrongly included George Washington University law professor Jonathan Turley, and even invented details, citing a Washington Post article that never existed.

The case is not unique; it is yet another example of a large language model “hallucinating”. While some people attribute intelligence or even consciousness to these algorithms, we should remember that they are in fact complex statistical models that establish connections that are often incorrect, but present them in the form of authoritative sentences. In a similar case, an Australian mayor found that ChatGPT falsely claimed he had served time in jail for corruption, when the opposite was true: he had been a courageous whistleblower in a major corruption case.

I know from my own experience that these kinds of errors are to be expected: ChatGPT reports that I have been married to five women, none of whom is actually my wife; it has even provided convoluted stories about multiple children, all phrased with the utmost conviction.

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)