The Generator

The Generator covers the emerging field of generative AI, with generative AI news, critical analysis, real-world tests and experiments, expert interviews, tool reviews, culture, and more

What Are AI Hallucinations, Anyway?

Thomas Smith
Published in The Generator
5 min read · Mar 19, 2025

Illustration by the author via Midjourney

We’ve all heard that AI models hallucinate. But what does that actually mean?

AI hallucinations happen when Large Language Models (LLMs) and other AI systems generate information that isn't accurate but is consistent with patterns in their training data.

For example, imagine that I asked an LLM to come up with a list of 10 barbecue restaurants in Lafayette, California, where I live.

In reality, I could only name three. But since I asked the model for 10, it would very likely imagine at least a few non-existent barbecue restaurants in an attempt to honor the intent of my query.

Crucially, it would likely write compelling, realistic-sounding descriptions for the imagined restaurants. It might say they were located on Mount Diablo Blvd (a real road), or include a realistic-sounding, made-up quote from the local chamber of commerce about a restaurant's service to the community.

Those kinds of imagined pieces of information are hallucinations.

Again — and this is important — they’re often consistent with the patterns in the model’s training data.
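To make that mechanism concrete, here's a toy Python sketch. It is not a real LLM; the restaurant names and the padding logic are invented purely for illustration. The idea is that a system obliged to return 10 items will fill the gap beyond its three real answers with entries that merely *look like* the real ones:

```python
# Toy illustration of hallucination (not a real LLM).
# All restaurant names below are hypothetical.

KNOWN_RESTAURANTS = [  # the ~3 real answers the "model" actually knows
    "Real BBQ Spot A",
    "Real BBQ Spot B",
    "Real BBQ Spot C",
]

def list_restaurants(n: int) -> list[str]:
    """Return n restaurant names, fabricating entries beyond the known ones."""
    results = list(KNOWN_RESTAURANTS[:n])
    i = 1
    while len(results) < n:
        # Pattern-consistent but invented: formatted to resemble real entries.
        results.append(f"Imagined Smokehouse #{i} on Mount Diablo Blvd")
        i += 1
    return results

answer = list_restaurants(10)
# The first 3 entries are "real"; the remaining 7 are hallucinated,
# yet every entry looks equally plausible to a reader.
```

The point of the sketch: nothing in the output distinguishes the fabricated entries from the real ones, which is exactly what makes hallucinations hard to spot.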
