What Are AI Hallucinations, Anyway?
And how to use them to your advantage as a creator
We’ve all heard that AI models hallucinate. But what does that actually mean?
AI hallucinations happen when Large Language Models (LLMs) and other AI systems generate information that isn’t accurate but is consistent with patterns in their training data.
For example, imagine that I asked an LLM to come up with a list of 10 barbecue restaurants in Lafayette, California, where I live.
In reality, I could name only three. But since I asked the model for 10, it would very likely imagine at least a few non-existent barbecue restaurants in an attempt to honor the intent of my query.
Crucially, it would likely write compelling, realistic-sounding descriptions for the imagined restaurants. Maybe it would say they were located on Mount Diablo Blvd (a real road), or include a realistic-sounding, made-up quote from the local chamber of commerce about the restaurant’s service to the community.
Those imagined details are hallucinations.
Again — and this is important — they’re often consistent with the patterns in the model’s training data.
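If you want to try the experiment yourself, here’s a minimal sketch using the OpenAI Python SDK. The model name is an assumption (any chat-capable model should behave similarly), and you can swap in any town whose restaurants you actually know:

```python
# A minimal sketch of the "10 restaurants" experiment described above.
# Assumes the OpenAI Python SDK is installed (`pip install openai`)
# and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute any model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "List 10 barbecue restaurants in Lafayette, California, "
                "with a one-sentence description of each."
            ),
        }
    ],
)

# Cross-check the output against a map or local directory: any entry
# you can't verify is likely a hallucination.
print(response.choices[0].message.content)
```

Asking for more items than you know exist is a simple way to surface this behavior, because the model tends to fill out the list with plausible inventions rather than come up short.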