Why do Large Language Models Hallucinate?

And how can we prevent it?

Luke Skyward
4 min read · May 1, 2023
AI hallucinations, generated using Midjourney. Prompt by the author.

Current LLMs are great! However, they can still generate bizarre or factually incorrect information, a phenomenon known as hallucination.

An example of a hallucination. ChatGPT describes the content of an article that does not exist. Source: Wikipedia

In this article, we’ll dive into the phenomenon of hallucinations in AI language models, explore their causes, and discuss practical strategies to minimize them, ensuring that we get the most out of AI-generated content.

Understanding Hallucinations: What are they?

Hallucinations are outputs of large language models (LLMs) that deviate from facts or contextual logic. They can range from minor inconsistencies to completely fabricated or contradictory statements. In order to effectively address this issue, it’s important to recognize the different types of hallucinations that can occur:

  1. Sentence contradiction: This occurs when an LLM generates a sentence that contradicts a previous one. For example, “The sky is blue today” followed by “The sky is green today” (a pair the sketch after this list flags automatically).
  2. Prompt contradiction: This happens when the generated sentence contradicts the prompt used to generate it. For instance, if you ask an…
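One way to make the “contradiction” framing concrete is to check pairs of sentences with an off-the-shelf natural language inference (NLI) model. The sketch below is not part of the article; it assumes the Hugging Face transformers library and the roberta-large-mnli checkpoint, and the 0.8 confidence threshold is an arbitrary illustrative choice.

```python
# A minimal sketch: flag contradictions between two pieces of text with an NLI model.
# Assumes `pip install transformers torch`; roberta-large-mnli classifies a
# (premise, hypothesis) pair as CONTRADICTION / NEUTRAL / ENTAILMENT.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def contradicts(premise: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """Return True if the NLI model is confident the two texts contradict each other."""
    result = nli([{"text": premise, "text_pair": hypothesis}])[0]
    return result["label"] == "CONTRADICTION" and result["score"] >= threshold

# Sentence contradiction: compare consecutive generated sentences.
print(contradicts("The sky is blue today.", "The sky is green today."))

# Prompt contradiction: treat the facts stated in the prompt as the premise
# and the model's answer as the hypothesis, then run the same check.
```

Running this pairwise check over adjacent sentences in a generated answer, or between the prompt and the answer, is one simple way to surface the first two hallucination types automatically.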

