RAGs Do Not Reduce Hallucinations in LLMs — A Math Deep Dive

Freedom Preetham · Published in Autonomous Agents · Feb 16, 2024

Too much marketing Kool-Aid has been poured into the claim that RAG avoids or reduces hallucinations in LLMs. This is not true at all.

Retrieval-Augmented Generation (RAG) models represent a sophisticated intersection of information retrieval and generative machine learning techniques, designed to enhance the text generation process by leveraging a vast repository of information. Despite their advanced architecture, these models inherently struggle to eliminate hallucinations — misleading or factually incorrect generated content. The root causes of this limitation can be traced back to the intricate mathematical formulations and assumptions embedded within the RAG framework.

If you could fix the hallucinations of a pre-trained LLM through contextually relevant prompts, it would have been a solved problem by now. Contextual relevance (what RAG provides) improves domain specificity, not hallucination.

The only place hallucinations can be fixed is within the LLM.

Retrieval is based on semantic similarity and on maximizing log-likelihoods; it is not a mathematical framework for the formal verification of facts. The only purpose of RAG is to provide additional context, proprietary or nuanced, that adds "relevance" beyond the general information LLMs are trained on.

Read more about why hallucinations occur in LLMs in my previous blog: Mathematically Evaluating Hallucinations in LLMs

How is RAG Used?

RAG merges the capabilities of pre-trained language models with information retrieval to enhance text generation. It is designed to leverage a vast corpus of text data, enabling it to produce responses that are not only relevant but also rich in detail and contextually accurate. Here is an overview of how RAG operates (a minimal sketch in code follows the list):

  • Preprocessing: Indexes a large dataset as a knowledge base.
  • Query Formation: Converts queries into semantic vectors for retrieval.
  • Document Retrieval: Finds relevant documents using nearest neighbor search algorithms.
  • Context Integration: Augments query with retrieved documents for enriched context.
  • Text Generation: Generates informed responses with a pre-trained language model.
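
Here is a minimal, hedged sketch of that pipeline. The embed and generate functions below are hypothetical placeholders (a real system would call an embedding model and a hosted LLM); retrieval is brute-force cosine similarity over a toy corpus.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical embedding: hash tokens into a fixed-size bag-of-words vector.
        vec = np.zeros(256)
        for token in text.lower().split():
            vec[hash(token) % 256] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        # Nearest-neighbor search by cosine similarity (vectors are unit-normalized).
        q = embed(query)
        scores = [float(q @ embed(doc)) for doc in corpus]
        top = np.argsort(scores)[::-1][:k]
        return [corpus[i] for i in top]

    def generate(prompt: str) -> str:
        # Placeholder for a call to a pre-trained LLM.
        return f"<LLM response conditioned on: {prompt[:80]}...>"

    corpus = [
        "RAG augments prompts with retrieved documents.",
        "Cosine similarity measures semantic closeness, not factual accuracy.",
        "LLMs generate tokens by maximizing conditional likelihood.",
    ]
    query = "Does retrieval guarantee factual answers?"
    context = "\n".join(retrieve(query, corpus))
    print(generate(f"Context:\n{context}\n\nQuestion: {query}"))

Note that nothing in this pipeline checks whether the retrieved context or the generated answer is factually correct; that gap is exactly what the rest of this article formalizes.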

Adding contextual relevance through RAG does not reduce hallucinations in LLMs (you cannot influence model drift through it). It only increases your semantic relevance scores, and that is not the same as reducing hallucination in the system.

Contextual relevance has no bearing on the LLM's reasoning ability or on the constraints of the probabilistic selection strategy at the logit layer. Hallucination is a problem of reasoning, not relevance. Any amount of relevant text fed through RAG leaves the system's original perplexity and entropy intact, so it will hallucinate independently of that text.

People also use RAG data for fine-tuning LLMs to induce data drift. Even here, what you are increasing is the vocabulary, knowledge, and context of the system to include your proprietary data. This is the core problem with claiming that RAG reduces hallucination. It does not. It increases the contextual relevance of the domain.

Contextual relevance is only a small part of why LLMs hallucinate. The largest part is their reasoning capabilities.

Mathematical Formulations in RAG Models

RAG models integrate two critical phases: retrieval of relevant information followed by generation of text based on this information. Each phase is governed by complex mathematical constructs that, while optimizing for semantic relevance and coherence, do not inherently safeguard against the generation of hallucinations.

Detailed Retrieval Phase

The retrieval phase in a RAG model is not merely about fetching relevant documents; it is about understanding the query in a high-dimensional space and finding the nearest neighbors in terms of semantic similarity based on cosine similarity or other related metrics. Here is a cosine similarity measure:
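
For two embedding vectors u and v:

$$\cos(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\lVert \mathbf{u} \rVert \, \lVert \mathbf{v} \rVert}$$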

This process can be mathematically represented by embedding the query q and the documents {D_i} into vectors in a latent space:
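
Using E to denote the embedding model (notation assumed here for concreteness):

$$\mathbf{q} = E(q), \qquad \mathbf{d}_i = E(D_i), \qquad \mathbf{q}, \mathbf{d}_i \in \mathbb{R}^n$$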

The retrieval function R can then be defined as:
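
One common form, with D* denoting the top-scoring document or documents, is

$$D^{*} = R(q) = \arg\max_{D_i} \cos(\mathbf{q}, \mathbf{d}_i),$$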

where cos denotes the cosine similarity between the query and document vectors. This process relies on dimensionality reduction techniques and similarity metrics that, while effective, introduce an initial layer of approximation and a potential source of error.

Generative Phase Modeling

Upon retrieving relevant documents, the generative phase employs a conditional language model to produce text. This phase can be mathematically represented as a sequence of conditional probabilities, optimized to generate a sequence Y = (y_1, …, y_m) given the query q and the retrieved documents D*:
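
In the standard autoregressive factorization:

$$P(Y \mid q, D^{*}; \Theta) = \prod_{i=1}^{m} P(y_i \mid y_{<i}, q, D^{*}; \Theta),$$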

where y_{<i} represents all tokens preceding y_i, and Θ the parameters of the model. The objective is to maximize the log-likelihood of the generated sequence:
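
$$\max_{\Theta} \sum_{i=1}^{m} \log P(y_i \mid y_{<i}, q, D^{*}; \Theta).$$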

However, this optimization does not inherently account for the veracity of each y_i against external truths (that kind of formal verification is a separate process from RAG), laying the groundwork for hallucinations.

Mathematical Insights into Hallucination Phenomena

Hallucinations in RAG models stem from multiple deep-rooted mathematical challenges inherent in both the retrieval and generative processes.

Retrieval Imperfections and Semantic Ambiguity

The initial challenge arises from the retrieval phase’s reliance on semantic similarity, which is quantified by inner product spaces or cosine similarities. This metric, while capturing semantic closeness, does not differentiate between factually accurate and inaccurate information. The retrieved documents D∗ might thus contain misleading information, mathematically represented as a misalignment in the vector space:
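
One way to express this gap (with 𝒟 denoting the full document collection) is

$$\Delta_{\text{accuracy}} = \text{Accuracy}(D^{*}) - \max_{d \in \mathcal{D}} \text{Accuracy}(d),$$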

where Accuracy(d) quantifies the factual correctness of document d. A negative Δ_accuracy signifies the retrieval of less accurate documents over more accurate ones.

Limitations in Conditional Probability Modeling

The generation phase is predicated on conditional probability distributions that are inherently complex to model accurately. The optimization process, typically involving gradient descent, seeks to maximize the likelihood of generating a coherent response given the retrieved documents. However, this optimization does not explicitly account for the factual accuracy of each token generated, leading to:
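
a selection rule of roughly the form

$$y_i^{*} = \arg\max_{y} P(y \mid y_{<i}, q, D^{*}; \Theta),$$

which contains no term that rewards factual correctness.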

The absence of a direct mechanism to validate each piece of generated information against a factual benchmark or external knowledge base means that the model can “hallucinate” information that fits well within the generated context but is factually incorrect.

Generative Phase and the Complexity of Factual Consistency

The generative phase compounds the issue by optimizing for sequence likelihood without direct factual validation. The absence of a mathematical constraint to enforce factual accuracy allows for the propagation and generation of inaccuracies:
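
The missing constraint would take roughly the form

$$P(y_i \in F \mid y_{<i}, q, D^{*}; \Theta) \geq \tau \quad \text{for all } i,$$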

where F denotes a set of factually correct tokens and τ a threshold probability that ensures factual consistency. The lack of such constraints in conventional RAG models leads to the generation of hallucinated content.

Integrating a factual verification process necessitates a substantial enhancement of the model's architecture, incorporating:
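
an augmented objective along the lines of

$$\max_{\Theta} \sum_{i=1}^{m} \log P(y_i \mid y_{<i}, q, D^{*}; \Theta) + \lambda \sum_{j=1}^{m} I_{\text{fact}}(y_j, F),$$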

where I_fact is an indicator function assessing the factual accuracy of y_j against a set of facts F, and λ balances the contribution of semantic coherence and factual accuracy. This is computationally very expensive.

Overfitting and Data Bias

The mathematical optimization techniques employed in training RAG models, such as gradient descent, can exacerbate the model’s tendency to overfit to the training data or inherit its biases. This is particularly problematic when the training data contains inaccuracies, misleading the model to learn and replicate these inaccuracies during generation:
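
In sketch form, the standard training objective is

$$\Theta^{*} = \arg\min_{\Theta} L(\Theta; D) \quad \text{(with no term involving } F\text{)},$$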

where L represents the loss function, D the training dataset, and F a factual accuracy component not typically included in standard formulations.

Mathematical Complexity in Integrating Factual Consistency

Integrating factual consistency into the RAG framework introduces significant mathematical complexity. It requires not only an accurate representation of factual information but also a computationally feasible method to compare generated content against this factual representation continuously. This integration could hypothetically be modeled as an optimization problem with constraints on factual accuracy, significantly increasing the computational burden and complexity of the model.

Proposal: Theoretical Advances

Eliminating hallucinations through RAG is impossible. Instead, I propose addressing hallucinations through a multifaceted approach, integrating advancements in both the retrieval and generative phases, underpinned by rigorous mathematical formulations.

Note that this is computationally expensive and almost intractable, since the size of the corpus and the number of facts that need to be checked are both varied and abstract.

Enhanced Document Retrieval with Factual Verification

Improving document retrieval requires a balance between semantic relevance and factual accuracy. A potential approach involves modifying the retrieval function to include a factual verification component:
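
For example,

$$R'(q) = \arg\max_{d} \Big[ \alpha \cdot \cos\big(E(q), E(d)\big) + \beta \cdot \text{verif}(q, d, F) \Big],$$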

where verif(q,d,F) quantifies the alignment of document d with a set of verified facts F, and α,β are parameters that balance semantic similarity with factual accuracy.

Probabilistic Generative Modeling with Explicit Fact Checking

Incorporating explicit fact-checking into the generative model involves augmenting the conditional probability with a term that penalizes factual discrepancies directly:
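
for instance,

$$\log \tilde{P}(Y \mid q, D^{*}; \Theta) = \sum_{i=1}^{m} \Big[ \log P(y_i \mid y_{<i}, q, D^{*}; \Theta) - \delta \cdot I(y_i \notin F_Y) \Big],$$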

where δ is a penalty parameter for factual inaccuracies, and I is an indicator function identifying tokens y_i that contradict the set of verified facts F_Y relevant to the generated content Y.

Integrating External Knowledge Bases

A robust approach to mitigating hallucinations involves the integration of external knowledge bases directly into the model’s architecture, providing a real-time benchmark for factual accuracy:
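
for example, by adding a knowledge-grounding term to the training loss,

$$L_{\text{total}}(\Theta) = L(\Theta; D) + \lambda \cdot \Psi(Y, K),$$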

where Ψ is a function measuring the discrepancy between the generated content and the external knowledge base K, and λ modulates the influence of factual accuracy on the overall loss function.

Final Thoughts

Through the lens of advanced mathematical formulations and theoretical insights, it becomes evident that the challenge of hallucinations in RAG models is both complex and multifaceted. Addressing this issue necessitates a concerted effort to refine the mathematical frameworks governing both retrieval and generation processes, incorporating sophisticated mechanisms for factual verification and the integration of external knowledge.

Since RAG is being subsumed into LLMs, the best place to address hallucinations is within the LLM itself.
