[Research Paper Summary] RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation

Himanshu Bamoria
Athina AI
Oct 7, 2024

Original Paper: https://arxiv.org/abs/2403.05313

Abstract:

We find that iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability in long-horizon generation tasks, while greatly mitigating hallucination.

Specifically, after an initial zero-shot chain of thought (CoT) is produced, the proposed method, retrieval-augmented thoughts (RAT), revises each thought step one at a time using information retrieved with the task query and the current and past thought steps.

Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA-7b substantially improves their performance on all the long-horizon generation tasks evaluated, relatively increasing average rating scores by 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning.

The demo page can be found at this https URL

Summary Notes

Figure: Pipeline of Retrieval Augmented Thoughts (RAT). Given a task prompt (denoted as I in the figure), RAT starts from initial step-by-step thoughts (T1, T2, ..., Tn) produced by an LLM in zero-shot mode (“let’s think step by step”). Some thought steps (such as T1 in the figure) may be flawed due to hallucination. RAT iteratively revises each thought step (T★1, T★2, ..., T★i−1, Ti) using RAG from an external knowledge base (denoted as Library).

Introduction

Driven by rapid advances in AI, LLMs have become powerful tools for natural language processing.

Factual errors, known as “hallucinations”, often hinder the use of LLMs in long-horizon generation tasks such as code generation, mathematical reasoning, and task planning.

However, Retrieval-Augmented Thoughts (RAT), a method for iteratively reformulating and revising a model's intermediate reasoning, substantially improves LLMs' ability to generate and reason.

Key Methodologies

The RAT method combines two established techniques: retrieval-augmented generation (RAG) and chain-of-thought (CoT) prompting.

The LLM first produces an initial zero-shot CoT, which is then revised step by step using information retrieved from external sources.

This methodical approach grounds each reasoning step in retrieved evidence, much as a human problem-solver revises a conclusion when new information comes within reach.

  1. Initial Thought Generation: The LLM generates a sequence of intermediate reasoning steps, a zero-shot CoT, in response to the task prompt. All later revisions build on this draft.
  2. Information Retrieval and Revision: To revise each thought step, relevant information is retrieved from an external knowledge base using a query built from the task prompt and the earlier thought steps, so each update is appropriate in context.
  3. Progressive Refinement: In contrast to approaches that retrieve for and revise the entire CoT at once, RAT proceeds step by step. This prevents errors from propagating: each thought is considered and polished before later thoughts build on it.
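The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `llm` and `retrieve` are hypothetical placeholders standing in for a real model call and a real retriever over the external knowledge base.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; stubbed so the sketch runs."""
    return f"revised[{prompt[:40]}...]"

def retrieve(query: str, k: int = 3) -> list[str]:
    """Placeholder retriever over an external knowledge base (the 'Library')."""
    return [f"document relevant to: {query[:40]}"]

def rat(task_prompt: str, draft_thoughts: list[str]) -> list[str]:
    """Revise a zero-shot CoT one step at a time using retrieved context."""
    revised: list[str] = []
    for thought in draft_thoughts:
        # The retrieval query combines the task prompt, the already-revised
        # thoughts, and the current draft thought, keeping retrieval
        # context-aware (step 2 above).
        query = " ".join([task_prompt, *revised, thought])
        context = "\n".join(retrieve(query))
        # Only the current step is revised; earlier revised steps are kept
        # fixed, so refinement is progressive (step 3 above).
        prompt = (
            f"Task: {task_prompt}\n"
            f"Retrieved context:\n{context}\n"
            f"Thoughts so far: {revised}\n"
            f"Revise this step: {thought}"
        )
        revised.append(llm(prompt))
    return revised
```

In a real system, `draft_thoughts` would come from a zero-shot CoT prompt (“let’s think step by step”), and `retrieve` would query a vector store or search index rather than return a stub.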

Main Findings and Results

RAT's implications extend beyond simple performance metrics; it offers a fundamentally new approach to enhancing LLMs' reasoning ability.

RAT is relevant to a variety of real-world problems. By dynamically incorporating external knowledge, it greatly reduces hallucinations and improves factual accuracy, with benefits in several domains:

  • Software Development: More reliable generated code reduces time spent on unproductive debugging, speeding up and simplifying the development cycle.
  • Educational Tools: Stronger mathematical reasoning supports intelligent tutoring systems that guide students through problems step by step.
  • Automated Planning: In robotics and automated systems, RAT can improve task scheduling, enabling more precise multi-step plans that robots can execute dependably.

Conclusion

Retrieval-Augmented Thoughts (RAT) marks a major step forward in machine intelligence by addressing key challenges of long-horizon reasoning tasks.

By repeatedly injecting contextually relevant information into the reasoning process, RAT enhances the precision and efficiency of LLMs and broadens their scope of application across fields.

RAT demonstrates that retrieval methods can be combined productively with advanced language models, opening the door to new and significantly more powerful artificial intelligence systems.

Feel free to check out more blogs, research paper summaries and resources on AI by visiting our website.


Co-founder, Athina.AI - Enabling AI teams to build production-grade AI apps 10X faster. https://hub.athina.ai/