Introducing GraphRAG with LangChain and Neo4j

Part 2: Implementing and evaluating a GraphRAG application

Valentina Alto
Published in Microsoft Azure
20 min read · May 12, 2024

In Part 1 of this series, we introduced GraphRAG as a new trending pattern in the context of LLM-powered applications. While traditional Retrieval Augmented Generation (RAG) has demonstrated great capabilities when we want an LLM to answer from a custom knowledge base, it still exhibits some limitations when it comes to retrieving relevant context.

GraphRAG is an approach that stores the knowledge base in a graph database, making the retrieval phase more efficient and hence producing more relevant context from which the LLM can generate the best answer.
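To make the intuition concrete, here is a toy, pure-Python sketch (not the article's actual code) of why graph-shaped storage helps: once a query matches an entity, a traversal can pull in connected facts that a flat chunk lookup would miss. The entities and relations below are made-up examples.

```python
# A tiny knowledge graph: entity -> list of (relation, target) triples.
# Entities and relations here are illustrative, not from the article.
knowledge_graph = {
    "Neo4j": [("IS_A", "graph database"), ("OFFERS", "AuraDB")],
    "AuraDB": [("IS_A", "managed cloud service")],
    "LangChain": [("INTEGRATES_WITH", "Neo4j")],
}

def retrieve_context(entity: str, hops: int = 2) -> list[str]:
    """Collect facts reachable from `entity` within `hops` traversal steps."""
    facts, frontier, seen = [], [entity], {entity}
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for rel, target in knowledge_graph.get(node, []):
                facts.append(f"{node} {rel} {target}")
                if target in knowledge_graph and target not in seen:
                    seen.add(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return facts

context = retrieve_context("LangChain")
# Within two hops we reach not just the direct fact about LangChain,
# but also the facts about Neo4j that a keyword match on "LangChain"
# alone would never have surfaced.
```

In a real GraphRAG pipeline, this traversal is expressed as a Cypher query against the graph database rather than a Python loop, but the principle is the same: relevance flows along relationships, not just textual similarity.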

We saw how LangChain can help in creating a relevant graph database with pre-built components, as well as enabling hybrid search (vector + keyword) while retrieving relevant context. By leveraging both LangChain’s components and Neo4j’s AuraDB as the graph database, we explored how to set up the environment to start experimenting with different approaches.
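As a refresher on what "hybrid" means here, the sketch below merges a vector ranking and a keyword ranking with reciprocal rank fusion (RRF), a common way to combine the two result lists. This is an illustration of the idea under assumed made-up documents and rankings, not LangChain's or Neo4j's actual implementation.

```python
# Hybrid retrieval sketch: fuse a vector-similarity ranking and a
# keyword (full-text) ranking with reciprocal rank fusion (RRF).
# Document names and rankings below are invented for illustration.

def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Score each doc as sum over lists of 1 / (k + rank), then sort."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_graphrag", "doc_langchain", "doc_neo4j"]    # by embedding similarity
keyword_hits = ["doc_neo4j", "doc_graphrag", "doc_auradb"]      # by full-text match

merged = rrf_merge([vector_hits, keyword_hits])
# Documents appearing high in both lists (here doc_graphrag and doc_neo4j)
# float to the top, which is exactly the benefit of hybrid search.
```

In practice, LangChain's Neo4j integration exposes this as a search option on the vector store, so you rarely implement the fusion yourself; the sketch just shows why combining the two signals beats either one alone.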

In this part, we are going to see how a GraphRAG approach can enhance your LLM-powered applications. We will again leverage LangChain and Neo4j, evaluating results with LLM-powered metrics that we will introduce later on.

Data&AI Specialist at @Microsoft | MSc in Data Science | AI, Machine Learning and Running enthusiast