Knowledge Graphs, Memory & Semantic Structure in RAG: Takeaways from LangChain’s Memory Hackathon

Chia Jeng Yang
Published in WhyHow.AI
Apr 7, 2024

On Saturday, a member of WhyHow.AI attended the Memory Hackathon, organized by New Computer, Anthropic, MongoDB and LangChain.

Besides having an amazing time building and extending WhyHow.AI's functionality by integrating some of our tools with LangSmith and LangServe, it was great to see the speed at which knowledge graphs and other graph structures are gaining mainstream adoption. It was exciting to see infrastructure providers like LangChain really push the envelope on what memory is and how it can be represented.

Just yesterday, Garry Tan talked about how RAG will need structured knowledge representation on top of vector databases.

Here are some takeaways about Knowledge Graphs that we came across during the event:

All teams were focused on how to represent meaning in their product, i.e. what is the best way to represent a wide range of things, concepts, and relationships.

Roughly 30% of the teams we came across were explicitly looking to implement knowledge graphs in their architecture, and those teams performed well in the judging process.

Most teams were using memory as a way to extend chat histories, but some teams were thinking about nuanced structures of memory, specifically around personalization.

It was interesting to see memory defined in granular ways that reflected how people were thinking about it. We will distinguish two types of memory we saw used: Chat History Memory and Contextual Memory.

Chat History Memory was straightforward: it simply extends the chat window so that a conversation can be continued within a single session.
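A minimal sketch of what this looks like in plain Python is below. The class name and the placeholder `call_llm` function are our own illustrative assumptions, not any team's or library's actual API; the point is simply that the whole message list gets replayed into every new prompt.

```python
# Sketch of Chat History Memory: the full message list is replayed into every
# new prompt so the model can continue the session. `call_llm` is a placeholder
# for whatever chat-completion client you actually use.
from typing import Dict, List


class ChatHistoryMemory:
    def __init__(self) -> None:
        self.messages: List[Dict[str, str]] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def as_prompt(self) -> List[Dict[str, str]]:
        # The entire history is injected verbatim; nothing is selected or ranked.
        return list(self.messages)


def call_llm(messages: List[Dict[str, str]]) -> str:
    raise NotImplementedError("plug in your chat model client here")


memory = ChatHistoryMemory()
memory.add("user", "What should I bring to the welcome potluck?")
# reply = call_llm(memory.as_prompt())
# memory.add("assistant", reply)
```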

Contextual Memory proved to be much harder: the question is how to inject specific context reliably at some future point. An example of this is from New.Computer in the image below.

We are making some assumptions about the nature of the previous chat history in this public example from New.Computer's website. In the image above, the suggestion to bring scallion flatbread is not something the user asked 'to be reminded of later in the conversation'; it is specific topical context (scallion flatbread) carried over from a completely separate, unrelated conversation and injected into this conversation about a welcome potluck.

This was a much trickier problem: topics have to be structured and stored in a granular, semantically meaningful way so that they can be injected into the right context at the right point in time.
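One way to picture the difference is a store that is keyed by topic rather than by session, so a fact captured in one conversation can resurface in another when the topic comes up again. The sketch below is purely illustrative and uses deliberately naive keyword matching; it is our own assumption of the pattern, not how New.Computer or any hackathon team implemented it, and real systems would use embeddings or a graph lookup instead.

```python
# Sketch of Contextual Memory: facts are stored against topics, not sessions,
# so a detail from one conversation (e.g. "scallion flatbread") can be injected
# into a later, unrelated conversation when the topic recurs.
from collections import defaultdict
from typing import Dict, List, Set


class ContextualMemory:
    def __init__(self) -> None:
        self.facts_by_topic: Dict[str, List[str]] = defaultdict(list)

    def remember(self, topic: str, fact: str) -> None:
        self.facts_by_topic[topic.lower()].append(fact)

    def relevant_facts(self, new_message: str) -> List[str]:
        # Naive keyword overlap stands in for semantic matching.
        words: Set[str] = set(new_message.lower().split())
        hits: List[str] = []
        for topic, facts in self.facts_by_topic.items():
            if set(topic.split()) & words:
                hits.extend(facts)
        return hits


memory = ContextualMemory()
memory.remember("potluck", "User plans to bring scallion flatbread to the welcome potluck.")
print(memory.relevant_facts("Any ideas for the potluck next week?"))
```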

Knowledge graphs seemed like a good way to do this, and people were clearly trying to figure out how to adopt them. "I really wanted to know how to build a knowledge graph, but I did not know where to start" was a sentiment we heard more than once.
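For anyone wondering where to start, the core idea is small: a set of (subject, relation, object) triples plus a way to walk them. The sketch below is a generic, plain-Python illustration (not WhyHow.AI's tooling or any specific library's API) of how facts around an entity can be pulled out as structured context for a RAG prompt.

```python
# Generic knowledge-graph sketch: facts are (subject, relation, object) triples,
# and retrieval walks outward from an entity to collect structured context that
# can be serialized into a prompt.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Triple = Tuple[str, str, str]


class KnowledgeGraph:
    def __init__(self) -> None:
        self.edges: Dict[str, List[Tuple[str, str]]] = defaultdict(list)

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.edges[subject].append((relation, obj))

    def neighborhood(self, entity: str, depth: int = 1) -> List[Triple]:
        # Breadth-first walk out to `depth` hops, collecting triples as context.
        triples: List[Triple] = []
        frontier: Set[str] = {entity}
        for _ in range(depth):
            next_frontier: Set[str] = set()
            for node in frontier:
                for relation, obj in self.edges.get(node, []):
                    triples.append((node, relation, obj))
                    next_frontier.add(obj)
            frontier = next_frontier
        return triples


kg = KnowledgeGraph()
kg.add("Alice", "plans_to_bring", "scallion flatbread")
kg.add("scallion flatbread", "is_for", "welcome potluck")

# Serialize the 2-hop neighborhood of "Alice" as context for a RAG prompt.
context = "\n".join(f"{s} {r} {o}" for s, r, o in kg.neighborhood("Alice", depth=2))
print(context)
```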

WhyHow.AI is building tools to help developers bring more determinism and control to their RAG pipelines using graph structures. If you’re thinking about, in the process of, or have already incorporated knowledge graphs in RAG for accuracy, memory and determinism, we’d love to chat at team@whyhow.ai, or follow our newsletter at WhyHow.AI. Join our discussions about rules, determinism and knowledge graphs in RAG on our newly-created Discord.
