GenAI
Designing RAGs
A guide to Retrieval-Augmented Generation design choices.
Building Retrieval-Augmented Generation systems, or RAGs, is easy. With tools like LlamaIndex or LangChain, you can get your RAG-based Large Language Model up and running in no time. Sure, some engineering effort is needed to make the system efficient and scalable, but in principle, building the RAG is the easy part. What's much more difficult is designing it well.
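To see just how easy, here is a minimal LlamaIndex sketch. It assumes a recent llama-index release (with imports under `llama_index.core`), an OpenAI API key in the environment, and a placeholder local `data/` folder of documents — adjust to your own setup.

```python
# A minimal RAG with LlamaIndex: load documents, index them, and query.
# Assumes `pip install llama-index` and OPENAI_API_KEY set in the environment;
# "data" is a placeholder directory holding your own documents.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # parse local files into Documents
index = VectorStoreIndex.from_documents(documents)     # chunk, embed, and index them
query_engine = index.as_query_engine()                 # retrieval + generation pipeline

response = query_engine.query("What do these documents say about pricing?")
print(response)
```

Getting this running takes minutes; the design choices hidden behind each of these lines are what the rest of this article is about.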
Having recently gone through the process myself, I discovered how many big and small design choices need to be made for a Retrieval-Augmented Generation system. Each of them can impact the performance, behavior, and cost of your RAG-based LLM, sometimes in non-obvious ways.
Without further ado, let me present this — by no means exhaustive yet hopefully useful — list of RAG design choices. Let it guide your design efforts.
RAG components
Retrieval-Augmented Generation gives a chatbot access to some external data so that it can answer users’ questions based on this data rather than general knowledge or its own dreamed-up hallucinations.
As such, RAG systems can become complex: we need to get the data, parse it into a chatbot-friendly format, make it available and searchable to…