Build Industry-Specific LLMs Using Retrieval Augmented Generation
Organizations are racing to adopt Large Language Models. Let’s dive into how you can build industry-specific LLMs through Retrieval Augmented Generation (RAG)
Companies stand to gain significant productivity improvements from LLMs like ChatGPT. But ask ChatGPT “what is the current inflation in the U.S.” and it replies:
I apologize for the confusion, but as an AI language model, I don’t have real-time data or browsing capabilities. My responses are based on information available up until September 2021. Therefore, I cannot provide you with the current inflation rate in the U.S.
Which is a problem: ChatGPT is missing relevant, timely context that can be essential for making informed decisions.
How Microsoft Is Solving This
In the Microsoft Build session Vector Search Isn’t Enough, Microsoft lays out a product that combines less context-aware LLMs with vector search to create a more engaging experience.
The talk starts from the opposite direction of this piece, from the point of view of Elasticsearch (or vector search): search by itself is limited, and adding a layer of LLMs can vastly improve…
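The combination both perspectives arrive at is the same: retrieve documents relevant to the user’s question, then feed them to the LLM as context. A minimal sketch of that retrieval-and-prompt-assembly step is below. It uses a toy bag-of-words embedding and made-up placeholder documents purely for illustration; a real pipeline would use a learned embedding model, a vector database, and an actual LLM call, none of which are shown here.

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones to a
# query by cosine similarity, and prepend them to the prompt as context.
# The embedding here is a toy term-frequency vector, NOT a real model.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, documents: list[str], k: int = 1) -> str:
    """Prepend retrieved context so the LLM can answer from fresh data."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"


# Placeholder corpus (illustrative only, not real figures).
docs = [
    "Latest report: U.S. inflation data for the current month.",
    "The Federal Reserve raised rates by 25 basis points.",
]
print(build_prompt("what is the current inflation in the U.S.", docs))
```

The point of the sketch is the shape of the pipeline, not the scoring: swap in real embeddings and a vector index, and the `build_prompt` step is exactly what gives the LLM the timely context it was missing above.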