The Hands-On LLMs Series
Prepare your RAG LangChain application for production
Lesson 8: LLMOps. Prompt monitoring. Serverless infrastructure. RESTful API. Gradio.
→ the 8th out of 8 lessons of the Hands-On LLMs free course
By finishing the Hands-On LLMs free course, you will learn how to use the 3-pipeline architecture & LLMOps best practices to design, build, and deploy a real-time financial advisor powered by LLMs & vector DBs.
We will primarily focus on the engineering & MLOps aspects. Thus, by the end of this series, you will know how to build & deploy a real ML system, not isolated code in notebooks (we haven't used any notebooks at all).
More precisely, these are the 3 components you will learn to build:
- a real-time streaming pipeline (deployed on AWS) that listens to financial news, cleans & embeds the documents, and loads them into a vector DB
- a fine-tuning pipeline (deployed as serverless continuous training) that fine-tunes an LLM on financial data using QLoRA, monitors the experiments using an experiment tracker, and saves the best model to a model registry
- an inference pipeline built in LangChain (deployed as a serverless RESTful API) that…