The Hands-On LLMs Series

The LLMs kit: Build a production-ready real-time financial advisor system using streaming pipelines, RAG, and LLMOps

Lesson 1: LLM architecture system design using the 3-pipeline pattern

Paul Iusztin
Published in Decoding ML
12 min read · Jan 5, 2024


Image by DALL-E

→ the 1st out of 8 lessons of the Hands-On LLMs free course

By finishing the Hands-On LLMs free course, you will learn how to use the 3-pipeline architecture & LLMOps good practices to design, build, and deploy a real-time financial advisor powered by LLMs & vector DBs.

We will primarily focus on the engineering & MLOps aspects. Thus, by the end of this series, you will know how to build & deploy a real ML system, not some isolated code in notebooks (the course doesn't use notebooks at all).

More precisely, these are the 3 components you will learn to build:

  1. a real-time streaming pipeline (deployed on AWS) that listens to financial news, cleans & embeds the documents, and loads them into a vector DB (see the first sketch after this list)
  2. a fine-tuning pipeline (deployed as serverless continuous training) that fine-tunes an LLM on financial data using QLoRA, monitors the experiments with an experiment tracker, and saves the best model to a model registry (see the second sketch after this list)
  3. an inference pipeline that loads the fine-tuned LLM from the model registry and answers financial questions using RAG, retrieving context from the vector DB populated with real-time financial news
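
To make the first component concrete, here is a minimal sketch of the feature pipeline's core steps (clean → embed → load). The library choices (sentence-transformers for embeddings, Qdrant as the vector DB), the collection name, and the sample data are illustrative assumptions, not necessarily the exact stack used in the course:

```python
# Minimal sketch of the streaming/feature pipeline steps: clean -> embed -> load.
# All names and libraries here are illustrative assumptions.
import re

from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
client = QdrantClient(url="http://localhost:6333")

client.recreate_collection(
    collection_name="financial_news",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)


def clean(text: str) -> str:
    # Strip HTML tags and collapse whitespace before embedding.
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()


def ingest(news_items: list[dict]) -> None:
    # Clean and embed each news document, then upsert it into the vector DB.
    points = []
    for idx, item in enumerate(news_items):
        cleaned = clean(item["content"])
        vector = embedder.encode(cleaned).tolist()
        points.append(
            PointStruct(
                id=idx,
                vector=vector,
                payload={"text": cleaned, "source": item.get("source", "")},
            )
        )
    client.upsert(collection_name="financial_news", points=points)


# In the real pipeline this function would be driven by a streaming framework
# consuming a live financial-news feed instead of a static list.
ingest([{"content": "<p>Example headline about markets...</p>", "source": "demo"}])
```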
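
And here is a minimal sketch of the QLoRA setup behind the second component, using the Hugging Face transformers + peft + bitsandbytes stack. The base model and hyperparameters are assumptions for illustration, not the course's exact configuration:

```python
# Minimal sketch of a QLoRA setup: 4-bit quantized base model + LoRA adapters.
# Model name and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base LLM in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters on top of the frozen 4-bit weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training would then run on the financial dataset, log metrics to an
# experiment tracker, and push the best adapter weights to a model registry.
```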

