Understanding LLMOps: Large Language Model Operations

How LLMs are changing the way we build AI-powered products and the landscape of MLOps

Leonie Monigatti
12 min read · May 2, 2023
“Sorry, we can’t ship it like this. It’s just too large…” — Large language models (LLMs) in production (Image drawn by the author)

This article was originally published on Weights & Biases’ blog “Fully Connected” on April 21, 2023.

It feels like the release of OpenAI’s ChatGPT has opened Pandora’s box of large language models (LLMs) in production. Not only does your neighbor now bother you with small talk about Artificial Intelligence (AI), but the Machine Learning (ML) community is talking about yet another new term: “LLMOps”.

LLMs are changing the way we build and maintain AI-powered products. This shift will lead to new sets of tools and best practices for managing the lifecycle of LLM-powered applications.

This article will first explain the newly emerged term “LLMOps” and its background. Then, we will look at how building AI products with LLMs differs from building them with classical ML models. Based on these differences, we will compare MLOps and LLMOps. Finally, we will explore what developments we can expect in the LLMOps space in the near future.
