LangSmith 101
Build production-grade LLM applications using LangSmith
LangSmith, by LangChain, is a platform that streamlines LLM application development with debugging, testing, evaluation, and monitoring. This article covers its key components with code snippets and examples.
Building an application powered by a large language model (LLM) differs from building a conventional advanced analytics application. Because training an LLM from scratch is expensive, the first choice in LLM applications is usually a pre-built model, which shifts the developer's focus to crafting effective prompts.
However, prompts deal with text flowing in and out, not numbers, so the usual error and accuracy metrics don't apply. And manually reading every input and output for evaluation could take days or months when there are thousands of prompts. You need a workflow that creates and tests prompts efficiently, showing how well your LLM application performs without drowning in manual checks. This is where LangSmith comes into the picture. Here are its main features:
- Fast Debugging: Easily fix issues in new chains, agents, or tools for better performance.
- Visualize Components: See how your app's chains, LLMs, and retrievers work.
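Before any of these features can be used, tracing has to be wired into the application. As a minimal sketch (assuming the standard LangSmith environment variables; the project name is a placeholder), enabling tracing typically looks like this, set before any chain or agent runs:

```python
import os

# Assumed LangSmith configuration via environment variables:
# LANGCHAIN_TRACING_V2 turns on tracing for LangChain runs,
# LANGCHAIN_API_KEY authenticates with LangSmith, and
# LANGCHAIN_PROJECT groups traces under a named project.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"  # hypothetical project name
```

With these variables set, subsequent LangChain calls in the same process are traced automatically, and the runs appear in the LangSmith UI under the chosen project.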