How to Set Up Token Usage Tracking in LangChain

Meta Heuristic
2 min read · Jun 16, 2023

LangChain, a powerful framework for building applications on top of language models, lets developers orchestrate complex Natural Language Processing (NLP) pipelines effectively. While crafting these pipelines, it’s crucial to keep track of your token usage for specific calls, especially when relying on paid APIs such as OpenAI’s GPT-3.

In this tutorial, we will dive into how to track token usage for your NLP calls using LangChain. Note that this tracking feature is currently implemented only for the OpenAI API.

Tracking Token Usage for a Single LLM Call

First, let’s consider a simple example of tracking token usage for a single Language Model call.

from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb)

The output will look like this:

Tokens Used: 42
Prompt Tokens: 4
Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084
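The reported cost is simple arithmetic over the token count. As a sanity check, here is a minimal sketch of that calculation, assuming text-davinci-002 was billed at $0.02 per 1,000 tokens at the time of writing (the rate is an assumption; check OpenAI's pricing page for current figures):

```python
# Reproduce the reported cost by hand.
# Assumed rate: $0.02 per 1,000 tokens for text-davinci-002 (mid-2023).
RATE_PER_1K_TOKENS = 0.02

def openai_cost(total_tokens: int, rate_per_1k: float = RATE_PER_1K_TOKENS) -> float:
    """Return the USD cost for a given number of tokens."""
    return round(total_tokens * rate_per_1k / 1000, 6)

# 42 tokens (4 prompt + 38 completion) at $0.02/1k:
print(openai_cost(42))  # 0.00084 -- matches Total Cost (USD) above
```

This is exactly the figure the callback printed, confirming that the cost field is derived from the token counters rather than measured separately.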

With the context manager, every call inside it gets tracked.

Tracking Multiple Calls in Sequence
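Because the callback aggregates everything that runs inside its with block, tracking several sequential calls is just a matter of making them inside one context. As a sketch, reusing the same imports as above (the prompts and the helper function name are illustrative; total_tokens, prompt_tokens, completion_tokens, and total_cost are the attributes exposed by LangChain's OpenAI callback handler):

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

def run_tracked(prompts):
    """Run several LLM calls inside one callback context and
    return the aggregated token counts and cost."""
    llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
    with get_openai_callback() as cb:
        for prompt in prompts:
            llm(prompt)  # each call adds to the same counters
        return cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost

if __name__ == "__main__":
    total, prompt, completion, cost = run_tracked(
        ["Tell me a joke", "Tell me a poem"]
    )
    print(f"Tokens Used: {total} ({prompt} prompt + {completion} completion)")
    print(f"Total Cost (USD): ${cost}")
```

The counters reset each time a new context is entered, so wrapping a whole chain or agent run in a single with block gives you the total usage for that run, while separate blocks let you attribute costs to individual stages.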

