LangChain simplified
Since ChatGPT made its debut on November 30, 2022, AI chatbots have revolutionized the world. Given the right prompts and context, they can now perform a wide variety of tasks with ease. This raises the question: “How can I leverage ChatGPT for my own applications?”
One exciting new framework in this space is LangChain. As its documentation puts it, “LangChain is a framework for developing applications powered by language models,” and it is not limited to just ChatGPT.
While it provides many benefits, in this article we’re going to look at the most basic and widely used one: the ability to create context-aware LLM applications. You can use LangChain to build a chatbot that is grounded in data specific to your application’s use case, so its responses are tailored to your domain.
Getting Started
You can find LangChain’s documentation at https://python.langchain.com/docs/get_started/introduction
Note: if you’re using Python on Windows, use ‘pip’; on Linux or macOS, use ‘pip3’.
pip install langchain
This will install the bare minimum requirements of LangChain. A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc. By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.
The LangChain CLI is useful for working with LangChain templates and other LangServe projects. Install with:
pip install langchain-cli
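For example, the CLI can scaffold a new LangServe project from a template (subcommands may vary between versions, so check langchain --help for your installed release; ‘my-app’ below is just a placeholder name):
# Create a new LangServe app in a folder called my-app
langchain app new my-app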
Now, the best way to inspect logs and traces for your LLM calls is LangSmith. Sign up for it at https://smith.langchain.com/, generate an API key, and then set these environment variables in a .env file:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
LangChain allows you to run several LLMs, including Llama, locally, but for this article we will use OpenAI’s SDK.
First, we’ll need to install the LangChain OpenAI integration package:
pip install langchain-openai
Next, create an OpenAI account and generate an API key here: https://platform.openai.com/account/api-keys
New accounts typically receive about $5 in free credits to try the OpenAI APIs.
Once we have a key, we set it as an environment variable in the same .env file:
export OPENAI_API_KEY="..."
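At this point, your complete .env file might look something like this (the values are placeholders for your own keys; python-dotenv reads lines with or without the export prefix):
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="your-langsmith-key"
export OPENAI_API_KEY="your-openai-key"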
Now create a file; call it llm.py for now.
We can then initialize the model:
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# Load OPENAI_API_KEY (and the LangSmith variables) from the .env file
load_dotenv()
llm = ChatOpenAI()
The llm variable is now essentially your client to OpenAI. At this point, you can ask it the same questions you would ask ChatGPT by using the invoke method:
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()
llm = ChatOpenAI()

# invoke sends a single prompt and returns a chat message
res = llm.invoke("Hi")
print(res.content)  # the text of the model's reply
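By default, ChatOpenAI picks OpenAI’s default chat model, but the constructor accepts parameters such as the model name and temperature. A minimal sketch (the model name here is just an example; substitute whichever model your account has access to):
# Pick a specific model and lower the temperature for more deterministic answers
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)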
Now, for your specific application, you may want to guide the model’s responses toward a particular use case. You can do that with a prompt template. The text passed to invoke is essentially our prompt to the LLM.
For example, let’s say we want our LLM to tell us about the macronutrients in a food item that our user enters. We would write it like this:
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

load_dotenv()
llm = ChatOpenAI()

# The system message sets the LLM's role; {macro_nutrient_list} is filled in at invoke time
system_template = """You are a dietician. Tell me the average macronutrient values for {macro_nutrient_list} for the following dish.
If possible, include both the absolute values and the percentage of daily recommended intake."""

# The user message carries the actual question
human_template = "{text}"

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", system_template),
    ("user", human_template),
])

# Pipe the prompt into the model to form a chain
chain = chat_prompt | llm

res = chain.invoke({
    "macro_nutrient_list": "protein, carbohydrates, fats, sugar",
    "text": "What are the nutrients in a sweet potato?",
})
print(res.content)
Running this prints the model’s macronutrient breakdown for a sweet potato.
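Because the dish comes in through the {text} variable, the same chain can be reused for any food the user enters, for example:
# Reuse the chain with a different dish; only the inputs change
res = chain.invoke({
    "macro_nutrient_list": "protein, carbohydrates, fats, sugar",
    "text": "What are the nutrients in a banana?",
})
print(res.content)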
However, it is often more convenient to work with a plain string rather than the message object returned by the LLM. So we add an output parser to our chain:
from langchain_core.output_parsers import StrOutputParser

# StrOutputParser extracts the string content from the chat message
output_parser = StrOutputParser()
chain = chat_prompt | llm | output_parser
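With the parser in place, invoke now returns the string directly, so there is no .content to unwrap:
res = chain.invoke({
    "macro_nutrient_list": "protein, carbohydrates, fats, sugar",
    "text": "What are the nutrients in a sweet potato?",
})
print(res)  # already a plain string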
That’s all you need to get started with LangChain. Congratulations, you have your first personal chatbot! In the next blog post, we will look at how to use your own data to tailor the LLM to your application’s specific needs, as well as what a LangChain project architecture should look like.