Building a Chatbot Application with Chainlit and LangChain

Tahreem Rasul
7 min read · Feb 10, 2024


In this article, we will develop an application interface for our custom chatbot, Scoopsie, using Chainlit, a framework that simplifies the creation of chatbot applications with a ChatGPT-like interface. This is a continuation of the series where I previously explored building a custom chatbot tailored to specific use-cases using LangChain and OpenAI. If you haven’t read the first article, I recommend starting there for a detailed step-by-step guide.

Application Frontend with Chainlit


By the end of this tutorial, you will learn how to build the application interface for your custom chatbot, similar to this:

Preview of the Completed Chatbot Application Interface using Chainlit

In the previous article, we laid the foundations for creating Scoopsie, an ice-cream assistant chatbot built to answer ice-cream related queries, using LangChain and OpenAI (see GitHub for code). Here’s how our chatbot.py looks from the last tutorial:

from langchain_openai import OpenAI
from langchain.chains import LLMChain
from prompts import ice_cream_assistant_prompt_template

from dotenv import load_dotenv

load_dotenv()

llm = OpenAI(model='gpt-3.5-turbo-instruct',
             temperature=0)
llm_chain = LLMChain(llm=llm, prompt=ice_cream_assistant_prompt_template)

def query_llm(question):
    print(llm_chain.invoke({'question': question})['text'])

if __name__ == '__main__':
    query_llm("Who are you?")

Currently, Scoopsie lacks the ability to remember past interactions and doesn’t have a user interface. In this article, we will be focusing on equipping our chatbot with memory for more contextual conversations and creating a web application interface using Chainlit.

Environment Setup

If you haven’t set up a conda environment for the project yet, you can go ahead and create one. Remember that Chainlit requires Python ≥ 3.8. You can skip this step if you previously created your environment.

conda create --name chatbot_langchain python=3.10

Activate your environment with:

conda activate chatbot_langchain

To install Chainlit along with any other dependencies, run:

pip install -r requirements.txt
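If you don’t already have a requirements.txt from the first tutorial, a minimal one for this project might look like this (the unpinned package names are my assumption; pin versions as you see fit):

chainlit
langchain
langchain-openai
openai
python-dotenv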

Understanding Chainlit

Chainlit is an open-source Python library designed to streamline the creation of production-ready chatbot applications. It focuses on managing user sessions and the events within each session, like message exchanges and user queries. In Chainlit, each time a user connects to the application, a new session is initiated. This session comprises a series of events managed through the library’s event-driven decorators. These decorators act as triggers to carry out specific actions based on user interactions.

The Chainlit application has decorators for several events (chat start, user message, session resume, session stop, etc.). For our chatbot, we’ll concentrate on writing code for two key events: starting a chat session and receiving user messages.
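To make this concrete, here is a minimal, self-contained sketch of that event-driven structure; the echo reply is only a placeholder, not part of Scoopsie:

import chainlit as cl

@cl.on_chat_start
async def start_session():
    # Runs once when a user opens a new chat session.
    await cl.Message(content="Session started. Ask me anything!").send()

@cl.on_message
async def echo(message: cl.Message):
    # Runs on every message the user sends during the session.
    await cl.Message(content=f"You said: {message.content}").send()

Saving this to a file and launching it with chainlit run gives you a working (if unhelpful) chat app; this is the skeleton we will now flesh out for Scoopsie.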

Initializing the Chat

The @cl.on_chat_start decorator is triggered when a new chat session is created. It calls a function that sets up the chat environment: initializing our model, creating an LLMChain object, and setting up any variables needed for operation, including the chat history (more on this later).

Processing Messages

The @cl.on_message decorator is used for processing incoming messages from users. It triggers an asynchronous function, which suits operations that may need to wait on external processes, such as model queries. The function retrieves the llm_chain object stored in the user session, processes the incoming message with it, and sends the response back to the application.

Step-by-Step Implementation

Step 1

Previously, our chatbot lacked context for any previous interactions. While this limitation can work in standalone question-answer applications, a conversational application typically requires the chatbot to have some understanding of the previous conversation. To overcome this limitation, we can create a memory object from one of LangChain’s memory modules, and add that to our chatbot code. LangChain offers several memory modules. The simplest is the ConversationBufferMemory, where we pass previous messages between the user and model in their raw form alongside the current query.

Let’s import the memory module inside the chatbot.py file:

from langchain.memory.buffer import ConversationBufferMemory

Next, define the memory object to add to the llm_chain object:

conversation_memory = ConversationBufferMemory(memory_key="chat_history",
                                               max_len=50,
                                               return_messages=True)

In the code above, memory_key defines the variable under which the chain stores the conversation history. By default, a chain and its prompt expect this input to be named history; passing a different value for the parameter lets us control the variable name.
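To see what the memory object actually stores, here is a small standalone sketch; the example exchange is hypothetical:

from langchain.memory.buffer import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history",
                                  return_messages=True)

# Record one user/model exchange; the keys mirror the chain's input and output.
memory.save_context({"question": "What goes well with mango ice cream?"},
                    {"text": "Toasted coconut is a classic pairing!"})

# The history comes back under our chosen key, "chat_history".
print(memory.load_memory_variables({}))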

We also need to add the chat_history variable to our prompt template:

from langchain.prompts import PromptTemplate

ice_cream_assistant_template = """
You are an ice cream assistant chatbot named "Scoopsie". Your expertise is
exclusively in providing information and advice about anything related to
ice creams. This includes flavor combinations, ice cream recipes, and general
ice cream-related queries. You do not provide information outside of this
scope. If a question is not about ice cream, respond with, "I specialize
only in ice cream related queries."
Chat History: {chat_history}
Question: {question}
Answer:"""

ice_cream_assistant_prompt_template = PromptTemplate(
    input_variables=["chat_history", "question"],
    template=ice_cream_assistant_template
)
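As a quick sanity check, we can render the template with placeholder values to confirm that both input variables are wired up correctly:

print(ice_cream_assistant_prompt_template.format(
    chat_history="",
    question="What flavors pair well with pistachio?"
))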

Step 2

Let’s now write some code to create our Chainlit application. As discussed at the beginning of the article, we need to write functions for two key Chainlit decorators. Let’s first import the library in our chatbot.py file:

import chainlit as cl

The first function wraps the chat initiation decorator: @cl.on_chat_start. It prepares our model, memory object, and llm_chain for user interaction. The chain object is stored in the user session under a name, which can later be used to retrieve that specific chain when the user sends a query.

@cl.on_chat_start
def initialize_chain():
    llm = OpenAI(model='gpt-3.5-turbo-instruct',
                 temperature=0)

    conversation_memory = ConversationBufferMemory(memory_key="chat_history",
                                                   max_len=50,
                                                   return_messages=True)
    llm_chain = LLMChain(llm=llm,
                         prompt=ice_cream_assistant_prompt_template,
                         memory=conversation_memory)

    # Store the chain in the user session so @cl.on_message can retrieve it.
    cl.user_session.set("llm_chain", llm_chain)

Next, we will define the message handling function with the @cl.on_message decorator. This function is responsible for processing user messages and generating responses:

@cl.on_message
async def query_llm(message: cl.Message):
    llm_chain = cl.user_session.get("llm_chain")

    response = await llm_chain.acall(message.content,
                                     callbacks=[
                                         cl.AsyncLangchainCallbackHandler()])

    await cl.Message(response["text"]).send()

In the code above, we retrieve the previously stored llm_chain object from the user session. This object holds the state and configuration needed to interact with the language model. The llm_chain.acall method is then called with the content of the incoming message. This method sends the message to the LLM, incorporating any necessary context from the conversation history, and awaits the response. Once the response from the LLM is received, it’s formatted into a cl.Message object and sent back to the user.

Step 3

With the application code ready, it’s time to launch our chatbot. Open a terminal in your project directory and run the following command:

chainlit run chatbot.py -w --port 8000

The -w flag tells the application to reload automatically whenever we change the application code. You can access the chatbot by navigating to http://localhost:8000 in your web browser.

Upon launching, you will get the default view of the application:

Default View of the Chatbot Application Upon Launch

Step 4

We can make changes to the welcome screen by modifying the chainlit.md file at the root of our project. If you do not want a welcome screen, you can leave this file empty. Let's go ahead and add a description relevant to our chatbot.

# 🍨 Welcome to Scoopsie! 🍦

Hi there! 👋 I am Scoopsie and I am built to assist you with all
your ice-cream related queries. You can begin by asking me anything
related to ice-creams in the chatbox below.

We can also change the app’s appearance, such as the theme colors, by editing the config.toml file, which you can find in the .chainlit folder inside the project. For now, I will just go with the defaults.
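For illustration, the theme overrides live in a section that looks something like this (treat it as a sketch, since the exact keys depend on your Chainlit version; check your generated file):

# Hypothetical excerpt from .chainlit/config.toml
[UI]
# Name shown in the app header.
name = "Scoopsie"

# Override the default light-theme primary colors (hex values are placeholders).
[UI.theme.light.primary]
main = "#F80061"

The app also supports toggling between light and dark mode directly from the frontend: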

How to Toggle Between Light and Dark Mode in Your Chainlit Application

Demo

Scoopsie’s application interface is now ready! Here is a demo showcasing the chatbot in action:

Scoopsie Chatbot Demo: Interactive Ice-Cream Assistant in Action

Next Steps

Our custom chatbot’s application interface is all set up. In the next tutorial, we will be focusing on integrating an external API with our chatbot, as this can be a useful feature in several enterprise-level applications.

You can find the code for this tutorial in this GitHub repo. The GitHub checkpoint for this tutorial will contain all developed code up until this point.

You can follow along as I share working demos, explanations and cool side projects on things in the AI space. Come say hi on LinkedIn! 👋
