Web ChatGPT chatbot using Langchain and Streamlit

Petar Joncheski
11 min read · Aug 9, 2024


As you might have assumed, the image was generated with the help of ChatGPT 🙂

Welcome to the third part of our blog series on building and enhancing a ChatGPT chatbot with Langchain. If you haven’t read the first and second blog posts, make sure to check them out before proceeding with this one, as they lay the foundation for the chatbot we’re extending here.

In this tutorial, we will add a graphical user interface (GUI) to our chatbot using Streamlit. This enhancement will make our chatbot more user-friendly and visually appealing.

Disclaimer: This blog post has been enhanced with the assistance of AI. The code, text structure, and technical explanations are entirely my own, with AI helping to expand and refine the content.

Responsible use of AI for Productivity

I’m a strong believer in responsibly embracing AI-assisted technology to boost productivity. As a husband and a parent, my life is a 24/7 commitment that never stops. I’m also passionate about staying active, enjoying activities like hiking, running, boxing, skiing, riding my motorcycle, learning and trying out new technologies, sharing my knowledge through blog posts and tutorials, tackling DIY projects around the house, and, of course, playing hide and seek with my daughter!

These pursuits keep my schedule packed, and without the assistance of AI to enhance and enrich the quality of my blog posts, it would be challenging to share my knowledge as frequently as I’d like.

I value honesty and transparency, which is why I want to clearly explain how and why I use AI in creating the content for this blog post.

Now, let’s roll up our sleeves and dive into the content of this blog post. Let’s go!

What is Streamlit?

Streamlit is an open-source app framework that makes it easy to create and share beautiful, custom web apps for machine learning and data science. It’s particularly useful for AI applications because it allows developers to quickly build interactive interfaces without requiring extensive front-end development skills.

Installing Dependencies

Before we dive into the code, we need to ensure that all the necessary dependencies are installed. Here are the steps to do so:

1. Install the necessary packages:

pip install langchain-openai streamlit streamlit_chat python-dotenv

2. Ensure you have your OpenAI API key stored in a .env file:

OPENAI_API_KEY=your_openai_api_key_here

If the steps above don’t work for you, please go back to the first blog post, which has detailed steps on how to install pip, get an OpenAI API key, and install the dependencies.

Creating the script

Let’s first create the web_chatgbt_chatbot_with_memory.py Python file by right-clicking the root directory -> New -> Python File.

Name the file “web_chatgbt_chatbot_with_memory”. The file should have “.py” at the end of the filename.

Add the following line in the script

print("Hello!")

Run the script in PyCharm by right-clicking the Python file containing your code and selecting “Run ‘web_chatgbt_chatbot_with_memory.py’”.

You can also run the script from the Terminal: open the Terminal, navigate to the directory containing web_chatgbt_chatbot_with_memory.py, and run the following command: python web_chatgbt_chatbot_with_memory.py

Note: Running the app via the Terminal requires Python to be installed on your machine. Installing Python is outside the scope of this tutorial, and there are plenty of resources available online.

You should see the output “Hello!” in the Terminal window. Great job!

Importing the Necessary Libraries

To start, we need to import several new libraries. Each of these imports serves a specific purpose in our chatbot application:

from langchain_openai import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

import streamlit as st
from streamlit_chat import message

from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv(), override=True)

from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import FileChatMessageHistory

from langchain.globals import set_verbose

Deep dive into the imports

from langchain_openai import ChatOpenAI: This import brings in the ChatOpenAI class, which allows us to create a chat model using OpenAI's API. It’s the core component that powers the chatbot's conversational abilities.

from langchain.schema import SystemMessage, HumanMessage, AIMessage: These imports provide message schemas for structuring different types of messages:

  • SystemMessage: Used for system-level messages that set the context or provide instructions.
  • HumanMessage: Represents messages from the user.
  • AIMessage: Represents responses from the AI.

import streamlit as st: This import brings in Streamlit, the framework we use to create the web interface for our chatbot.

from streamlit_chat import message: This import allows us to use the message function to display chat messages in a user-friendly format within the Streamlit app.

from dotenv import load_dotenv, find_dotenv: These functions help us load environment variables from a .env file, ensuring our API keys and other sensitive information are securely managed.

from langchain.memory import ConversationBufferMemory: This import allows us to manage chat history in memory, enabling the chatbot to remember past interactions.

from langchain_community.chat_message_histories import FileChatMessageHistory: This import provides functionality to persist chat history to a file, ensuring conversations are stored across sessions.

from langchain.globals import set_verbose: This function controls the logging verbosity, which is useful for debugging.

Loading the environment variables using load_dotenv(find_dotenv(), override=True) ensures that our OpenAI API key and other settings are correctly configured, much like checking that we have packed our passport and tickets for the journey.
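
For intuition, a .env loader essentially parses KEY=value lines into os.environ. Here is a minimal pure-Python sketch of the idea (this is not python-dotenv’s actual implementation, and DEMO_API_KEY is a made-up key used only for illustration):

```python
import os

def load_env_file(path):
    # Parse simple KEY=value lines, skipping blanks and comments.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault keeps an existing variable; load_dotenv(override=True)
            # would overwrite it instead.
            os.environ.setdefault(key.strip(), value.strip())

# Write a demo file and load it:
with open("demo.env", "w") as f:
    f.write("# demo settings\nDEMO_API_KEY=sk-demo-123\n")
load_env_file("demo.env")
print(os.environ["DEMO_API_KEY"])  # sk-demo-123
```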

Run the script to make sure it compiles and all the dependencies are installed correctly.

Setting Up Memory Management: Remembering the Conversations

As we travel further, we realize the importance of remembering where we’ve been. Our chatbot needs to remember past conversations to provide relevant responses. Let’s break this down step-by-step.

Enabling Verbose Logging (Optional)

First, we enable verbose logging to help us debug more effectively. This is like turning on the lights in a workshop to see clearly what we’re doing.

set_verbose(True)

Calling set_verbose(True) allows us to see detailed logs of what’s happening behind the scenes. This is especially useful when debugging issues or trying to understand the flow of data through our chatbot.

Creating a File to save the Chat History

Next, we create an instance of FileChatMessageHistory to manage chat history, saving it to a file named .chat_history.json. This file stores all past interactions, ensuring that the chatbot retains context across sessions.

history = FileChatMessageHistory('.chat_history.json')

Here, FileChatMessageHistory('.chat_history.json') creates a file named .chat_history.json in the current directory. If the file already exists, it won’t be recreated; instead, the previous chat history will be loaded from it. This file acts as our chatbot’s diary, recording all conversations so that the bot can refer back to previous interactions.
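
To build intuition for what FileChatMessageHistory does under the hood, here is a minimal pure-Python sketch of a file-backed history (illustrative only; this is not LangChain’s actual implementation):

```python
import json
import os

class SimpleFileHistory:
    """A toy file-backed message history: every message is written straight to disk."""

    def __init__(self, path):
        self.path = path

    @property
    def messages(self):
        # Load previously saved messages, or start fresh if the file doesn't exist yet.
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return []

    def add_message(self, role, content):
        # Append the new message and write the whole list back to the file.
        msgs = self.messages
        msgs.append({"role": role, "content": content})
        with open(self.path, "w") as f:
            json.dump(msgs, f)

# Start from a clean slate for the demo:
if os.path.exists("demo_history.json"):
    os.remove("demo_history.json")

history_demo = SimpleFileHistory("demo_history.json")
history_demo.add_message("human", "Hola!")
history_demo.add_message("ai", "Hola! Como estas?")

# A brand-new instance pointed at the same file sees the old conversation:
restored = SimpleFileHistory("demo_history.json")
print(len(restored.messages))  # 2
```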

Setting Up Conversation Buffer Memory

Finally, we configure the memory management system using ConversationBufferMemory. This ensures that our chatbot can store and retrieve messages efficiently.

memory: ConversationBufferMemory = ConversationBufferMemory(
    memory_key='chat_history',
    chat_memory=history,
    return_messages=True
)

The ConversationBufferMemory instance uses the history object for storing and retrieving chat messages. The memory_key parameter identifies this memory instance, and return_messages=True ensures that messages are returned in their original format, allowing our chatbot to access and utilize past conversations seamlessly.

Configuring Streamlit Page: Setting the Scene

Now, we need to set the scene for our chatbot’s grand performance. We configure the Streamlit page settings to enhance the user interface. We set the page title and icon using st.set_page_config(), and add a subheader to the page with st.subheader().

st.set_page_config(
    page_title='Your Custom Assistant',
    page_icon='👽'
)
st.subheader('Your Custom ChatGPT')

This configuration is akin to setting up the stage with a catchy title and a welcoming banner, making sure everything looks professional and inviting.

Let’s run the app to make sure what we have so far is working properly. Ensure you have followed the steps from the previous blog posts and installed all the necessary dependencies. Execute the script in the terminal with:

streamlit run web_chatgbt_chatbot_with_memory.py

After you execute the line above in the console, you should also see the Local URL and Network URL.

If you open the Local URL (in my case, http://localhost:8501) in your web browser, you should now see a simple web interface like the one in the image below.

Hooray! It may not look like much, but we have only just started. We are not done!

Initializing the Chat Model: The Brain of Our Chatbot

Now, we need to wake up our chatbot’s brain. We initialize the ChatGPT model using the ChatOpenAI class, setting parameters such as the model name and temperature to control the response variability.

chat = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.5)

The temperature parameter controls the creativity of the responses, much like setting the tone for a conversation. We will use 0.5 for this tutorial, but feel free to adjust it and experiment with different temperatures.
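
To see why temperature affects variability, here is a rough sketch of the underlying idea: temperature rescales the model’s token scores (logits) before they are turned into probabilities, so low values sharpen the distribution (more deterministic) and high values flatten it (more creative). This is a simplification of what happens inside the model, for illustration only:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, 0.2)   # sharply peaked: the top token dominates
high = softmax_with_temperature(logits, 2.0)  # much flatter: more variety when sampling
print(round(max(low), 3), round(max(high), 3))
```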

Managing Session State: Keeping Track of Conversations

To ensure our chatbot can handle ongoing conversations seamlessly, we manage the session state effectively. We check if the messages list exists in st.session_state. If it does not, we initialize it with messages from the memory.

if 'messages' not in st.session_state:
    st.session_state.messages = memory.chat_memory.messages

This ensures that the chat history is preserved across different interactions, much like a stage manager keeping track of the script during a performance.
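
Streamlit re-runs the whole script on every interaction, which is why the “initialize only if missing” guard matters. Here is a small stand-alone sketch of the pattern, using a plain dict in place of st.session_state (illustrative only):

```python
# A plain dict standing in for st.session_state (which behaves much like one):
session_state = {}

def init_messages(state, saved_messages):
    # Initialize the list only once, so later reruns don't wipe the conversation.
    if 'messages' not in state:
        state['messages'] = list(saved_messages)
    return state['messages']

saved = ["system: Answer in Spanish", "human: Hola"]

# First "run" of the script: messages are seeded from the saved history.
msgs = init_messages(session_state, saved)
msgs.append("ai: Hola!")

# Second "run" (simulating a Streamlit rerun): the appended message survives.
msgs_again = init_messages(session_state, saved)
print(len(msgs_again))  # 3
```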

Sidebar Input: Engaging with the Audience

The sidebar is where our users interact with the chatbot. We use Streamlit’s sidebar to capture these inputs. The system message input sets the context for the chatbot, while the user prompt captures the user’s question or command.

Disclaimer 🙂: I’m the type of person who likes to see the entire code up front, try to understand it, and play with it, so I have structured the blog post this way. If you are not like me 🙂, feel free to skip the full code snippets and jump straight to the detailed breakdown of the code.

# Restore a previously saved system message (if any) from the loaded history,
# so the "System role" field can be pre-populated on restart:
system_message = next(
    (msg for msg in st.session_state.messages if isinstance(msg, SystemMessage)),
    None
)

with st.sidebar:
    system_message_input = st.text_input(label="System role", value=system_message.content if system_message else "")
    user_prompt = st.text_input(label="Send a message")
    if not system_message and system_message_input:
        system_message = SystemMessage(content=system_message_input)
        st.session_state.messages.append(system_message)
        memory.chat_memory.add_message(system_message)
    if user_prompt:
        st.session_state.messages.append(
            HumanMessage(content=user_prompt)
        )
        memory.chat_memory.add_message(HumanMessage(content=user_prompt))
        with st.spinner('Working on your request'):
            response = chat.invoke(st.session_state.messages)
            st.session_state.messages.append(AIMessage(content=response.content))
            memory.chat_memory.add_message(AIMessage(content=response.content))

Let’s break this down further:

System Role Input

This input sets the context for the chatbot. If there isn’t already a system message, and the user provides one, it gets added to the session state and memory.

system_message_input = st.text_input(label="System role", value=system_message.content if system_message else "")

If a system message is restored from memory, it will be pre-populated in the input field by this part of the code: value=system_message.content if system_message else ""

User Prompt Input

This captures the user’s question or command. When the user submits a prompt, it’s appended to the session state and memory.

user_prompt = st.text_input(label="Send a message")

Processing the User Input

If the user provides a prompt, we display a spinner while the chatbot processes the input. The response is then appended to the session state and memory.

if user_prompt:
    st.session_state.messages.append(
        HumanMessage(content=user_prompt)
    )
    memory.chat_memory.add_message(HumanMessage(content=user_prompt))
    with st.spinner('Working on your request'):
        response = chat.invoke(st.session_state.messages)
        st.session_state.messages.append(AIMessage(content=response.content))
        memory.chat_memory.add_message(AIMessage(content=response.content))

Here’s a detailed explanation of each line in the if user_prompt: block:

  1. Appending the User’s Message:

st.session_state.messages.append(HumanMessage(content=user_prompt))

This line takes the user’s input (captured in user_prompt) and creates a HumanMessage object. It then appends this message to the messages list in the session state. This keeps a record of what the user has said in this chat session.

2. Adding the User’s Message to Memory:

memory.chat_memory.add_message(HumanMessage(content=user_prompt))

After adding the message to the session state, this line adds the same HumanMessage object to the memory managed by ConversationBufferMemory. This ensures the message is stored persistently and can be accessed in future sessions.

3. Showing a Spinner While Processing:

with st.spinner('Working on your request'):

This line starts a spinner (a loading indicator) to show that the chatbot is working on the user’s request. It provides feedback to the user that their input is being processed.

4. Generating the AI’s Response:

response = chat.invoke(st.session_state.messages)

This line sends the list of messages (including the new user message) to the ChatOpenAI model to generate a response. The result is stored in the response variable.

5. Appending the AI’s Response to the Session State:

st.session_state.messages.append(AIMessage(content=response.content))

This line takes the content of the response generated by the AI and creates an AIMessage object. It then appends this message to the messages list in the session state, keeping a record of the AI’s response.

6. Adding the AI’s Response to Memory:

memory.chat_memory.add_message(AIMessage(content=response.content))

Finally, this line adds the AIMessage object to the memory managed by ConversationBufferMemory. This ensures the AI’s response is stored persistently and can be accessed in future sessions.

Displaying Messages

Finally, we display the chat messages in the main interface. We iterate through the messages in the session state, distinguishing between human and AI messages to display them appropriately.

if len(st.session_state.messages) > 0:
    for i, msg in enumerate(st.session_state.messages[1:]):
        if isinstance(msg, HumanMessage):
            message(msg.content, key=f'{i}+ HM', is_user=True)
        if isinstance(msg, AIMessage):
            message(msg.content, key=f'{i}+ AIM', is_user=False)

Here’s a deeper look at this part:

  • Iterating Through Messages: We loop through the messages in the session state, starting from the second message (index 1) to skip the initial system message.

    for i, msg in enumerate(st.session_state.messages[1:]):

  • Displaying Human Messages: For each human message, we use the message function from streamlit_chat to display it, marking it as a user message.

    if isinstance(msg, HumanMessage):
        message(msg.content, key=f'{i}+ HM', is_user=True)

  • Displaying AI Messages: For each AI message, we also use the message function but mark it as not a user message.

    if isinstance(msg, AIMessage):
        message(msg.content, key=f'{i}+ AIM', is_user=False)

Using the message function from streamlit_chat, we ensure that each message is displayed with the correct styling, indicating whether it was sent by the user or the AI. This distinction helps users easily follow the conversation and understand who said what.
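
The isinstance-based dispatch can be tested in isolation. Here is a tiny sketch with stand-in message classes (not the real LangChain classes) that mirrors the rendering loop’s logic:

```python
class FakeHumanMessage:
    def __init__(self, content):
        self.content = content

class FakeAIMessage:
    def __init__(self, content):
        self.content = content

def render(messages):
    # Mirror the display loop: tag each message by who sent it.
    rendered = []
    for i, msg in enumerate(messages):
        if isinstance(msg, FakeHumanMessage):
            rendered.append(f"user[{i}]: {msg.content}")
        elif isinstance(msg, FakeAIMessage):
            rendered.append(f"bot[{i}]: {msg.content}")
    return rendered

lines = render([
    FakeHumanMessage("Tell me about Germany"),
    FakeAIMessage("Alemania es un pais de Europa..."),
])
print(lines[0])  # user[0]: Tell me about Germany
```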

Running the App: Showtime

At this point, we’ve set up our chatbot’s memory management, configured the Streamlit page, initialized the chat model, managed session state, added sidebar input, and set up message display. Now, it’s time to see our chatbot in action. Let’s run the app using the same command as previously:

streamlit run web_chatgbt_chatbot_with_memory.py

You should now see a web interface for interacting with your ChatGPT bot. The interface allows you to set the system role, send messages, and view the conversation history.

Time to learn Spanish with our chatbot 🇪🇸

All right! Ready to learn some Spanish? 🇪🇸 To get answers in Spanish, let’s enter “Answer in Spanish” in the “System role” input field.

Let’s ask the chatbot to tell us about Germany by entering “Tell me about Germany” in the “Send a message” input field. You should get a summary about Germany, answered in Spanish, similar to the output shown in the image below.

Awesome! Great job!

Let’s ask another question. Since the chatbot has the context of the previous messages, we can ask a follow-up question on the same topic, like “What is the capital?”. You should get a new chat message in Spanish answering that Berlin is the capital of Germany, similar to the output shown in the image below:

If you stop the app and start it again, you should see the “System role” input and all of your messages pre-populated in the chat.

Great job! Continue experimenting with different system roles and messages and see what results you get.

Conclusion

In this tutorial, we’ve successfully added a graphical user interface (GUI) to our ChatGPT bot using Streamlit. This enhancement makes the bot more user-friendly and visually appealing. With the new interface, users can interact with the chatbot more intuitively, and the bot can remember past interactions, making the conversation more coherent and engaging.

You can find the full code by following this link.

If you think this article was helpful, be sure to give it 👏👏👏. Also, you can follow me on Medium for more updates!
