Building an Intelligent Chatbot with OpenAI and Streamlit

Dhruv (Drew) Sawhney
5 min read · Feb 24, 2024


In the era of digital transformation, chatbots have become an indispensable tool for enhancing user engagement and providing instant support. Leveraging the power of OpenAI’s GPT models, I embarked on creating a chatbot that not only interacts with users but also ensures their privacy by filtering out sensitive information. This article delves into the step-by-step process of building this chatbot using Python, OpenAI’s API, and Streamlit for the web interface. Furthermore, I’ll guide you through generating an OpenAI API key, a crucial step in integrating OpenAI’s capabilities into your projects.

Setting the Stage with Streamlit

Streamlit, a fast and simple way to build web apps for your data projects, serves as the foundation of our chatbot. The initial code snippet sets up the Streamlit interface, welcoming users with a title:

import streamlit as st

st.title("WELCOME TO MY CHATBOT... 🧑‍💻💬 ")
"""
Note: This chatbot will give a warning if you put any sensitive information
"""

This creates a user-friendly interface with a welcoming message and a cautionary note about sharing sensitive information, ensuring users are aware of the chatbot’s privacy guidelines.

The Core: OpenAI Integration

To harness the power of GPT-3.5, we integrate OpenAI’s API into our chatbot. This requires installing the openai library and setting your OpenAI API key:

import openai

openai_api_key = 'Your Open AI Key Here'

Replace 'Your Open AI Key Here' with your actual OpenAI API key. This key authenticates your requests to OpenAI's services.

Filtering Sensitive Information

One of the unique features of this chatbot is its ability to filter out sensitive information from user inputs, using the following function:

def check_for_personal_info(prompt, openai_api_key):
    openai.api_key = openai_api_key
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[...]
        )
        ai_response = response['choices'][0]['message']['content'].strip()
        return ai_response
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

This function sends the user’s input to the GPT-3.5 model, asking it to classify whether the message contains personal information. It’s a safeguard to protect users’ privacy and ensure the chatbot’s responsible use.
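In practice, models sometimes decorate the classification ("1.", "Output: 1"), so comparing the raw reply against the exact string '1' can misfire. A small helper like the following (hypothetical, not part of the original code) normalizes the reply before acting on it:

```python
def interpret_classification(ai_response):
    """Map the model's '1'/'0' reply to a boolean, tolerating formatting noise.

    The prompt asks for a bare digit, but replies may carry punctuation or
    extra words, so we look for the digit rather than comparing the whole
    string. On an API error (None), we default to "no personal info" here;
    a stricter app might fail closed instead.
    """
    if ai_response is None:
        return False
    text = ai_response.strip()
    return "1" in text and "0" not in text
```

With this helper, the warning branch becomes `if interpret_classification(check_for_personal_info(prompt, openai_api_key)):`, which survives minor variations in the model's output.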

Conversational Flow Management

Streamlit’s session state is utilized to manage the conversational context, storing messages as the conversation progresses:

if "messages" not in st.session_state:
    st.session_state["messages"] = [{"role": "system", "content": "You are an AI assistant."}]

This allows the chatbot to maintain a continuous and context-aware dialogue with the user, enhancing the conversational experience.
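One caveat with this approach: the message list grows with every turn, and very long sessions can exceed the model's context window. A small safeguard like the following (purely illustrative, not part of the original app) caps the history before each API call while always preserving the system message:

```python
def trim_history(messages, max_turns=10):
    """Keep the system message plus only the most recent exchanges.

    `max_turns` counts individual user/assistant messages, not pairs;
    the cutoff is an assumption you would tune to your model's limits.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]
```

You would call `trim_history(st.session_state.messages)` just before passing the history to the API, leaving the full transcript in session state for display.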

Handling User Inputs and Responses

The main logic of the chatbot handles user inputs, checks for sensitive information, and generates responses using OpenAI’s model:

if prompt := st.chat_input():
    if check_for_personal_info(prompt, openai_api_key) == '1':
        st.warning("Warning: Please do not share personal information.")
    else:
        ...
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=st.session_state.messages)
        ...

After each user input, the chatbot checks for personal information. If none is found, it proceeds to generate a response, keeping the conversation flowing naturally.

Generating Your OpenAI API Key

To use OpenAI’s API, you’ll need an API key. Follow these steps to generate one:

  1. Sign Up or Log In: Visit OpenAI’s website and sign up or log in.
  2. Access API Keys: Navigate to the API section in your dashboard and select “API keys.”
  3. Create a New API Key: Click on “Create new API key,” name it for easy identification, and copy the generated key.

Keep your API key confidential and secure. Embedding it directly in your code, especially when sharing publicly, is not recommended. Instead, use environment variables or secure vaults to store and access your API keys.
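The environment-variable approach can be as simple as the sketch below. `OPENAI_API_KEY` is the variable name the official openai library also recognizes, but it is a convention, not a requirement:

```python
import os

def load_api_key():
    """Read the OpenAI key from the environment rather than hard-coding it.

    Run `export OPENAI_API_KEY=sk-...` (or set it in your deployment's
    secret store) before launching the app.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the app.")
    return key
```

In the chatbot, you would replace the hard-coded assignment with `openai_api_key = load_api_key()`; Streamlit users can alternatively keep the key in `.streamlit/secrets.toml` and read it via `st.secrets`.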

import openai
import streamlit as st


openai_api_key = 'Your Open AI Key Here'

st.title("WELCOME TO MY CHATBOT... 🧑‍💻💬 ")
"""
Note: This chatbot will give a warning if you put any sensitive information
"""

# Function to check for personal information using OpenAI's content filter
# Function to check for personal information using OpenAI's model
def check_for_personal_info(prompt, openai_api_key):
    openai.api_key = openai_api_key
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You're a highly intelligent assistant. Your task is to determine if the following message contains any personal information such as names, email addresses, phone numbers, or any other details that could be used to identify an individual. Classify it only as 1 or 0, where 1 means it contains personal information and 0 means it does not."},
                {"role": "user", "content": prompt}
            ]
        )

        # The model's reply is in the `choices` list; we expect a bare '1' or '0'.
        ai_response = response['choices'][0]['message']['content'].strip()
        return ai_response
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

if "messages" not in st.session_state:
    st.session_state["messages"] = [
        {"role": "system", "content": "You are an AI assistant."}
    ]

# Display previous messages
for message in st.session_state["messages"]:
    # Check the role to determine how to display the message
    if message["role"] == "user":
        st.chat_message("user").write(message["content"])
    elif message["role"] == "assistant":
        st.chat_message("assistant").write(message["content"])
    else:  # Default case, for system messages or any other role
        st.text(message["content"])

if prompt := st.chat_input():
    openai.api_key = openai_api_key
    if check_for_personal_info(prompt, openai_api_key) == '1':
        st.warning("Warning: Please do not share personal information.")
    else:
        st.session_state.messages.append({"role": "user", "content": prompt})

        # Display the user's message in the chat
        st.chat_message("user").write(prompt)

        response = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=st.session_state.messages)
        msg = response.choices[0].message
        st.session_state.messages.append({"role": "assistant", "content": msg.content})

        # Display the assistant's response immediately after getting it
        st.chat_message("assistant").write(msg.content)

if st.button('Clear Conversation'):
    st.session_state.messages = [
        {"role": "system", "content": "You are an AI assistant."}
    ]
    st.experimental_rerun()

Conclusion

Building a chatbot with OpenAI and Streamlit is a journey through cutting-edge technologies and best practices in AI ethics and privacy. By integrating OpenAI’s powerful GPT-3.5 model into a Streamlit web app, we’ve created a chatbot that’s not only intelligent but also respects user privacy. Whether you’re a developer, a data scientist, or just an AI enthusiast, this project offers a glimpse into the potential of AI to transform our digital interactions.

Remember, the key to leveraging OpenAI’s capabilities lies in responsible usage and safeguarding user data. As you embark on your own AI projects, keep exploring, learning, and innovating with these tools at your disposal.
