Create a custom AI chatbot powered by the OpenAI ChatGPT model

Khalid IBNELBACHYR
Published in Crayon Data & AI · 7 min read · Jun 3, 2023

ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. Since its public release in November 2022, it has been applied to a wide range of use cases.

Additionally, since the general availability of Azure OpenAI Service, the same models have been available as a managed service on Microsoft Azure, benefiting from its scalability, flexibility, security, and built-in responsible AI.

In this article, I am going to show how the Azure OpenAI API can be embedded in a web app that simulates a chatbot.

The app is written in Python with the Streamlit framework and lets users have interactive conversations with an AI assistant named Khalid.

import os
import streamlit as st
import openai

def authenticate(username, password):
    if username == 'khalid' and password == 'your_password123@!':
        return True
    else:
        return False

# Setting page title, header, and sidebar
st.set_page_config(page_title="Custom ChatGPT", page_icon="💬")
st.markdown("<h1 style='text-align: center;'>Khalid - a custom ChatGPT 😬</h1>", unsafe_allow_html=True)
st.markdown("Welcome to the custom ChatGPT app. Please log in to continue.")

# Configure Azure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv("OPENAI_API_BASE")
openai.api_version = "2023-03-15-preview"
openai.api_key = os.getenv("OPENAI_API_KEY")

# Initialize session state variables
if 'is_logged_in' not in st.session_state:
    st.session_state['is_logged_in'] = False
if 'generated' not in st.session_state:
    st.session_state['generated'] = []
if 'past' not in st.session_state:
    st.session_state['past'] = []
if 'messages' not in st.session_state:
    st.session_state['messages'] = [
        {"role": "system", "content": "You are Khalid, a helpful assistant."}
    ]
if 'model_name' not in st.session_state:
    st.session_state['model_name'] = []

# Sidebar - let user choose model, show total cost of current conversation, and let user clear the current conversation
st.sidebar.title("Your Custom ChatGPT")

username = st.sidebar.text_input("Username")
password = st.sidebar.text_input("Password", type="password")

if st.sidebar.button("Login"):
    if authenticate(username, password):
        st.session_state['is_logged_in'] = True
        st.sidebar.success("Logged in successfully")

model_name = st.sidebar.radio("Choose a model:", ("GPT-3.5", "GPT-4"))
counter_placeholder = st.sidebar.empty()
clear_button = st.sidebar.button("Clear Conversation", key="clear")

# Map model names to OpenAI model IDs
if model_name == "GPT-3.5":
    model = "gpt-35-turbo"
else:
    model = "gpt-4"

# Reset everything
if clear_button:
    st.session_state['generated'] = []
    st.session_state['past'] = []
    st.session_state['messages'] = [
        {"role": "system", "content": "You are Khalid, a helpful assistant."}
    ]
    st.session_state['model_name'] = []

# Generate a response
def generate_response(prompt):
    st.session_state['messages'].append({"role": "user", "content": prompt})

    response = openai.ChatCompletion.create(
        engine="chatgptapp",
        messages=st.session_state['messages'],
        temperature=0.7,
        max_tokens=800,
        top_p=0.95,
        frequency_penalty=0,
        presence_penalty=0,
        stop=None)

    reply = response.choices[0].message.content
    st.session_state['messages'].append({"role": "assistant", "content": reply})

    return reply

if st.session_state['is_logged_in']:
    user_input = st.text_area("You:")
    if user_input:
        output = generate_response(user_input)
        st.session_state['past'].append(user_input)
        st.session_state['generated'].append(output)
        st.session_state['model_name'].append(model_name)

    if st.session_state['generated']:
        for i in range(len(st.session_state['generated'])):
            st.text(f"You: {st.session_state['past'][i]}")
            st.text(f"Khalid: {st.session_state['generated'][i]}")
            st.write(f"Model used: {st.session_state['model_name'][i]}")

Here’s a breakdown of the code:

  1. The authenticate function takes a username and password as input and checks if they match a predefined username and password. If the authentication is successful, it returns True; otherwise, it returns False.
  2. The app’s UI is configured using Streamlit. The page title, header, and sidebar are set, and a login form is displayed.
  3. The Azure OpenAI API is configured by setting the API type, base URL, version, and API key.
  4. Session state variables are initialized to store the conversation history, generated responses, and other information.
  5. The sidebar allows users to log in by entering their username and password. Upon successful login, the session state is updated, and a success message is displayed.
  6. Users can choose the model they want to use (GPT-3.5 or GPT-4) from the sidebar. The model name is mapped to the corresponding OpenAI model ID.
  7. The sidebar also provides options to clear the conversation history.
  8. The generate_response function generates a response from the AI assistant. It appends the user's input to the conversation history and makes an API call to the OpenAI API using the openai.ChatCompletion.create method. The generated response is appended to the conversation history as well.
  9. If the user is logged in, a text area is displayed where they can enter their input. Upon submitting the input, a response is generated using the generate_response function, and the conversation history is updated accordingly.
  10. If there are generated responses, the conversation history is displayed, showing the user’s input, Khalid’s response, and the model used for each interaction.

Overall, this app allows users to have conversational interactions with the AI assistant Khalid, powered by the OpenAI ChatGPT model.
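
To try the app locally before containerizing it, here is a minimal sketch, assuming the code above is saved as app.py and that your own Azure OpenAI endpoint and key replace the placeholders:

export OPENAI_API_BASE="https://<your-resource>.openai.azure.com/"
export OPENAI_API_KEY="<your-api-key>"
pip install streamlit==1.22.0 openai==0.27.7
streamlit run app.py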

Deploying the custom ChatGPT

Technical requirements:

The app has a few dependencies, which I list in a requirements.txt file:

azure-identity
streamlit==1.22.0
streamlit-chat
openai==0.27.7

I use Docker to encapsulate the entire application stack, including dependencies, services, and configurations.

We can start writing our Dockerfile:

FROM python:3.9-slim

# Upgrade pip and install requirements
COPY requirements.txt requirements.txt
RUN pip install -U pip
RUN pip install -r requirements.txt

# Copy app code and set working directory
WORKDIR /app
COPY . .

# OPENAI_API_KEY and OPENAI_API_BASE are provided at runtime via the Docker Compose file below

EXPOSE 8501

# Run
ENTRYPOINT streamlit run app.py --server.port=8501 --server.address=0.0.0.0

I won’t hard-code the API secrets into the Dockerfile; instead, I will pass them at runtime through the Docker Compose YAML file.

Let’s build the Docker image:

docker build -t chatgptapp .
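
Before pushing anything to Azure, the image can be tested locally. This is a sketch; the endpoint and key placeholders must be replaced with your own Azure OpenAI values:

docker run -p 8501:8501 \
  -e OPENAI_API_BASE="https://<your-resource>.openai.azure.com/" \
  -e OPENAI_API_KEY="<your-api-key>" \
  chatgptapp

The app should then be reachable at http://localhost:8501.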

I need to create a resource group in Azure to keep all the related resources together. Create it with the Azure CLI:

az group create --name rg-chatgptapp --location francecentral

I also need to create an Azure OpenAI resource and deploy a model:

  1. Navigate to the create page: Azure OpenAI Service Create Page
  2. On the Create page, provide the subscription, resource group, region, a unique resource name, and the pricing tier (a CLI alternative is sketched below).
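
If you prefer the command line, the same resource can be created with the Azure CLI. This is a sketch; the resource name aoai-chatgptapp is a hypothetical name I reuse in later sketches, and Azure OpenAI must be available in the chosen region:

az cognitiveservices account create \
  --name aoai-chatgptapp \
  --resource-group rg-chatgptapp \
  --location francecentral \
  --kind OpenAI \
  --sku S0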

Before I can generate text or run inference, I need to deploy a model. To do so, I follow these steps (a CLI alternative is sketched after the list):

  1. Sign in to Azure OpenAI Studio.
  2. Select the subscription and OpenAI resource to work with.
  3. In your resource, select Manage deployments > Go to Deployments under Manage your deployments and models (you might first need to scroll down on the landing page).
  4. Select Create new deployment from the Management > Deployments page.
  5. Select a model from the drop-down. Some models are not available in all regions; for a list of available models per region, see the model summary table and region availability.
  6. Enter a deployment name to help you identify the model. Choose the name carefully: it is the name you will pass to the OpenAI client libraries and API. In this app, the deployment is named chatgptapp, matching the engine parameter in generate_response.
  7. Select Create to deploy the model.
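
The deployment can also be created from the CLI. The sketch below makes some assumptions: it reuses the hypothetical resource name aoai-chatgptapp, and the exact flags and available model versions depend on your Azure CLI version and region, so adjust accordingly. The deployment name must match the engine value used in the app (chatgptapp):

az cognitiveservices account deployment create \
  --name aoai-chatgptapp \
  --resource-group rg-chatgptapp \
  --deployment-name chatgptapp \
  --model-name gpt-35-turbo \
  --model-version "0301" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1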

Get the API endpoint and API key from the Keys and Endpoint page of the Azure OpenAI resource in the Azure portal:
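
They can also be retrieved with the Azure CLI; this sketch again assumes the hypothetical resource name aoai-chatgptapp:

az cognitiveservices account show --name aoai-chatgptapp --resource-group rg-chatgptapp --query properties.endpoint -o tsv
az cognitiveservices account keys list --name aoai-chatgptapp --resource-group rg-chatgptapp --query key1 -o tsv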

Then I need to create a Container Registry with Azure CLI:

az acr create --resource-group rg-chatgptapp --name acrchatgptapp --sku Basic

To push the Docker image to the Container Registry (ACR), I first need to log in, tag the image with the registry's address, and push it:

az acr login --name acrchatgptapp
docker tag chatgptapp acrchatgptapp.azurecr.io/chatgptapp
docker push acrchatgptapp.azurecr.io/chatgptapp

Now I will write the Docker Compose file that I will use to deploy and orchestrate the app (for now, a single container):

services:
  streamlit:
    image: *******.azurecr.io/chatgptapp
    environment:
      OPENAI_API_BASE: https://******.openai.azure.com/
      OPENAI_API_KEY: 0fea************9fba
    ports:
      - '80:8501'
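
To keep even redacted secrets out of the Compose file when running locally with docker compose up, Compose variable substitution is an option. This sketch assumes the registry name used earlier and that OPENAI_API_BASE and OPENAI_API_KEY are exported in the shell or defined in a local .env file next to docker-compose.yml:

services:
  streamlit:
    image: acrchatgptapp.azurecr.io/chatgptapp
    environment:
      OPENAI_API_BASE: ${OPENAI_API_BASE}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    ports:
      - '80:8501'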

Create an App Service Plan with Azure CLI:

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

az appservice plan create --name plan-chatgptapp --resource-group rg-chatgptapp --sku S1 --is-linux

Create a Web App with Azure CLI:

az webapp create --resource-group rg-chatgptapp --plan plan-chatgptapp --name chatgptapp --multicontainer-config-type compose --multicontainer-config-file docker-compose.yml

Configure the Web App to pull the image from the ACR.

(I used the ACR admin credentials, but you could use a managed identity as well; a CLI sketch follows.)
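
Here is one possible CLI sketch for the admin-credentials route. It assumes the registry and web app names used earlier, and it sets the DOCKER_REGISTRY_SERVER_* application settings that App Service uses to authenticate against a private registry:

# Enable the admin user on the registry and read its credentials
az acr update --name acrchatgptapp --admin-enabled true
ACR_USER=$(az acr credential show --name acrchatgptapp --query username -o tsv)
ACR_PASS=$(az acr credential show --name acrchatgptapp --query "passwords[0].value" -o tsv)

# Point the web app at the registry
az webapp config appsettings set --resource-group rg-chatgptapp --name chatgptapp \
  --settings DOCKER_REGISTRY_SERVER_URL=https://acrchatgptapp.azurecr.io \
             DOCKER_REGISTRY_SERVER_USERNAME=$ACR_USER \
             DOCKER_REGISTRY_SERVER_PASSWORD=$ACR_PASS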

Here is a summary of the resources I created in Azure: the resource group rg-chatgptapp, an Azure OpenAI resource with a deployed model, the container registry acrchatgptapp, the App Service plan plan-chatgptapp, and the web app chatgptapp.

From the overview page of the newly created web app in the Azure portal, I can get the URL to access the application. Let's see if it works:
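
The default hostname can also be read with the Azure CLI:

az webapp show --resource-group rg-chatgptapp --name chatgptapp --query defaultHostName -o tsv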

Using the Azure OpenAI API and the Streamlit framework, we can create a custom ChatGPT. This article demonstrates how the OpenAI models can add intelligence and user-oriented features to any application: with just a few lines of code, I created a seamless conversational experience, showcasing the potential of AI-powered interactions.
