Develop a UI for Azure Prompt Flow with Streamlit

Shahzeb Naveed
Published in The Deep Hub · 4 min read · Apr 2, 2024
Azure meets Streamlit. (Image by Author — logos are the property of Microsoft and Streamlit)

Recently, I was exploring Azure ML’s Prompt Flow feature and experimented with the official “Chat with Wiki” demo. I deployed it to an endpoint and used Azure ML’s Test option, which provides a nice interface to chat with the bot.

Azure ML’s Test feature provides a nice UI to test prompt flow deployments.

When it came to the UI, however, I realized that there’s no off-the-shelf open-source (or even commercially available) UI project where I can just plug in my endpoint URL and be done with it. So, I made this quick template app that can serve as a quickstart if you’d like to develop a nice ChatGPT-like user interface.

In this article, I’ll go over the steps of how to use Streamlit’s chat elements to give a face to your Prompt Flow deployment.

For the complete code, check out this repo.

Start by importing the required modules and environment variables. Store your Prompt Flow endpoint’s key as AZURE_ENDPOINT_KEY in a .env file.

import streamlit as st
import urllib.request
import json
import os
import ssl
from dotenv import load_dotenv

# Load environment variables
load_dotenv()
AZURE_ENDPOINT_KEY = os.environ['AZURE_ENDPOINT_KEY']
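For reference, the .env file is just a key-value pair that python-dotenv loads into the environment; the value below is a placeholder, not a real credential:

```shell
# .env — keep this file out of version control (add it to .gitignore)
AZURE_ENDPOINT_KEY=your-endpoint-key-here
```

You can copy the real key from the Consume tab of your deployed endpoint in Azure ML Studio.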

Add a helper that lets the client bypass certificate verification for self-signed certificates.

def allowSelfSignedHttps(allowed):
    # bypass the server certificate verification on the client side
    if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
        ssl._create_default_https_context = ssl._create_unverified_context

Enter main():

In the main function, call allowSelfSignedHttps() and give a nice title:

allowSelfSignedHttps(True)
st.title("Azure Prompt Flow Chat Interface")

Two weird encounters I had with Prompt Flow:

  1. While developing your Prompt Flow, you may not be able to edit the chat_history variable (at least not in the “Chat with Wiki” demo). “Chat history value is not editable” is what the info box says.
  2. Furthermore, if you query your endpoint without specifying chat_history, it defaults to the history used during development. (I found this very unusual, so please let me know in the comments if you have an explanation.)
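Given that second quirk, it’s safer to always send chat_history explicitly, even when it’s empty. A minimal sketch of the payload shape (the field names mirror the ones used in the request code later in this post; the sample question/answer pair is made up for illustration):

```python
import json

# Build the request payload with an explicit chat_history, so the endpoint
# doesn't silently fall back to the history it saw at development time.
chat_history = [
    {"inputs": {"question": "Who founded Microsoft?"},
     "outputs": {"answer": "Bill Gates and Paul Allen."}},
]
payload = {"chat_history": chat_history, "question": "When was it founded?"}
body = json.dumps(payload).encode("utf-8")  # ready to send as the request body
```

Sending `"chat_history": []` for a fresh conversation makes the endpoint start from a clean slate instead of the baked-in development history.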

Below, I initialize the chat_history and render any existing messages.

# Initialize chat history
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []

# Display chat history
for interaction in st.session_state.chat_history:
    if interaction["inputs"]["question"]:
        with st.chat_message("user"):
            st.write(interaction["inputs"]["question"])
    if interaction["outputs"]["answer"]:
        with st.chat_message("assistant"):
            st.write(interaction["outputs"]["answer"])

If you’re new to Streamlit like me, you should first understand how Streamlit actually works: it re-runs the entire script on every user interaction. This means you need a way to persist chat_history for the duration of the user session, which is exactly what Streamlit’s session state provides, as seen in the code above.
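To make that rerun model concrete, here is a plain-Python sketch (no Streamlit required) where a dict stands in for st.session_state: each user interaction re-runs the script top to bottom, and only what lives in the session dict survives between runs.

```python
# Simulate Streamlit's execution model: the whole script re-runs on every
# interaction, so ordinary locals reset while session state persists.
session_state = {}  # stands in for st.session_state

def run_script(user_message):
    local_history = []  # reset on every rerun, like any local variable
    session_state.setdefault("chat_history", [])  # survives across reruns
    local_history.append(user_message)
    session_state["chat_history"].append(user_message)
    return local_history, session_state["chat_history"]

run_script("hello")                            # first "interaction"
local, persisted = run_script("how are you?")  # second "interaction"
print(local)      # only the latest message — locals start fresh each rerun
print(persisted)  # both messages — session state accumulates
```

This is why the app stores chat_history in st.session_state rather than in a module-level list.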

Now, as soon as the user interacts with the chat_input input bar, we’ll render the user’s message and hit our Azure deployment’s endpoint URL with our authentication key and a payload named data, which consists of our chat_history and the user input keyed as question.

# React to user input
if user_input := st.chat_input("Ask me anything..."):

    # Display user message in chat message container
    st.chat_message("user").markdown(user_input)

    # Query API
    data = {"chat_history": st.session_state.chat_history, "question": user_input}
    body = json.dumps(data).encode('utf-8')
    url = 'https://shahml-hhrub.eastus.inference.ml.azure.com/score'
    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {AZURE_ENDPOINT_KEY}',
        'azureml-model-deployment': 'shahml-hhrub-1'
    }
    req = urllib.request.Request(url, body, headers)

Now, within the if statement above, we’ll render the API’s response and append the interaction to our chat_history.

    try:
        response = urllib.request.urlopen(req)
        response_data = json.loads(response.read().decode('utf-8'))

        # render the assistant's response
        with st.chat_message("assistant"):
            st.markdown(response_data['answer'])

        # add this interaction to chat history
        st.session_state.chat_history.append(
            {"inputs": {"question": user_input},
             "outputs": {"answer": response_data['answer']}}
        )

    except urllib.error.HTTPError as error:
        st.error(f"The request failed with status code: {error.code}")
        st.text(error.read().decode("utf8", 'ignore'))

That’s it! You can now run your app using streamlit run app.py and start chatting with your prompt flow at http://localhost:8501/.

The complete code can be found on my repo.

Chat Interface using Streamlit

That’s all folks!

Credits:

The code above is directly adapted from Microsoft’s endpoint Python consumption sample and Streamlit’s official documentation.
