Streaming Responses from ChatGPT in Python

alex buzunov · Published in CodeX · May 9, 2024 · 2 min read

If you want to make your Python OpenAI LLM bot feel more responsive, you can stream message chunks from the LLM client as they are generated instead of waiting for the complete response.

Non-streaming code example

Here’s some code using LangChain. It waits for the full response before printing anything out.

import os

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI

# Initialise the Large Language Model
llm = ChatOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    temperature=1,
    model_name='gpt-4'
)

# Initialise the conversation chain
# (`prompt`, `memory`, and `text` are assumed to be defined elsewhere)
conversation_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=memory
)

# Prompt the LLM chain; nothing prints until the whole answer is back
response = conversation_chain.run({"question": text})
print(response)
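LangChain can stream as well. A minimal sketch, assuming LangChain’s built-in StreamingStdOutCallbackHandler, which prints each token to the terminal as it is generated:

import os

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

# Enable token-by-token streaming; the callback handler
# prints every token to stdout as soon as it arrives
llm = ChatOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    temperature=1,
    model_name='gpt-4',
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)

The rest of this article uses the plain OpenAI client instead, which keeps the streaming loop explicit.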

Streaming code

Here’s a simpler version that calls the OpenAI client directly. It prints the response as each chunk arrives on the stream.

import os

import openai
from dotenv import load_dotenv

# Load the API key from a .env file
load_dotenv()

# Initialize the client with the key from the environment
client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def stream_response(prompt):
    # Create a chat completion request with streaming enabled
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a chatbot that assists with Apache Spark queries."},
            {"role": "user", "content": prompt}
        ],
        stream=True
    )

    # Print each response chunk as it arrives; the final chunk's
    # delta has content=None, so skip empty deltas
    for chunk in response:
        content = chunk.choices[0].delta.content
        if content is not None:
            print(content, end='', flush=True)

if __name__ == "__main__":
    stream_response("Hey, what are new features of Apache Spark?")
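If you also need the complete text after streaming (for logging or conversation memory, say), you can collect the chunks while printing them. A small sketch reusing the client above; stream_and_collect is just an illustrative name:

def stream_and_collect(prompt):
    # Same streaming request as above
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )

    # Echo each chunk to the terminal while collecting it
    parts = []
    for chunk in response:
        content = chunk.choices[0].delta.content
        if content is not None:
            parts.append(content)
            print(content, end='', flush=True)

    # Join the chunks into the full response text
    return ''.join(parts)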

Live response

(myenv) (base) C:\Users\alex_\aivoice\interview_helper>python simple_stream.py
Apache Spark continually evolves with new updates. As of Spark 3.0, some of the newest features and improvements include:

1. Adaptive Query Execution (AQE): AQE changes query execution plan dynamically at runtime, based on the actual data statistics.

2. Dynamic Partition Pruning: This feature increases efficiency by only reading those partitions from the table that are required, rather than scanning the entire table.

3. ANSI SQL Parser: The new ANSI SQL parser aligns Spark SQL with the standard SQL and provides clear error prompts.

4. Significant performance improvements: These are seen across the software, but most significantly in Spark SQL and DataFrame/Dataset APIs.

5. Enhancements and new features in MLlib, Spark’s machine learning library: This includes model selection enhancements and better Python usability.

6. Better Kubernetes Integration: The driver and executor pod scheduling and configuration process has been optimized for better usability with Kubernetes.

7. Binary files data source: This allows users to read binary files, and includes commonly used image data.

8. Hadoop 3 support: Spark now directly supports Apache Hadoop 3, including new connectors.

9. Eliminating Barriers to Python development: With Project Zen, Spark aims to make Python first-class citizen like Scala for Spark development.

10. Improvements in handling nested data types: This reduces the complexity of schemas.

These features might change or get updated with each new release of Apache Spark. Always check the official documentation for the most accurate and recent details.

Source

https://github.com/pydemo/interview_helper/blob/main/simple_stream.py

Next step: wxPython rewrite
