Tiny LLM hacks: wxPython ChatGPT chat app.
2 min read · May 10, 2024
Building on this, I used the streaming functionality of the gpt-4 API to make an interactive app.
wxPython
With the help of GitHub Copilot and ChatGPT, it was up and running after 2 hours.
It uses the streaming API for gpt-4:
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a chatbot that assists with Apache Spark queries."},
        {"role": "user", "content": prompt}
    ],
    stream=True
)
So the answer starts appearing within seconds:
# Print each response chunk as it arrives
print("Streaming response:")
for chunk in response:
    if hasattr(chunk.choices[0].delta, 'content'):
        content = chunk.choices[0].delta.content
        print(content, end='', flush=True)
        output_ctrl.AppendText(content)
        await asyncio.sleep(0)
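To see what this loop does without calling the API, here is a minimal self-contained sketch. It replays simulated chunks in the same shape the streaming API yields them (`chunk.choices[0].delta.content`) and accumulates the deltas into the full answer; the `fake_chunk` helper, `collect_stream` name, and sample text are illustrative, not part of the app.

```python
from types import SimpleNamespace

def collect_stream(chunks, on_token):
    """Accumulate streamed delta chunks into the full answer text.

    Mirrors the loop above: each chunk carries a delta that may or may
    not have a content attribute; only real content is passed along.
    """
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        content = getattr(delta, "content", None)
        if content:                  # skip role-only / empty deltas
            on_token(content)        # e.g. output_ctrl.AppendText(content)
            parts.append(content)
    return "".join(parts)

# Simulated chunks in the shape the streaming API yields them.
def fake_chunk(**delta_fields):
    delta = SimpleNamespace(**delta_fields)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

stream = [
    fake_chunk(role="assistant"),           # first chunk carries only the role
    fake_chunk(content="Spark "),
    fake_chunk(content="runs on the JVM."),
]
answer = collect_stream(stream, on_token=lambda t: None)
# answer == "Spark runs on the JVM."
```

The `await asyncio.sleep(0)` in the real loop yields control after each token so the wx event loop stays responsive while tokens stream in.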
Answer
Advantages
- Usage is not capped by the default personal profile quota. No irritating messages telling you to wait until sundown. You set your own limits in billing, and usage counts against your API account, not your personal account limits.
- Not browser based. No need to type a URL, log in, or navigate to the right page.
- Python stack (wxPython). Easy to add new functionality with GitHub Copilot/VSCode
- Works on Windows and Linux
Disadvantages
- Ancient stack (wxPython)
- Needs programming experience to modify.
- The streaming API will not work for older models.
- Does not work on your phone
Sources
https://github.com/myaichat/wxchat/blob/main/answer.py
Next step: Adding multiple ChatGPT models
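One way that next step might look (a sketch under assumptions: the model list, helper name, and system prompt reuse are hypothetical, not the app's actual design): a small registry plus a function that builds the request kwargs, which a dropdown in the wxPython UI could feed.

```python
# Hypothetical sketch: a registry of selectable models and a helper
# that builds the kwargs for openai.ChatCompletion.create.
AVAILABLE_MODELS = ["gpt-4", "gpt-4-turbo", "gpt-3.5-turbo"]

def build_request(prompt, model="gpt-4"):
    """Build streaming-request kwargs for a user-chosen model."""
    if model not in AVAILABLE_MODELS:
        raise ValueError(f"unknown model: {model}")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a chatbot that assists with Apache Spark queries."},
            {"role": "user", "content": prompt},
        ],
        "stream": True,
    }
```

A `wx.Choice` populated from `AVAILABLE_MODELS` could then pass its selection straight into `build_request`.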