Tiny LLM hacks: wxPython ChatGPT chat app.

alex buzunov
Published in CodeX · May 10, 2024

Building on this, I used the streaming functionality of the GPT-4 API to make an interactive app.

wxPython

With the help of GitHub Copilot and ChatGPT, it was up and running after 2 hours.

It uses the streaming API for gpt-4:

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a chatbot that assists with Apache Spark queries."},
        {"role": "user", "content": prompt}
    ],
    stream=True
)

This way the answer starts appearing within seconds:

# Print each response chunk as it arrives
print("Streaming response:")

for chunk in response:
    if hasattr(chunk.choices[0].delta, 'content'):
        content = chunk.choices[0].delta.content
        print(content, end='', flush=True)
        output_ctrl.AppendText(content)
        # Yield control to the event loop so the UI stays responsive
        await asyncio.sleep(0)
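The accumulation pattern above can be exercised without an API key by faking the chunk objects the SDK yields. The names fake_stream and collect below are illustrative, not part of the app; each fake chunk mimics the chunk.choices[0].delta.content shape the loop relies on.

```python
import types

def fake_stream(parts):
    # Yield objects shaped like the SDK's streaming chunks:
    # chunk.choices[0].delta.content holds the next text fragment.
    for text in parts:
        delta = types.SimpleNamespace(content=text)
        choice = types.SimpleNamespace(delta=delta)
        yield types.SimpleNamespace(choices=[choice])

def collect(stream):
    # Accumulate streamed delta fragments into the full answer,
    # the same way the UI loop appends each fragment as it arrives.
    out = []
    for chunk in stream:
        if hasattr(chunk.choices[0].delta, 'content'):
            out.append(chunk.choices[0].delta.content)
    return ''.join(out)

answer = collect(fake_stream(["SELECT ", "* ", "FROM ", "logs"]))
print(answer)  # SELECT * FROM logs
```

Swapping fake_stream for the real response object gives the same incremental behavior, which is what lets the answer render as it is generated.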

Answer

Advantages

  1. Usage is not capped by the default personal-profile quota, so no irritating messages telling you to wait. You set your own limits in billing, and usage counts against your API spend, not your personal account limits.
  2. Not browser based. No need to type a URL, log in, or navigate to the right page.
  3. Python stack (wxPython): easy to add new functionality with GitHub Copilot/VSCode.
  4. Works on Windows and Linux.

Disadvantages

  1. Ancient stack (wxPython).
  2. Requires programming experience to modify.
  3. The streaming API does not work with older models.
  4. Does not work on your phone.

Sources

https://github.com/myaichat/wxchat/blob/main/answer.py

Next step: adding support for multiple ChatGPT models.
