Generate Charts using OpenAI Code Interpreter — The API Way
Context
Feeling too sluggish to create your own charts? Well, fret no more! You’ve stumbled upon the perfect place to unleash your laziness and let AI do the heavy lifting for you. 😎
Generating charts with ChatGPT is nothing new: you can already do it if you are a ChatGPT Plus subscriber and use Code Interpreter. However, that is not the ideal method for practical applications or product development. Fortunately, OpenAI opened up access to Code Interpreter via the API, in the form of Assistants, at its first-ever Dev Day held on November 6th. So here I am, showing its capabilities in an app that draws charts from input data.
So, what is Code Interpreter?
Code Interpreter, a built-in feature of ChatGPT, serves as a secure Python programming environment, enabling users to execute Python code and accomplish a diverse range of tasks. It functions as a virtual workspace isolated from the user’s computer system, safeguarding their data and preventing unintended interactions.
Unfortunately, if you are a free ChatGPT user, you won’t be able to access it as of now, unless you are a developer 🤓
Enough theory, let’s get our hands dirty
Here’s what we will use to quickly create an MVP product that generates charts out of data.
- VS Code
- OpenAI API Key
- OpenAI Assistant ID
- GitHub repository with complete code
Step 1: Create OpenAI API Key
Head over to https://platform.openai.com/ and sign up. If you are already a user, you can skip this step. Please note that API usage will incur costs.
I recommend having a separate key for each project. As you can see in the screenshot below, click on “Create secret key” to generate the key. You will be able to see the key only once, so make sure to save it. Once you move away from this page, you won’t be able to view the key again.
Step 2: Create an Assistant
Now, as per the OpenAI spec, to use Code Interpreter we need to create an Assistant. We could do this via the API, but we only need one Assistant for our MVP, so it’s easier to create it from the UI. Follow the screenshot below to create the Assistant, and make sure to enable Code Interpreter. I have also selected GPT-3.5 Turbo as it costs less 😆
Once you save, copy the Assistant ID; we will need it soon.
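If you prefer to create the Assistant programmatically instead of via the UI, a minimal sketch with the openai Python SDK could look like this. Note that the function name, the Assistant name, and the instructions below are my own placeholders, not from the repository:

```python
def create_chart_assistant(client):
    # Create an Assistant with the Code Interpreter tool enabled.
    # The name and instructions here are hypothetical examples.
    return client.beta.assistants.create(
        name="Chart Generator",
        instructions="Generate charts from the data provided by the user.",
        tools=[{"type": "code_interpreter"}],
        model="gpt-3.5-turbo",
    )
```

Pass in an `OpenAI()` client; the `id` attribute of the returned object is the Assistant ID you would otherwise copy from the UI.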
Step 3: Create a Gradio project
You can fork my GitHub repository here: https://github.com/sumitsahoo/chart-gen
Step 4: Create .env file
The forked repository has a sample .env named .env.template. Rename it to .env and add your OpenAI key and Assistant ID. If you have opened the project in VS Code, .env will look something like the screenshot below:
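For reference, the .env file is just two key-value pairs; the values below are placeholders, not real credentials:

```
OPENAI_API_KEY=<your-openai-api-key>
OPENAI_ASSISTANT_ID=<your-assistant-id>
```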
Step 5: Run the app
Install the dependencies and create a Python virtual environment with Poetry using the command below:
poetry install
Then activate the virtual environment:
poetry shell
Finally, run the app:
python3 main.py
The app should be running locally at: http://127.0.0.1:8080
As you can see in the above screenshot, a bar chart has been generated from the input data. I just asked Bard to give me some sample data 🤓
Code explanation
I have tried to explain the code with comments wherever possible, and it should be easy enough to follow, but let me still walk through the core logic.
Step 1: Create ChartUtil class and initialize OpenAI
from openai import OpenAI
from src.util.log_util import LogUtil
import os
import time


class ChartUtil:
    def __init__(self):
        self.log = LogUtil()
        self.client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
        self.chart_out_path = "./outputs/chart.png"
As you can see, we have imported a few modules, initialized the OpenAI client, and defined an output path where the chart image will be saved.
Step 2: Define a function to generate a chart
def generate_chart(self, message):
    # Prepare the prompt
    ci_prompt = "Please generate a chart using following data: \n" + message
Here, message is the input data entered by the user in the Gradio UI, which I will explain a bit later. We also prepend a string to our data asking the OpenAI LLM to create a chart for us.
Step 3: Create a Thread
# Create a thread and run the assistant
thread = self.client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": ci_prompt,
        }
    ]
)
Use the OpenAI client that we created earlier to create a Thread. We add the user input as a message with a role and content. Since the input comes from a user, the role is “user”.
Step 4: Run the Thread
# Run the thread
run = self.client.beta.threads.runs.create(
    assistant_id=os.environ["OPENAI_ASSISTANT_ID"], thread_id=thread.id
)
Here we run the Thread with the Assistant ID that we defined in the .env file. We also need to pass the thread ID from the Thread object. The Thread now runs in the background.
Step 5: Check the Thread run status
Now this is the tricky part: there is no callback option, so we have to poll the Thread run status at an interval to check whether the run has finished or failed. Here is the code, which sits inside a while loop with a sleep statement that pauses for one second.
# Poll until the run reaches a terminal status
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    # Refresh the run to get the latest status
    run = self.client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread.id)

if run.status == "completed":
    self.log.info("Generated chart, Run finished")
The status can be in_progress, cancelled, failed, completed, or expired. Here we are mostly interested in whether the run completed or failed.
More about the API docs here: https://platform.openai.com/docs/api-reference/runs/step-object
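Since polling logic like this is easy to get wrong, you can factor it into a small helper. The sketch below is my own addition (the function name and timeout handling are not from the repository); it takes any zero-argument callable that returns the current status, so it stays independent of the OpenAI client:

```python
import time

# Statuses after which the run will not change anymore
TERMINAL_STATUSES = {"completed", "cancelled", "failed", "expired"}


def wait_for_run(fetch_status, poll_interval=1.0, timeout=120.0):
    """Poll fetch_status() until the run reaches a terminal status or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
    return "timed_out"
```

Inside generate_chart you could then call it as `wait_for_run(lambda: self.client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread.id).status)`, which keeps the retry loop in one place.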
Step 6: Get the latest message generated by the Assistant
When the Thread run finishes, the Assistant generates a message and pushes it into the message array. We just have to query the latest message. I know this is a bit messy, but the API may change in the future to further simplify the response.
# Get the latest message in the thread and retrieve the file id
messages = self.client.beta.threads.messages.list(thread_id=thread.id)
self.log.info(messages.data[0])
image_file_id = messages.data[0].content[0].image_file.file_id
content_description = messages.data[0].content[1].text.value

# Get the raw response from the file id
raw_response = self.client.files.with_raw_response.content(file_id=image_file_id)
As you can see in the code, each message generated by the Assistant can have more than one content part, with the image as the first element followed by the text description.
In the first content array, we can fetch the generated image file ID and use the same ID to fetch the raw content which is a binary object, in this case, it is a PNG image.
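Rather than assuming the image is always content[0] and the text content[1], you can split the parts by their type field. This helper is my own sketch (not in the repository), but the type, image_file.file_id, and text.value attributes match the message objects used above:

```python
def split_contents(message):
    """Separate a message's image-file parts and text parts by content type."""
    image_file_ids, texts = [], []
    for part in message.content:
        if part.type == "image_file":
            image_file_ids.append(part.image_file.file_id)
        elif part.type == "text":
            texts.append(part.text.value)
    return image_file_ids, texts
```

This way the code keeps working even if the Assistant returns the text before the image, or returns several images.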
Step 7: Delete the temporary image file and store it locally
Since we do not need to keep the image file in OpenAI’s storage, we can delete it there and save it locally instead.
# Delete the generated file from OpenAI storage
self.client.files.delete(image_file_id)

# Save the generated chart to a local file
with open(self.chart_out_path, "wb") as f:
    f.write(raw_response.content)

return (self.chart_out_path, content_description)
In the code above, we first deleted the file from OpenAI and then saved it to the path we defined earlier. Now we can return the image file path and the content description as a tuple so that they can be displayed in Gradio.
Step 8: Define the UI
Our major work is already done, now we just have to call the generate_chart method from Gradio UI.
gr.Interface(
    fn=self.chart_util.generate_chart,
    inputs=[gr.Textbox(label="Enter the data for chart generation")],
    outputs=[
        gr.Image(label="Generated Chart"),
        gr.Markdown(label="Chart Description"),
    ],
    allow_flagging="never",
)
You can find more about Gradio Chatbot here: https://www.gradio.app/docs/chatbot
That’s it, all done. You can run the app and generate as many charts as you want (not for free, of course 😅). As I mentioned before, I have only included code snippets; the full project can be found here: https://github.com/sumitsahoo/chart-gen
Feel free to fork and modify as per your need. In this example we just created charts but you can do a lot more 🤓
Alternatives
So far I have shown you how to use the OpenAI Assistants API, which uses Code Interpreter, but this is not the only way to generate charts.
You can try CodeInterpreterAPI as suggested by LangChain: https://blog.langchain.dev/code-interpreter-api/
or, you can also check out Open Interpreter here: https://github.com/KillianLucas/open-interpreter
Hope you have learned something. Since the OpenAI Assistants API is still in beta, I will try to keep the code updated and push updates to this article when necessary so that it stays relevant 👍🏻
Thanks.