QuickCharts

AI-Powered Chart Creation from Sketches and CSV Data

Nicholaserlin
HCAI@AU
7 min read · May 23, 2024

In the world of scientific research, effective data visualization is crucial. Visualizations not only make complex data comprehensible but also play a pivotal role in communicating findings clearly and accurately to a broader audience. A good visualization is a bridge between an audience unfamiliar with the topic and the researcher, who has been working with the data for a long time and understands it deeply.

Usually, presenters do not have much time to introduce listeners to the domain in depth, so it is important to be on point. Visualization plays a crucial role here: by highlighting patterns, trends, and outliers, it serves both as a summary and as a good starting point for the listener to dig deeper into the topic.

However, many scientists struggle to use advanced visualization tools like Vega-Lite efficiently, especially if they lack a coding background. This can hinder their ability to present their findings in a clear and precise manner, ultimately affecting the impact and understanding of their research.

The Problem

If you are a student or a researcher, or you simply need to draw someone’s attention to findings you recently discovered, you probably strive for a good, pre-attentively effective data visualization.

Whether it’s illustrating the results of an experiment, showing trends over time, or comparing datasets, charts and graphs are well-suited tools for helping the audience grasp the message the data carries. Unfortunately, you sometimes give up on that because of a lack of time or a technological barrier. Tools like Vega-Lite offer powerful and flexible options for creating visualizations, but they often require a significant level of coding knowledge.

Many scientists, particularly those without a background in computer science, find these tools daunting and difficult to use. Lack of time is also a factor, often forcing them to settle for suboptimal presentations of their data.

This gap often leads to inadequate visualizations. They can obscure important insights, mislead viewers, and reduce the impact of scientific findings. There is a pressing need for a solution that can democratize access to advanced visualization tools.

Our Solution

QuickCharts is an innovative solution that leverages artificial intelligence to bridge the gap between non-coding scientists and powerful data visualization tools. To overcome that technological barrier, or simply to accelerate the creation process, QuickCharts soothes the souls of many troubled bright minds by generating the visualizations for them.

The idea behind QuickCharts is simple: take the user’s input, a dataset and a drawing of the intended chart, and use an existing AI model to produce an output in the form of code and its execution result, a data visualization. To improve the user experience, the output can be modified with immediate feedback: if the initial result is not accurate, users can provide prompts to refine the code in real time, ensuring that the final visualization meets their requirements.

This approach enables scientists to transform hand-drawn sketches and CSV data into precise Vega-Lite code with ease. By leveraging AI, the tool removes the need to write the visualization code by hand, no more “reinventing the wheel”, letting users focus more on their research and less on the intricacies of data visualization.

Implementation Details

Our implementation of QuickCharts integrates several technologies to deliver a seamless and efficient user experience:

  1. ChatGPT API for Code Generation and Image Description: We use the ChatGPT API to generate Vega-Lite code from the input data and to describe the content of uploaded images. This powerful natural language processing tool is capable of understanding the context of the sketch and generating appropriate code. (We use GPT-4 for image description and GPT-3.5 for code generation; a sketch of these API calls appears after this list.)
  2. Imgur API for Image Upload: Since the ChatGPT API does not accept local images, we use the Imgur API to upload images and obtain URLs for further processing. This allows us to easily manage and share images within the application.
  3. MongoDB for Data Storage: We employ MongoDB to store the generated Vega-Lite code and paths to the uploaded images and CSV files. MongoDB’s flexible schema makes it ideal for handling the varied data structures involved in this project.
  4. FastAPI for Backend Development: Our backend is built with FastAPI, which handles API calls to store images and CSV files, communicate with the Imgur API, and manage interactions with the ChatGPT API for code generation and image description. FastAPI’s high performance and ease of use make it an excellent choice for this application.
  5. React for Frontend Development: The frontend of QuickCharts is developed using React, providing a user-friendly interface for uploading images and CSV files, viewing generated Vega-Lite code, and prompting the AI for corrections. React’s component-based architecture allows for a responsive and interactive user experience.
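
To make items 1 and 2 more concrete, here is a minimal sketch of the two external API calls, assuming the requests and openai Python packages. The Imgur client ID, model names, and prompts are placeholders rather than the exact ones QuickCharts uses.

```python
import base64

import requests
from openai import OpenAI

IMGUR_CLIENT_ID = "your-imgur-client-id"  # hypothetical credential
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def upload_to_imgur(image_path: str) -> str:
    """Upload a local sketch to Imgur and return its public URL."""
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read())}
    resp = requests.post(
        "https://api.imgur.com/3/image",
        headers={"Authorization": f"Client-ID {IMGUR_CLIENT_ID}"},
        data=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["link"]


def describe_sketch(image_url: str) -> str:
    """Ask a vision-capable model to describe the sketch, including axis and color labels."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder for the GPT-4 vision model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this chart sketch: chart type, axes, labels, and colors."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content


def generate_vegalite(description: str, csv_header: str) -> str:
    """Turn the sketch description plus the CSV header into a Vega-Lite spec."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder for the GPT-3.5 code-generation model
        messages=[{
            "role": "user",
            "content": f"Write a Vega-Lite JSON spec for this chart: {description}\n"
                       f"The CSV has these columns: {csv_header}",
        }],
    )
    return response.choices[0].message.content
```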

Process Workflow

The workflow of QuickCharts is designed to be intuitive and efficient:

  1. Frontend Interaction: Users interact with the React-based frontend to upload their graph sketches and CSV data. The frontend is designed to be user-friendly, guiding users through the process with clear instructions and feedback.
  2. Image and Data Storage: The frontend makes an API call to store the uploaded images and CSV files in MongoDB. This ensures that all data is securely stored and easily accessible for further processing.
  3. Image Upload and Description: The backend uses the Imgur API to upload the image and extract the URL. This URL is then sent to the ChatGPT image descriptor to generate a description of the sketch. The image description step is crucial, as it allows the AI to understand the intended structure and elements of the graph. The image descriptor is also responsible for recognizing any labels in the sketch (e.g. x- and y-axis labels and/or color labels).
  4. Code Generation: The backend then uses the ChatGPT API to generate Vega-Lite code based on the image description and CSV data. This involves interpreting the image description and mapping it to the corresponding data points in the CSV file, ensuring that the generated code accurately represents the intended visualization.
  5. Code Storage and Display: The generated Vega-Lite code is stored in MongoDB and displayed on the frontend, where users can view the output. The frontend provides a clear and interactive display of the code, allowing users to see the results of their inputs immediately.
  6. Real-Time Corrections: If the generated code is not accurate, users can prompt the AI to make corrections. These prompts are processed in real time, and the updated code is displayed on the frontend. This iterative process ensures that users can refine their visualizations until they are satisfied with the results. A minimal backend sketch of steps 2–6 follows this list.
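
To tie the steps together, here is a minimal, hypothetical backend sketch of steps 2–6. It reuses the helpers and the OpenAI client from the previous snippet (upload_to_imgur, describe_sketch, generate_vegalite, client) and assumes the fastapi, python-multipart, and pymongo packages; the route paths, database name, and field names are placeholders, not necessarily what QuickCharts uses.

```python
import tempfile

from bson import ObjectId
from fastapi import FastAPI, File, UploadFile
from pymongo import MongoClient

app = FastAPI()
# Hypothetical database and collection names.
charts = MongoClient("mongodb://localhost:27017")["quickcharts"]["charts"]


@app.post("/charts")
async def create_chart(sketch: UploadFile = File(...), data: UploadFile = File(...)):
    # Steps 1-2: receive the sketch and CSV from the React frontend and keep them on disk.
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
        tmp.write(await sketch.read())
        sketch_path = tmp.name
    csv_header = (await data.read()).decode().splitlines()[0]

    # Step 3: upload the sketch to Imgur and describe it with the vision model.
    image_url = upload_to_imgur(sketch_path)
    description = describe_sketch(image_url)

    # Step 4: generate the Vega-Lite spec from the description and the CSV columns.
    spec = generate_vegalite(description, csv_header)

    # Step 5: persist everything so the frontend can display and later refine it.
    doc_id = charts.insert_one({
        "image_url": image_url,
        "sketch_path": sketch_path,
        "description": description,
        "vegalite": spec,
    }).inserted_id
    return {"id": str(doc_id), "vegalite": spec}


@app.post("/charts/{chart_id}/fix")
async def fix_chart(chart_id: str, prompt: str):
    # Step 6: real-time correction - send the current spec plus the user's prompt
    # back to the model, then store and return the revised spec.
    doc = charts.find_one({"_id": ObjectId(chart_id)})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Revise this Vega-Lite spec:\n{doc['vegalite']}\n"
                       f"according to this request: {prompt}",
        }],
    )
    revised = response.choices[0].message.content
    charts.update_one({"_id": doc["_id"]}, {"$set": {"vegalite": revised}})
    return {"vegalite": revised}
```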

Example of Output

When this image and this CSV file are used as input:

This is what the output looks like:

And this is what happens when the fix prompt is used:
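
The examples above are shown as screenshots, but the generated output itself is just Vega-Lite JSON. For illustration only, here is a spec of the kind the tool might return for a simple bar-chart sketch over a two-column CSV; the column names "category" and "value" are hypothetical.

```python
import json

# Illustrative only: a Vega-Lite spec of the kind the tool might generate for a
# bar-chart sketch over a CSV with hypothetical "category" and "value" columns.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"url": "data.csv"},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal", "title": "Category"},
        "y": {"field": "value", "type": "quantitative", "title": "Value"},
    },
}

# The frontend can pass this JSON straight to a Vega-Lite renderer such as vega-embed.
print(json.dumps(spec, indent=2))
```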

Future Work

We have several additional features planned for future versions of QuickCharts:

  1. Saving User Preferences: Implement functionality to save user preferences such as color schemes and chart types based on their history in the database. This will enhance user experience by providing personalized options.
  2. Improving AI Accuracy: Enhance the AI’s color label detection and axis identification capabilities to ensure more accurate visualizations. This will address current limitations where the x and y axes or color labels may be incorrect.
  3. AI as a Chat-Bot: Integrate a chat-bot feature allowing the AI to respond to user queries with sentences. This will provide users with interactive assistance and guidance, making the tool more user-friendly.
  4. Offline Capability: Develop an in-house AI model to enable the app to function offline. This will make the tool more accessible, especially in environments with limited internet connectivity.
  5. Multiple Code Recommendations: Offer users 2–3 different code recommendations for their visualizations. This will allow users to choose the most accurate and suitable option, improving overall satisfaction with the generated outputs.
  6. Showing History: Implement a feature to display previously generated graphs, with options to modify and refine those visualizations. This will allow users to revisit and improve past work, enhancing the tool’s utility for ongoing projects.

Conclusion

Our QuickCharts prototype performs adequately for most purposes. However, there are instances where the AI generates incorrect code or produces inaccurate Vega-Lite visualizations. This is why the prompt fix is an essential feature, allowing users to correct these occasional errors. While there is always room for improvement and many potential future enhancements could be explored, the current version of the prototype already offers substantial value. It is robust enough for personal use, providing users with a useful tool that meets most of their needs. Overall, despite some limitations, the prototype’s performance is satisfactory and promises even greater potential with future updates.
