How to Analyze Tens of Thousands of Open-Ended Survey Questions Using GPT-4 in Just 15 Minutes? Here’s Your Ultimate Guide!

Nicole Ni
4 min read · Oct 19, 2023


Summary: Open-ended survey questions produce unstructured data, which makes them hard to analyze. Traditional NLP methods fall short. With GPT-4, you can achieve a high-quality analysis in about 15 minutes, at an estimated cost of less than a dollar.

I work at an online education company based in Silicon Valley. Our students are scattered around the globe, and we collect their feedback via surveys that include three open-ended questions. Every month, we receive tens of thousands of answers to these questions, and analyzing that feedback has always been a major challenge!

But with GPT-4, analyzing these tens of thousands of unstructured data entries takes only 15 minutes and costs less than $1.

So, what’s the process? How do you craft effective prompts? Let me break it down for you!

The three questions are:

  • What did you enjoy about your experience?
  • Please provide an example of how you plan to apply, or have already applied any newly gained skills on the job (if you plan to).
  • How do you suggest we can improve your experience?

Previously, we had to manually organize answers based on student characteristics, like their employer. It was time-consuming, labor-intensive, and imprecise. But with the advent of ChatGPT, I had an idea: could we leverage ChatGPT's capabilities?

The answer: Absolutely! And it’s incredibly powerful.

I experimented with GPT-4 for topic modeling on a sample dataset.

Step 1: Go to ‘Advanced Data Analysis’ and upload your data.

After purchasing ChatGPT Plus, enter the GPT-4 interface, turn on the 'Advanced Data Analysis' feature, and upload your data file. I uploaded an Excel file with 20 open-ended questions, each receiving 20–25 text responses. GPT-4 first provides a basic analysis, like the number of rows and columns.

Screenshot: in the GPT-4 interface, choose ‘Advanced Data Analysis’ (formerly Code Interpreter) and turn the feature on
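For context, the basic profile GPT-4 reports on upload is the same thing pandas gives you locally. A minimal sketch, assuming a hypothetical file named survey_responses.xlsx:

```python
import pandas as pd

# Hypothetical file name; replace with your own survey export.
df = pd.read_excel("survey_responses.xlsx")

# The same first-pass summary GPT-4 reports on upload:
# shape, column names, and missing values.
print(f"{df.shape[0]} rows x {df.shape[1]} columns")
print(df.columns.tolist())
print(df.isna().sum())
```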

Step 2: Topic Modeling Analysis to Present Five Key Themes.

I first tasked GPT-4 with performing Topic Modeling on the data. Topic Modeling is an NLP technique that extracts the most important themes from a text collection.

After trying several prompts, I found two things:

1. Simply asking GPT-4 for topic modeling would result in it suggesting traditional NLP methods, offering Python code, or pointing me to Google Cloud’s BERT model, and these traditional methods were subpar. Hence, I provided a detailed prompt: “Can you use the GPT-3.5-turbo or GPT-4 model for analysis? Please avoid using Latent Dirichlet Allocation (LDA) or any external or traditional NLP methods.”

2. GPT-4 can only analyze one question at a time, so I instructed it to begin with the most crucial one: “How do you suggest we can improve your experience?”

After refining the prompt, the results were impressive. GPT-4 identified five main themes, such as “Course Content and Structure” and “Technical and IT-related Feedback,” and provided a summary of each theme for better understanding.
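If you prefer the API over the chat interface, the same step can be scripted. Here is a minimal sketch using the OpenAI Python SDK; the file name, column name, and exact prompt wording are illustrative assumptions, not the exact ones I used:

```python
from openai import OpenAI
import pandas as pd

client = OpenAI()  # reads OPENAI_API_KEY from the environment

df = pd.read_excel("survey_responses.xlsx")  # hypothetical file name
# Illustrative column name for the question being analyzed.
answers = df["How do you suggest we can improve your experience?"].dropna()

prompt = (
    "Below are open-ended survey responses. Identify the five main themes "
    "and summarize each in one sentence. Please avoid using Latent "
    "Dirichlet Allocation (LDA) or any external or traditional NLP "
    "methods.\n\n" + "\n".join(f"- {a}" for a in answers)
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
first_reply = response.choices[0].message.content
print(first_reply)
```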

Step 3: Present Examples for Each Theme.

Beyond the overview, readers may also want to see which specific answers fall under each theme. To address this need, I used a second prompt: “Can you share with me 3 examples for each of the topics?”
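Via the API, this follow-up works by carrying the conversation history forward. A sketch continuing the Step 2 example above (`prompt` and `first_reply` are placeholders for that earlier exchange):

```python
from openai import OpenAI

client = OpenAI()

prompt = "..."       # the topic-modeling prompt from the Step 2 sketch
first_reply = "..."  # GPT-4's five-theme answer from the Step 2 sketch

# Include the earlier turn so GPT-4 knows which topics we mean.
messages = [
    {"role": "user", "content": prompt},
    {"role": "assistant", "content": first_reply},
    {"role": "user", "content": "Can you share with me 3 examples for each of the topics?"},
]
follow_up = client.chat.completions.create(model="gpt-4", messages=messages)
print(follow_up.choices[0].message.content)
```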

Step 4: How Many Responses Correspond to Each Theme?

After identifying the five key themes, it’s essential to know which ones received the most feedback. I instructed GPT-4 to count the responses under each theme and generate a chart, which it quickly produced.

Bar chart generated by GPT-4 to illustrate how many responses are associated with each theme
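The same chart can be reproduced locally once each response has been tagged with a theme (for example, by asking GPT-4 to label every answer with one of the five themes). A sketch with matplotlib; the labels below are placeholders, not real data:

```python
from collections import Counter
import matplotlib.pyplot as plt

# Placeholder labels only: in practice, collect GPT-4's per-response
# theme assignments here.
labels = [
    "Course Content and Structure",
    "Technical and IT-related Feedback",
    "Course Content and Structure",
    "Course Content and Structure",
    "Technical and IT-related Feedback",
]

counts = Counter(labels)
plt.bar(list(counts.keys()), list(counts.values()))
plt.ylabel("Number of responses")
plt.title("Responses per theme")
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.show()
```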

Cost Analysis with the GPT-4 API

To conclude this simple analysis, I estimated the token consumption and cost of running it through the GPT-4 API instead of the chat interface. The overall expense was astonishingly low!

A crucial question arose: Does the size of uploaded data impact token consumption? I queried ChatGPT: “If I uploaded a file with 20,000 rows versus one with 200 rows, would the token consumption vary, given the same prompts and expected responses?”

ChatGPT responded, “The token consumption in GPT-4 (or any variant of the GPT models) is directly related to the input (your prompts) and output (the model’s responses). The file size or row count doesn’t directly influence token consumption.”

In essence, a comprehensive analysis of the survey responses we received over 12 months (~840k words) would cost around $35.
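You can sanity-check that estimate yourself with tiktoken. A sketch assuming GPT-4’s prompt pricing at the time ($0.03 per 1K tokens) and the rough rule of thumb that English prose averages about 1.3 tokens per word:

```python
import tiktoken

# GPT-4 uses the cl100k_base tokenizer.
enc = tiktoken.encoding_for_model("gpt-4")

# Exact token count for any given text:
print(len(enc.encode("How do you suggest we can improve your experience?")))

# Back-of-the-envelope for a year of responses: ~840k words at an assumed
# ~1.3 tokens per word (tokenize your actual text for a precise figure).
est_tokens = 840_000 * 1.3
input_cost = est_tokens / 1_000 * 0.03  # GPT-4 prompt pricing at the time
print(f"~{est_tokens:,.0f} tokens -> ~${input_cost:.2f}")  # roughly $33
```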

That’s the transformative power this technology brings!

Besides this, I also experimented with GPT-4 for sentiment analysis and other domains. Due to article length constraints, I’ll share those in future posts!


Nicole Ni

Innovator based in Silicon Valley with a specialization in Data Science. Championing GenAI and advanced AI tools to tackle business and societal challenges.