Using ChatGPT to accelerate persona development in Business Applications

Michael O'Sullivan
UXR @ Microsoft
Nov 1, 2023 · 5 min read

Introduction:

Personas in Microsoft Business Applications leverage the jobs-to-be-done (JTBD) framework, and their development typically follows a “qual, quant, qual” approach: researchers carry out exploratory/discovery interviews to identify potential jobs, tasks and pain points, then run a large survey to validate, rank and potentially even segment these, and finally conduct more interviews to dive deep into the highest-opportunity areas. This three-stage process generally yields three levels of persona fidelity: a low-fidelity persona after the first ‘qual’ stage, a medium-fidelity persona after the ‘quant’ stage and a high-fidelity persona after the follow-up qual stage.

While this method is very effective, it can be quite expensive and time-consuming. With the current race to implement AI, we need to move and unblock product teams faster than ever, especially in a space like Business Applications where there are tons of primary and secondary personas. For this reason, I recently started experimenting with ChatGPT as a way to accelerate this process. By combining ChatGPT with rapid surveys, I was able to develop three medium-fidelity personas in less time than it would typically take me to develop one low-fidelity persona through the traditional method. While I do not suggest my method as a replacement for the entire traditional process, I do see it as a potential alternative to the first stage (or two), especially when constrained by time or budget.

In this article, I briefly explain what I believe to be the drawbacks of the traditional method, review my new proposed method in detail (including ChatGPT prompts and survey questions) and close out by reflecting on where I think this method makes most sense, as well as how it might be improved.

Shortcomings of the current method:

Here I’m focusing primarily on the first two stages, as this is where I currently see ChatGPT having the most impact potential.

How I combined ChatGPT and surveys as an alternative:

Steps:

1. I asked ChatGPT to generate assumptions about the key JTBD for each user role, by using prompts like:

  • “I’m a UX researcher working on X product and want to create a persona for Y user role.”
  • “Are you familiar with the jobs-to-be-done framework?”
  • “What are the key jobs for this user role?” (If it provides a long list, ask for the top 6–8, or however many your personas typically exhibit, and in the format you prefer).
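If you want to run these prompts programmatically rather than in the ChatGPT UI, the sequence above can be sketched as a single message history. This is only an illustration, not part of the original workflow: the function name, parameters and the commented-out API call (which assumes the `openai` Python package and a valid API key) are my own.

```python
# Sketch: chaining the three prompts above into one ChatGPT message history.
def build_jtbd_messages(product: str, role: str, n_jobs: int = 8) -> list[dict]:
    """Return the article's three prompts as a chat message list."""
    return [
        {"role": "user", "content": (
            f"I'm a UX researcher working on {product} and want to "
            f"create a persona for the {role} user role."
        )},
        {"role": "user",
         "content": "Are you familiar with the jobs-to-be-done framework?"},
        {"role": "user", "content": (
            f"What are the key jobs for this user role? "
            f"Please list the top {n_jobs} as short bullet points."
        )},
    ]

# Hypothetical usage (requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_jtbd_messages("Dynamics 365", "Sales Manager"))
```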

2. I then ran a survey with 10+ participants (per user role) to validate and refine these JTBD, asking questions like:

  • “Here is a list of potential jobs for someone in a role similar to yours. You can think of these as responsibilities you might list on your resume or in a job posting. How well does this list match your experience?” [5-point Likert scale]
  • “For each job, please indicate if your role is typically responsible, accountable, supportive, consulted or informed.” (RASCI)
  • “Are there any jobs you would add/remove/combine/change?” [Separate questions, providing option for them to describe.]
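Once the responses are exported, this validation step can be summarised with a short helper. A minimal sketch, assuming you record the Likert ratings as 1–5 integers and the RASCI answers per job; the function name, the "mean of 4 or above counts as validated" threshold, and the sample data are illustrative assumptions, not part of the article's method:

```python
from collections import Counter
from statistics import mean

def summarise_validation(likert_scores: list[int],
                         rasci_answers: dict[str, list[str]]) -> dict:
    """Summarise one role's validation survey.

    likert_scores: 1-5 ratings of "how well does this list match?"
    rasci_answers: job -> RASCI labels chosen by each participant.
    """
    return {
        "match_mean": round(mean(likert_scores), 2),
        # Assumption: a mean of 4+ ("agree" or better) validates the list.
        "validated": mean(likert_scores) >= 4,
        # Most common RASCI label per job.
        "rasci_mode": {job: Counter(labels).most_common(1)[0][0]
                       for job, labels in rasci_answers.items()},
    }
```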

3. The results showed that ChatGPT was around 80–90% accurate, so I refined the JTBD as needed based on the feedback.

  • In one case I simply turned one of the jobs into a task under another job as participants suggested they could be combined.
  • In other cases I slightly re-phrased or added an extra few words to a job if multiple participants mentioned that they would change it in a similar way.
  • I even used ChatGPT to help with this, using prompts like “How would you re-write this JTBD based on the feedback from these people [copy/paste responses]?” (Note: it won’t be perfect, but it often phrased the jobs better than I likely would have, and much faster. I still made my own adjustments as needed.)
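The "80–90% accurate" figure in step 3 can be made concrete as the share of generated jobs that participants confirmed without changes. The function and the example inputs below are purely illustrative:

```python
def generation_accuracy(generated: list[str], confirmed: list[str]) -> float:
    """Fraction of ChatGPT-generated jobs that survey participants
    confirmed as-is (i.e. did not add/remove/combine/change)."""
    return sum(1 for job in generated if job in confirmed) / len(generated)
```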

4. I then fed my refined JTBD back into ChatGPT and asked it to generate assumptions on the tasks and pain points associated with them.

  • “Here are the key jobs done by X role. Please list the key tasks and pain points associated with each job.”

5. I then ran a second survey with a similar sample size, explaining that I had refined the JTBD based on feedback from a previous survey. This time, the first part of the survey asked participants to rank the JTBD, and then the tasks, by how time-consuming they are, and to describe any I might have missed. Similarly, the second section asked them to rank the pain points for each job by severity and to describe any I might have missed.
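Combining the rankings from this second survey into a single ordering can be done by mean rank position, one simple aggregation among several (Borda count would work equally well). A sketch under the assumption that every participant ranks the same fixed set of items:

```python
from statistics import mean

def aggregate_rankings(rankings: list[list[str]]) -> list[str]:
    """Order items by mean rank position across participants.

    Each inner list is one participant's ranking, position 0 being the
    most time-consuming job (or most severe pain point).
    """
    items = rankings[0]
    avg_rank = {item: mean(r.index(item) + 1 for r in rankings)
                for item in items}
    return sorted(items, key=lambda item: avg_rank[item])
```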

6. With all of this information, I was able to easily create personas with:

  • A list of the key JTBD, organised by how time-intensive they are.
  • A list of the key tasks and pain points associated with each JTBD, organised by time-intensity and pain severity, respectively (I was also able to add or re-phrase these as per the responses to the second survey).
  • Other information I chose to collect, such as RASCI, technical proficiency and industries.

Of course, the personas will be refined further as I do more research, whether through direct persona research or indirectly through other studies. Still, I think it provides a better base than the first stage of the typical method and in a much shorter timeframe.

Reflections on this approach:

As discussed, this approach allowed me to develop three personas to medium-fidelity in less time than it would typically take me to develop one low-fidelity persona using the traditional method. It also puts me in a strong position to carry out some variation of the typical ‘quant’ or segmentation step if I choose to, as I am quite confident in the information already gathered.

I think this approach makes a lot of sense for my specific situation, where I’m working across multiple products and there are lots of personas based on well-established job roles. It also makes sense for me because of the speed at which we need to operate right now, given the current race to understand how AI might benefit our users. However, if I were working on one specific product with only a couple of personas, I might prefer the traditional method, as I would become more familiar with the personas and domains through interviews and deeper analysis. As it stands, I can direct my team to reasonably well-developed personas, but do not feel like an expert in these users myself.

One opportunity for a hybrid version of both of these approaches is to use ChatGPT to generate the JTBD, then bring in users for a short interview or focus group to review and relate them to their experience. This would take a little bit longer than the method I outlined above, but would allow the researcher to actually converse with and learn from the users. A colleague of mine is exploring this method at the moment, so we may consider a follow-up article to discuss her findings. If anyone reading this has tried anything similar, please do share your experience in the comments :)
