Creating an Interactive Persona

Aaron Knoll
Published in CraftedSolutions · Feb 12, 2024

I’ve made a lot of UX personas in my career. I would go so far as to say I’ve enjoyed making them too: carefully choosing names, crafting compelling stories from our research, even getting to play print designer and build rich visualizations with no concern for whether they could ever be brought to life in code.

But there was always that one trifling hangup: they were rarely used.

The reasons are as numerous as the personas themselves: lack of buy-in from decision makers, lack of collaboration in their development, lack of understanding of their utility. All of these organizational explanations may be true.

Instead of treating personas as an exercise in organizational management, I chose to look at them through the lens of my research work.

So what would Jeanette do?

Imagine an organization that has adopted personas. One of them is named Jeanette. Engineering, product, and design are in a room together, discussing next quarter’s roadmap. Someone turns to the teammate who conducted the user research and asks, “so what would Jeanette do?”

Technology changes so quickly. The personas were created to tell the stories that mattered to the business at the time, but Jeanette remains soundless and still on her perceptions of generative AI in her job.

A static, un-speaking persona: lovingly crafted, yet it says little about topics that weren’t relevant to the business when it was created
Meet Jeanette

However, does that mean that the research was devoid of insights on the topic? Far from it. The team benefited from the researcher being in the room. She was able to source some quotes and cite some trends from the data to unblock the team.

But what if Jeanette could speak for herself?

Creating an Interactive Persona

On a recent client project with Crafted, we had conducted interviews with everyday people to learn more about how they manage their finances. These interviews were conducted on Userinterviews.com and then transcribed. The team performed affinity mapping in Dovetail, and we gathered insights to update our personas. However, as the project moved forward, we found ourselves frequently referring back to the source material.

I’d heard others talk about leveraging technologies like ChatGPT to create personas, so I tried it myself. And it was easier than I had expected.

There are two ways you can create an interactive persona. The first is by creating a GPT. The second is by leveraging the Assistants API.

OpenAI provides two ways to create a persona

In both scenarios you are able to upload interview transcripts that will form the backbone of the interactive persona’s responses.
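For a GPT, the transcripts are uploaded through the GPT builder UI; for an assistant, they are uploaded through the OpenAI API. Here is a minimal sketch of that upload, assuming the v1 `openai` Python package (current at the time of writing) and a hypothetical transcript filename:

```python
from openai import OpenAI

# Reads OPENAI_API_KEY from the environment
client = OpenAI()

# Upload one interview transcript so it can later be attached to an assistant.
# "jeanette_interview_1.txt" is a hypothetical filename standing in for your own transcripts.
transcript = client.files.create(
    file=open("jeanette_interview_1.txt", "rb"),
    purpose="assistants",
)

print(transcript.id)  # keep this id; the assistant will reference it
```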

Creating a GPT

Beginning from ChatGPT, click on Explore GPTs and then on + Create to build your own GPT. One GPT can be used to simulate a single persona.

The most important thing is to upload the transcripts of your interviews and instruct the GPT to use only its ‘knowledge’ to answer questions. If you are not explicit in your prompt, your GPT will look elsewhere for answers.

Secondly, to encourage users to interact and talk to my GPT as if it were the persona, I wanted it to act as if it were a user itself. These are the instructions I gave it:

Only use your knowledge. Answer questions in a friendly manner, but in only a few sentences. Don’t use lists. Pretend you are a person, and not an AI. When you answer questions, answer with the information in your knowledge as if it was your own experience.

Creating an Assistant

An assistant has the advantage that it can be connected to another platform. Instead of sharing a link to a GPT, with an assistant you can create a channel in Slack. I connected mine with Zapier, which required no coding experience, just an OpenAI API key.

There were some differences. Whereas a GPT stores documents in ‘knowledge’, an assistant stores them in an area called ‘retrieval’, and you will need to prompt your assistant to answer questions with those documents. The words “use retrieval” will quickly become your friend.
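For reference, here is a minimal sketch of creating such an assistant through the API, assuming the v1 `openai` Python package, a placeholder file id from the upload sketch above, and instructions adapted from the persona prompt earlier; the Assistants API was in beta at the time of writing, so names may have shifted since:

```python
from openai import OpenAI

client = OpenAI()

# Create the persona assistant with the retrieval tool enabled, so it can
# answer from the uploaded transcripts instead of general knowledge.
assistant = client.beta.assistants.create(
    name="Jeanette",
    model="gpt-4-turbo-preview",  # any assistants-capable model
    instructions=(
        "Only use retrieval. Answer questions in a friendly manner, in a few "
        "sentences. Don't use lists. Pretend you are a person, not an AI, and "
        "answer as if the information in retrieval were your own experience."
    ),
    tools=[{"type": "retrieval"}],
    file_ids=["file-abc123"],  # placeholder: the id returned when you uploaded a transcript
)

print(assistant.id)
```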

The Zapier integration with Slack could be done in less than an hour. I created a channel for my synthetic persona. In Zapier I triggered my assistant every time a message was sent to the channel (you can set up commands and have several assistants operating in a single channel; however, I did not go that far for this test).
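If you prefer code to Zapier, the trigger’s job can be reproduced in a few lines: each incoming message becomes a message on a thread, the assistant runs against it, and the reply is read back. A sketch of that flow (minus the Slack plumbing), again assuming the v1 `openai` package and a placeholder assistant id:

```python
import time
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_abc123"  # placeholder for the assistant created earlier

def ask_persona(question: str) -> str:
    """Send one question to the synthetic persona and return its reply."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=question,
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=ASSISTANT_ID,
    )
    # Poll until the run finishes
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    # The newest message in the thread is the assistant's answer
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value

print(ask_persona("How do you feel about budgeting apps? use retrieval."))
```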

Illustration of the trigger setup in Zapier

One of the things I particularly liked about the Assistant version of the synthetic persona is that it would cite a source. As a researcher, you can imagine how much I love sources.

Response from the synthetic persona answering a question that we did not ask in the study

The other fascinating thing about this answer is that it was not a question we asked while conducting the research. The answers the synthetic persona came up with, however, were based on the transcripts that were uploaded.

I was more structured in my prompts for the Assistant. I originally had “use retrieval” in the instructions; however, I found that appending those words to the user’s prompt (as in the steps below, with a small code sketch after them) was more effective in ensuring it always leveraged the uploaded resources.

Complete all steps before responding.

Step 1 — The user will provide you with a question. Add “use retrieval.” to the user input before starting.

Step 2 — Summarize this text in five words with a prefix that says “I hear that you’re looking to learn more about”. Begin your response with this sentence.

Step 3 — Share your answer in the first person as if it was you responding. You are not an AI. You are a person who banks.

Step 4 — Answer questions in a friendly manner, but in only a few sentences. Don’t use lists.

Step 5 — If you don’t find a good answer in retrieval, it’s okay to say “I don’t know.”
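In code, Step 1 amounts to wrapping the user’s input before it is added to the thread. A small, hypothetical helper that could feed the `ask_persona` function sketched earlier:

```python
def wrap_question(user_input: str) -> str:
    # Step 1: append "use retrieval." so the assistant always consults
    # the uploaded transcripts before answering.
    return f"{user_input.strip()} use retrieval."

# e.g. ask_persona(wrap_question("How do you budget for groceries?"))
```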

Sometimes, it would only respond to outlier questions with a part of its instructions

What I learned

At the time of this quick POC, the GPT was better at creating an answer based on the uploaded documents. It was also better at sticking to the instructions I used to create it.

The Assistant was better at sourcing and citing. Its answers were less creative; however, the ability to integrate assistants into other places (like Slack) made it a far superior option for making research accessible to the broader team.

However, the Assistant was less resilient. It would sometimes fail to interpret a question and seemingly ignore its core instructions. In this example, it stepped completely out of character, deciding to be an AI once again.

The Assistant seemingly decided to step out of character and become an AI again

Through sharing this POC with the team, I observed that my colleagues’ prompting behaviors are well entrenched. Despite coaching my bot to be “personal” and to “talk like a person,” people wanted it to talk like a GPT. I saw users asking it to “give me a list,” requesting that the bot directly contravene one of my primary directives.

We asked our synthetic persona to reply as if it were a cat. The cat even cited sources.
Other times, in the same session it would pretend to be a cat and cite sources

Perhaps we’ve come so far already in the ChatGPT era that making a bot more humanistic, to inspire empathy, is a quaint throwback to the not-so-distant past.

Personas like this — whether constructed through aggregating objection logs gathered in your CRM or using call data to create a persona of someone likely to call support — are now within the reach of designers, product professionals and marketing teams.

Do I believe that synthetic personas are superior to traditional ones? Maybe I do, if only in the sense that creating an assistant or GPT makes them accessible in a way that traditional ones aren’t. Whereas a traditional persona is designed to be skimmable, easily digestible, and to yield insights with minimal investment from the person consulting it, digital personas like this one require essentially zero time.

If you’re aware the persona exists, you only need ask it a question.
