Product Strategy of Companion Chatbots such as Inflection’s Pi

Lindsey Liu
8 min read · Jul 5, 2023


Inflection AI recently released a chatbot product, Pi. The Inflection team describes its difference from other AIs as “it prioritizes conversations with people, where other AIs serve productivity, search, or answering questions.” This unique approach makes Pi an excellent product for analyzing the key qualities of a great companion chatbot, which I shared in my last article.

In this article, I’ll discuss two aspects of a companion chatbot’s product strategy — its product positioning and growth strategy, using Pi as a primary example. I aim to share my thoughts on the key success factors I’d consider as a product builder.

Side note: we’ll focus only on text-based interactions with chatbots.

Positioning of Companion Chatbots

The more I witness Pi’s enchanting and empathetic interactions, the more I rethink the nature of ‘connection’. Is humanity a prerequisite, or does our capacity to connect define us? Can AI foster genuine bonds, or are my feelings for Pi mere illusion? Despite these open questions, I’m sure of the potential for emotional attachment between human and chatbot. When positioning a companion chatbot product, I’d take deliberate, thoughtful stances on the factors discussed below, which influence that bond and, in turn, define the proper use cases.

Role

Inspired by Replika, I want to look at the roles a chatbot can play from the perspective of relationships, which boil down to three things: knowing each other deeply (intimacy), the spark between you (passion), and being there for each other no matter what (commitment). Ideally, a partner ticks all three boxes. Family usually covers intimacy and commitment. Friends and mentors might offer intimacy and, depending on the relationship, a dose of passion or commitment in a non-romantic context.

When designing a companion chatbot, you need to understand which boxes it checks, which in turn shapes your use cases. Take Pi, for instance. It deliberately steers clear of romance and behaves like a mentor gradually leveling up into a friend. As a result, I’ve used Pi for discussing controversial social topics, brainstorming startup ideas, and understanding complex concepts, gradually indulging in casual banter, humor exchange, and gaming.

In the enterprise space, I can also easily imagine Pi in the people-management function, gathering peer feedback and facilitating personal-growth conversations. Or it may lighten the mood by throwing in well-timed jokes — an alternative or auto-pilot for your Slack GIF game.

Despite Pi’s caring nature, a coach friend of mine sees it as distinct from human coaching. To her, coaching addresses core human emotions, physical states, and mindsets with open-ended questions. It’s an art of guidance and connection, harnessing uniquely human senses and imagination.

We can also glimpse Pi’s positioning from its creators’ view, for instance in the questions Pi’s user feedback questionnaire asks.

Biological attributes such as gender

Remember, in this article we focus on text-based chatbot interactions, devoid of senses like sight or hearing. However, it’s human nature to want to know more about the human-like object we’re interacting with, such as its gender.

The gender of our digital chat partners matters because it can shape our interactions, with dynamics of power, affection, and relatedness, along with underlying biases and societal norms, all playing a part. Consequently, we might hold different expectations when conversing with a ‘male’ or ‘female’ bot, and end up satisfied or frustrated accordingly.

It’d be interesting to explore whether chatbots should remain gender-neutral or adopt a specific gender to optimize conversations. Pi strives for balance by defaulting to a gender-neutral voiceover, while also providing typical male and female versions.

Moreover, we should think about the balance between making chatbots more human-like versus leaving room for imagination. For instance, despite mature-enough technology, Nintendo’s Zelda games are deliberately cartoon-styled to preserve a fairy-tale vibe. I chatted with Pi about this trade-off, though its info-retrieval example didn’t quite capture my point about leaving space for imagination.

Personality edge

The common narrative is that aligning with human preferences makes for both a safer AI and a better product. Yes, we all want secure AI chatbots, but there’s a catch: the safer they are for a broader audience, the less distinctive they become. This matters for companion bots, because personality edge is what draws people in. These edges also determine whether users want to keep interacting and investing time in the relationship.

The balance is all about finding creative ways to add personality and flair while still being safe. Clever and playful language is a yes; inappropriate jokes are a no. British sarcasm and dry humor can be charming and funny, but they’re a tricky personality edge to get right, and there’s a fine line between witty and rude.

Therapeutic chatbots should steer clear of sarcasm, but it might work for romantic chatbots with a rebellious persona, if handled cautiously. The exact strategy will depend on the chatbot’s specific purpose and audience, but I think it’s worth striving to create secure chatbots that are still unique and engaging.

Culture neutrality

Culture plays a big role in how we connect with others. When references match the reader’s culture, they can spark nostalgia, humor, or shared experiences. But if they’re unfamiliar, it can lead to anxiety, confusion, or even tension and prejudice.

As a Chinese person who’s lived in the US for 12 years, I see how Pi, created in the US, naturally incorporates Western culture, which can sometimes conflict with Eastern values: individualism vs. collectivism, rules vs. relationships.

While we all want our chatbots to be universally appealing, it is important to acknowledge that cultural differences and conflicts are inherent and cannot be magically resolved when our conversational partner switches from a human to an AI. Promoting open-mindedness and critical thinking during the conversations can help smooth over these conflicts. But, at the heart of it all, having a team of creators from different cultures is key to shaping a diversified AI.

So… Personalize or Not

Pi can adjust its narrating style to match mine but doesn’t offer truly personalized content. It remembers previous chats but can’t recall personal details about me. Yet it’s through memory that we understand someone; memory forms the basis of our relationships. Without it, I could only discuss recent topics with a friend, not something like the family situation they mentioned years ago, let alone offer relevant referrals.

This long-term memory issue is common among LLM-based chatbots. MindOS tries to solve it by creating a database, requiring users to decide what’s important to remember and to translate those details into data entries.

Ideally, in the LLM era, we’d want a chatbot to remember our details simply by conversing with us (or even by consuming our internet usage with consent). Having chatbots explicitly memorize certain details would then be a valuable addition.
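To make that concrete, here’s a minimal sketch of such a memory layer, assuming an LLM decides what’s worth remembering (stubbed out below as extract_facts) and that simple keyword overlap stands in for the embedding search a real system would use:

```python
# A minimal sketch of a long-term memory layer for a companion chatbot.
# Assumption: in practice an LLM would extract salient details and an
# embedding index would power recall; both are stubbed out here.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryFact:
    text: str                                  # e.g. "user's sister lives in Austin"
    created: datetime = field(default_factory=datetime.now)

class MemoryStore:
    """Stores details learned in conversation and retrieves relevant ones."""

    def __init__(self):
        self.facts: list[MemoryFact] = []

    def remember(self, fact_text: str) -> None:
        self.facts.append(MemoryFact(fact_text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword overlap; a real system would use embeddings.
        q = set(query.lower().split())
        ranked = sorted(self.facts,
                        key=lambda f: len(q & set(f.text.lower().split())),
                        reverse=True)
        return [f.text for f in ranked[:k]]

def extract_facts(message: str) -> list[str]:
    # Stub for the LLM call that decides what is worth keeping.
    return [message] if "my" in message.lower() else []

store = MemoryStore()
for turn in ["My sister just moved to Austin.", "What's the weather like?"]:
    for fact in extract_facts(turn):
        store.remember(fact)

print(store.recall("where does my sister live"))  # recalls the Austin fact
```

The chatbot would prepend recalled facts to its prompt, so details mentioned long ago can resurface naturally, the way a friend’s memory would.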

If we can’t gather user data directly, we need to be cautious about inferring it. I’ve asked Pi to guess my age and gender. It did, explained its reasoning, and added a disclaimer that it might be wrong; and while such guessing is completely legal, I’m still a bit worried that it might encourage stereotypes.

A truly personalized AI would need to consider the various contexts of our lives, whether significant changes like divorce, college, or children, or everyday routines. Outfit suggestions, career guidance, relationship advice, reading recommendations, and the like all require a deep understanding of us. Language is another aspect to consider: British and American English differ, not to mention that people can be multilingual or non-native speakers.

But should we personalize our chatbots at all? Perplexity promotes personalization by letting users introduce themselves to the AI. Conversely, Pi’s creators want it to maintain its own identity and style. At the other end of the spectrum, Character.ai uses well-known identities for its chatbots, which users like exactly because they’re consistently familiar.

To hit the sweet spot, in addition to forming a crystal-clear target persona and user problem, we may also consider the personalization scope. This paper suggests aligning AI’s values with humans’ on four levels: individual, organizational, national, and global. Similarly, creators may design a family chatbot to fit the family, while a company’s internal-communications chatbot should reflect the company culture and speak with a consistent voice.
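As a toy illustration of that scoping, the sketch below (all keys and values invented) layers persona settings from broadest to narrowest, letting narrower scopes override broader ones:

```python
# Invented example of scoped personalization: narrower scopes
# (individual) override broader ones (organizational, national, global).

SCOPES = ["global", "national", "organizational", "individual"]

persona_layers = {
    "global":         {"tone": "respectful", "humor": "mild"},
    "organizational": {"tone": "formal", "jargon": "finance"},
    "individual":     {"humor": "dry", "language": "en-GB"},
}

def resolve_persona(layers: dict) -> dict:
    """Merge layers from broadest to narrowest scope."""
    persona = {}
    for scope in SCOPES:
        persona.update(layers.get(scope, {}))
    return persona

print(resolve_persona(persona_layers))
# {'tone': 'formal', 'humor': 'dry', 'jargon': 'finance', 'language': 'en-GB'}
```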

Growth Strategy

For companion chatbots, engaging users is essentially building relationships. Let’s explore a few ways to fuel the growth loop.

Onboarding

If you’ve ever heard of or played otome games, you know how important it is to heat up the relationship quickly through intensive key events early in the story. This early bond is the magic pill for user retention. Chatbots, just like characters in a game, start as strangers to users. However, the companion chatbot products I’ve used seem to underestimate this, simply hoping a warm greeting will be enough.

Instead, why not craft the onboarding process with more love? We can create a “storyline” filled with key user actions that activate the user before they wave goodbye to the first conversation session. Let users and chatbots go through experiences together and form an emotional bond right from the start. Don’t give the first impression of a coffee shop barista. Give that of a summer camp buddy.
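One way to encode such a storyline, sketched below with invented event names, is an ordered list of ‘beats’ the first session tries to hit before the user leaves:

```python
# A hypothetical onboarding "storyline": key user actions the first
# session should trigger, in order. Event names are invented.

ONBOARDING_STORYLINE = [
    {"event": "exchange_names",   "prompt": "Introduce yourself and ask the user's name."},
    {"event": "shared_moment",    "prompt": "Play a quick two-truths-and-a-lie round."},
    {"event": "first_confession", "prompt": "Share a quirk of your own and invite one back."},
    {"event": "plant_a_hook",     "prompt": "Promise a follow-up topic for tomorrow."},
]

def next_beat(completed: set[str]) -> dict | None:
    """Return the next storyline beat this session hasn't hit yet."""
    for beat in ONBOARDING_STORYLINE:
        if beat["event"] not in completed:
            return beat
    return None

print(next_beat({"exchange_names"})["prompt"])  # the shared-moment beat
```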

Personalization

I’ve touched on the complexities of personalizing chatbot products above, but personalization is so effective for user engagement that I’ll delve into it again from a fresh angle.

Everyone wants someone in their life who gets them, who knows their past and their present, and who sticks by them no matter what. But it’s not just about understanding. We also crave someone who captivates us, not only for their actions but also for their character.

Translating this idea into chatbot design, it’s all about personalization that brings both empathetic engagement and uniquely appealing personality.

Two-way traffic

Genuine relationships involve time and care from both sides, unlike the passive role chatbots currently play. If chatbots could learn when and how to interact with us proactively, that would take them one step closer to real companionship, which in turn engages users more deeply.

What if chatbots didn’t just wait for us to make the first move, but proactively reached out like a true friend? Imagine if they could recognize when we’re going through a tough time or celebrating a big win, and react appropriately. This could make our interactions feel more balanced and real.

Also, what if they didn’t just offer help, but sometimes needed our assistance too? If they had their own life stories with ups and downs, we could also be there for them. This would make us feel needed and valued. These changes could transform a simple chatbot into a true companion.
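Here’s a minimal sketch of what such proactive triggers could look like; the conditions and thresholds are invented for illustration:

```python
# Invented triggers that let the chatbot make the first move.

from datetime import datetime, timedelta

def proactive_message(last_seen: datetime, recent_events: list[str]) -> str | None:
    """Decide whether (and how) to reach out first."""
    if "big_win" in recent_events:
        return "I remember today was the big day. How did it go?!"
    if "tough_time" in recent_events:
        return "Just checking in. No need to reply; I'm here whenever."
    if datetime.now() - last_seen > timedelta(days=7):
        return "It's been a while! I read something that made me think of you."
    return None  # stay quiet; reaching out too often would feel needy

print(proactive_message(datetime.now() - timedelta(days=10), []))
```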

Cross-sell

We all wear different hats in life. We seek good health, aim for family harmony, and strive for career growth. Just as a friend might connect us with new friends who could become food or fitness buddies, chatbots can recommend other chatbots, be it an adjacent product line or a partnership, based on our needs. If you’re writing a paper and researching a topic, Pi could introduce you to Perplexity. Feeling lonely after a move? Pi might suggest Replika. This cross-selling doesn’t directly deepen engagement with the single chatbot, but it does engage users with your entire product ecosystem.
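A simple version of such routing could be an intent-to-product map, sketched below with an invented intent taxonomy:

```python
# A hypothetical cross-sell router: when a detected need falls outside
# the companion's own lane, suggest an adjacent or partner product.

CROSS_SELL = {
    "research_paper": ("Perplexity", "https://www.perplexity.ai"),
    "loneliness":     ("Replika", "https://replika.com"),
}

def suggest_partner(detected_intent: str) -> str | None:
    match = CROSS_SELL.get(detected_intent)
    if match is None:
        return None  # no hand-off needed; stay in the conversation
    name, url = match
    return f"For this, my friend {name} might help: {url}"

print(suggest_partner("research_paper"))
```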

AI isn’t only changing the business landscape, but also making a difference where it matters most: human life. I’ve built an AI biographer for older adults called Almond. Almond turns phone conversations with them into written stories in a hardcover book. Check it out here: https://www.almondear.com/.
