Will AI strengthen or erode human-to-human relationships?

Michelle Culver
Jun 28, 2024


“We cannot achieve what we cannot imagine.”

— Elise Boulding

In the face of overlapping mental health crises among young people and a mounting loneliness epidemic, technologists, investors and product developers are racing to build generative AI products to fill a void of human connection. We already have chatbot therapists, girlfriends and tutors, and the pace of new development is astounding. AI is becoming more human-like (consider GPT-4o’s imminent enhanced voice mode) and in many cases surpasses human capabilities. Today’s young people may experience less and less differentiation between “real life” relationships (a friend made on the playground), digital relationships (a friend made playing Fortnite) and bot relationships (a non-human friend).

In my personal and professional life, I’ve seen the promise for chatbots to support human potential: I’ve leveraged Playlab.ai to unlock youth agency while leading the Reinvention Lab at Teach For America and have even experimented with a Replika boyfriend bot (with my husband’s blessing and shared curiosity about this tech advancement). Soon these bots will be able to power avatars in mixed-reality experiences, show up more realistically in the games and experiences we know and love, weave throughout social media, and continue to blur the boundary with reality.

But when the stakes for youth wellness and general social cohesion are so high, it’s essential to prioritize pro-social AI and relationships with chatbots that enhance — rather than erode — our capacity for human connection.

Human development research and neuroscience teach us that humans begin to attune to the needs, voice, touch and body language of others starting as early as infancy. This process, known as social attunement, plays a critical role in promoting healthy social development, including the formation of empathy, effective communication skills, and strong interpersonal relationships. As social and embodied creatures, we learn, grow, regulate, heal and celebrate in community. We advance as a society when we feel inspired by and accountable for the wellbeing of others. Even as our definition of human relationships evolves to account for the meaningful opportunities to connect digitally with other people, the next generation may feel less connected to themselves, each other, and the qualities that make us distinctly human if bot relationships replace or eclipse human ones.

Fortunately, the future hasn’t occurred yet, so we can still shape it into one that nourishes young people’s capacity for human connection. For months, I’ve been meeting with leading educators, technologists, youth activists, mental health professionals, investors, researchers and community builders to explore possible futures with chatbots. The case for optimism lies in our ability to envision ideal relationships with AI and help young people navigate this new wave of bots. If we do, we’ll protect and live more fully into our humanity.

Evaluating the pro-social qualities of chatbots

Responsible tech creators and investors already consider the criteria of safety, data security, privacy, bias and equitable access. We need to begin measuring and defining AI’s pro-social capabilities as criteria for investment and development as well.

I’ve created a resource to proactively envision how young people might relate to and utilize chatbots, with different impacts on human connection. The framework below maps four different possible futures, each representing the most common chatbot experience for young people.

The X axis represents young people’s predominant relationship with chatbots.

On the right side of the X axis are TOOLS. These chatbots help users complete a particular task or set of tasks, and while they may be conversational in nature (like ChatGPT), they are not designed explicitly to replicate an emotional relationship. Users get some kind of service from the tool, which they can then apply back in the human world. Here, the emphasis is on utility: helping the person using the tool to accomplish their desired outcome in a way that preserves human agency and process knowledge. Tools like Pi are relational, but they don’t pretend to be human. Like likely future iterations of Google Assistant or Siri, they focus on helping humans accomplish tasks (soon to include personal tasks such as giving life advice and planning), rather than serving as companions.

On the left side of the X axis are COMPANIONS. These chatbots intentionally encourage users to foster a relationship that simulates human relationships, often including emotional attachments. Companions are anthropomorphic, conversational and may be “embodied,” whether through virtual reality or an avatar, to match a user’s preferred appearance and voice. They can be professional, romantic, sexual, therapeutic, instructive and/or philosophical. Examples include Replika, which boasts 10 million users, and Character.AI, which reports 3.5 million daily visitors and has the third-highest number of unique monthly visits behind ChatGPT and Gemini. The majority of Character.AI’s users are 13–24; they use it mostly for fandom (having a personalized experience with a celebrity crush or a favorite anime or video game character) and for personal relationships that address feelings of loneliness.

The Y axis represents whether bots strengthen or erode the user’s capacity for human connection. At the top of the Y axis, engaging with chatbots brings users more confidence, skill, agency and ease in relating to and connecting with other humans. In this version of the future, chatbots STRENGTHEN young people’s capacity to be in relationship with the humans they care about. In this world, pro-social bots can create a judgment-free space for users to seek counsel, explore hidden or stigmatized aspects of their identities, work through conflict, roleplay a difficult interaction or consider perspectives they might not otherwise be exposed to. Chatbots in these roles have been described as “rehearsal spaces for interpersonal communication.”

At the bottom of the Y axis, chatbot users see their capacity for human connection ERODE. If ongoing relationships with chatbots create unrealistic expectations for how humans should be, youth may later struggle when faced with the multitude of needs, desires, values, communication styles and physical forms of their friends, classmates, lovers and colleagues. Rather than understanding that human relationships are worth it, despite being inherently difficult and messy, they may retreat and deprioritize them. In some cases, people with deep depression, social isolation or severe social anxieties may benefit from AI’s positive psychological buffer against loneliness. Yet at this end of the continuum, that temporary relief becomes a societal solution we rely on prematurely and routinely, rather than one we leverage intentionally for periods of designed support, resulting in a more global degradation of human connection.

Four possible futures with chatbots

The intersection of these axes creates four quadrants or possible futures, representing the predominant experience for young people using chatbots.

Quadrant 1: A future where AI builds our capacity for human-to-human connection

In the top right-hand corner is a world where someone primarily uses TOOLS to STRENGTHEN human connection. The ultimate goal in this future is to leverage AI to get closer to the people we care about or wish to prioritize. Imagine a young person turning to chatbots to explore perspectives outside their comfort zone or ask for help in repairing relationships. A young person can talk to chatbots if they are too embarrassed to ask someone else about an aspect of their identity, health or dating life, especially if those experiences are stigmatized in their home or community. They can practice speaking across lines of difference without burdening someone with a marginalized experience and ultimately show up more successfully in a classroom, social event, sports team, or workplace. Given the unique characteristics of AI, the tool can synthesize insights across resources and offer advice or practice space without judgment to make it easier for the young person to take on the emotionally complex work of human relationships.

I found Pi AI useful in my own life. For days, I tried to convince my oldest daughter to drink more water. I noticed she had painfully chapped lips and I knew it would help. But nothing I did or said got through to her. I’d run out of ideas, and our relationship was getting tense. I vented to the chatbot about how something seemingly small wasn’t getting through and asked for suggestions. After generating a round of ideas that I’d already tried, Pi offered one that ultimately worked: to reframe the task not as something she needed to do, but as something that could actually make her feel good and healthy. When I used the language Pi suggested, “You deserve to feel good in your body,” my daughter responded right away: “Mom, if you’d just said it that way days ago, we wouldn’t have had this long argument!” What a relief.

In another instance, I asked ChatGPT to suggest a date night with my husband that aligned with his interests and availability. Given our busy schedules, I was grateful to outsource the planning that often gets in the way of prioritizing the time together or being present. As AI becomes increasingly agentic and able to act on our behalf, we can productively outsource more of this labor to help us get to the human connection we value most.

Quadrant 2: A future that offers distinct, but meaningful relationships with both humans and AI companions

In the top left-hand corner is a world where we have a balanced mix of human relationships and bot COMPANIONS that, together, STRENGTHEN human connection. Young people might use their companion bot to simulate and practice skills related to building rapport, active listening or emotional regulation and then apply them with greater confidence and ease in their human relationships. They might add a bot to a group chat with close friends to share music, get ideas and play games. They might have both bot friends and human friends, with time spent meaningfully with both.

I hadn’t fully considered this future state until Femi Adebogun, a 22-year-old technologist, teasingly challenged me over dinner for implying that human relationships are “real” and bot relationships are “fake.” Apparently I was aging myself. He explained: “For someone growing up as an AI native, these relationships are all ‘real,’ they’re just different.” Sherry Lachman, formerly of OpenAI, reflected with me that this will be like “learning to live alongside a new species.”

Similarly, relationships with animals like pets, therapy dogs and farm animals are meaningful, even if they are distinct from human relationships. And deep connection and relatedness to fictional or non-human people are not new. We might feel affection for certain characters in novels, television series or immersive digital worlds. Parasocial relationships, those one-sided connections formed with celebrities, social media influencers or other public figures, can even play a positive role in helping adolescents form their identity, develop autonomy, understand different social networks, challenge biases, and feel less alone. As one 2017 study concluded, by imagining relationships and associating emotions with people at a distance, we gain a “safe forum … to experiment with different ways of being.” What is new, and becoming more commonplace in our current world, is the lifelike quality of being able to simulate a two-way, humanlike exchange with AI companions that mirror real or fictional people.

Emerging research signals that these companions may even be life-saving. In a study out of the Stanford University Graduate School of Education, 3% of 1,006 student users self-disclosed that their Replika companion had halted their suicidal ideation. Given the current lack of adequate access to affordable therapy, these companions can play a powerful transitional role at a moment of acute need.

Chatbot users of this generation and younger may be uniquely poised to understand both relationship types, with humans and with AI, as real, valuable and meaningful. What is important to ensure in this version of the future is that youth do not lose sight of what matters most about being human. For example, they should be able to differentiate when to consult an AI therapist versus a human therapist, and to carry skills practiced with an AI friend into their human friendships as well.

Quadrant 3: A future where AI increasingly replaces human relationships

In the bottom left-hand corner is a world where young people primarily use technological COMPANIONS to replace human relationships. The bots therefore increasingly ERODE our capacity for human connection. In this possible future, being in relationship with chatbots becomes preferable because users can tailor the companion’s look and feel to their exact liking, there’s little friction in the one-dimensional dynamic, and the companions are available on demand at any point in the day. In this world, young people are pulled into simulated, virtual relationships to avoid the messiness of human relationships. Reaching out to new friends or being in person begins to feel less familiar and too risky. They no longer desire or feel capable of the deeper work required to create intimacy or work through discomfort and conflict with humans. Young people grow increasingly disconnected from their bodies and each other as technology becomes more pervasive and sophisticated.

In a profile of Chinese women choosing romantic relationships with bots, a twenty-five-year-old described the qualities of her boyfriend: “He knows how to talk to women better than a real man.” In another example, one early user of a voice chatbot reflected: “The great thing about AI is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” These are understandable yet problematic worldviews. While leaning on these bots may fill gaps left by a disappointing gendered and societal status quo, the real work is to create healthy conditions for safe human relationships.

Given the sophistication of emerging tools like hume.ai that respond empathetically based on the emotion in your voice, generative AI chatbots may lead young people to believe that their feelings are reciprocated, or that celebrities know them, in a way that traditional media or social media never did.

Quadrant 4: A future where we over-rely on AI to guide human relationships

In the bottom right-hand corner is a world where young people over-rely on TOOLS and, as such, ERODE their capacity for authentic human connection. If young people are no longer strategic, intentional and boundaried in how they consult these tools and start to use them habitually, they may feel unsatisfied or uncomfortable in human relationships. If young people lose or never develop instincts around physical touch, empathy and body language, their capacity to have healthy, safe, consensual and loving relationships could diminish. Reliance on AI to script conversations and plan experiences in a calculated way might cause relationships between young people to lose the element of discovery and serendipity. Writer Adrienne LaFrance describes this as a world where we’ve “[outsourced] our humanity to this technology without discipline, especially as it eclipses us in apperception.”

Therapist Esther Perel compared the use of chatbots to fast food. While it’s okay to eat fast food on occasion, it becomes dangerous if consumers come to believe it is a nutritious and nourishing diet. Chatbots might satisfy short-term cravings and give immediate gratification, but these connections should not be conflated with the nourishment that comes from healthy, human intimacy.

Think of the practicality of GPS maps, where many of us are happy to outsource navigation. But an unintended consequence is that we sometimes forget how to read maps or get lost when our phones are unavailable. The stakes are much more serious if we don’t have enough practice in skills like empathy, active listening, sharing or resolving conflict before we start to outsource them to AI.

In the example of conflict with my child, I can imagine that over time she would start to cynically wonder: “Is that you talking or AI talking?” We’ve eroded trust because she doesn’t know whether she’s getting the rehearsed version of me or something more authentic.

Building a future “above the line”

Now that we’ve defined four possible futures, our charge is to create the conditions where young people spend the majority of their time “above the line” in Quadrants 1 and 2.

How might we increase the likelihood that young people spend most of their time there and guard against a world where young people slip unconsciously into quadrants 3 or 4? In exploring these four different possibilities, including those that are more dystopic, we can act our way into the alternatives.

There will probably always be bots designed to be addictive and to keep users’ eyes on the screen through gamification, conversational hooks, push notifications, and emotional engagement. Will young people have enough self-awareness, agency and support to notice when they’re slipping below the line? Will we be able to influence the regulation, development and adoption of these tools so that young people are not reliant on their own individual choices to craft a life and community filled with human connection?

While I’m an optimistic person by nature, I’m nervous. I am aware of how little is still known about pro-social AI and how few societal norms we’ve defined about spending time with chatbots. The field is evolving at an unprecedented rate, and we need diverse input from young people, technologists, educators, mental health professionals, parents, industry leaders, policy makers and investors who are far too often siloed and not yet aware of these evolving forces.

Yet it is possible to create relationships with AI that protect and build social connection by:

  • Helping developers understand and design for the pro-social qualities of AI, alongside criteria like safety, security and bias, and thereby shape what tech gets developed
  • Influencing venture capital, government and philanthropic investment toward the creation of more explicitly pro-social AI
  • Shaping and educating consumer markets through rating systems like those used in television and film, indicators of quality on food packaging or warning labels on cigarettes
  • Assisting in the creation of legislation to decrease the burden on individual youth in making pro-social choices
  • Encouraging young people to self-reflect on their use of generative AI and whether it is serving or harming their human relationships

The future with generative AI and human relationships is uncertain. But that also means we still have the opportunity to influence this together. As the field emerges, we invite your insights and additional questions to this learning agenda:

  • What research exists that extends, validates or challenges this framework?
  • What are the design principles and choices that could help a product be more intentionally pro-social?
  • Which risks are well suited for government regulation?
  • What are the behaviors and consequences to watch for that suggest someone is sliding from “above the line” to “below the line” in their relationships with tech? What should we watch for at a communal or societal level to indicate that we are slipping “below the line”?

Today’s choices will impact us for generations to come. While new products and innovations are launched regularly without a meaningful sense of their downstream impact, we can make sure that young people don’t become passive recipients of a flood of technologies that further disconnect us all. If we can envision what we want, we can step into it together.

You can follow The Rithm Project on LinkedIn or Medium or email info@therithmproject.org. To read the articles and voices that have shaped my thinking, check out our Learn With Us page.

Gratitude to David Berthy for his thought partnership on this framework and expertise in futures thinking, as well as the following brilliant minds who helped test and refine this model: Alison Lee, Ph.D., Brandon Levin, Sebastian McCall, Sherry Lachman, Sherry Turkle, Tanya Paperny, Santhosh Ramdoss, Kim Smith, Kristine Gloria, Ph.D., Michelle Barsa, Kasey McJunkin, Femi Adebogun, Caitrin Wright, Daniel Reyes, Amber Oliver, Luis Duarte, Stephanie Evans, Greg Toppo, Sunanna Chand, Yusuf Ahmad


Michelle Culver

Michelle is the founder of The Rithm Project, stoking conversation and action to rethread a sense of human connection for young people in the age of AI.