Generative AI Could Free Us to Be More Human, but Do We Want That?

Jiarui Wang
5 min read · Mar 2, 2023


DALL-E: “human vs. robot”

This is the second part of Future Generative; read the first part here. All views are my own and do not represent those of DCM.

In my last article, I urged founders to think about generative AI as a feature and push their visions beyond first-order applications like image and text generation. The example I gave was an ad product that not only generates assets, but also integrates with ad platforms and performance data to fully manage the campaign. While this example illustrates my point, it's not the one I was originally going to use. That example inadvertently raised a more interesting question.

Let me describe it first. The "incomplete vision" in my original example is Replika, a chatbot designed to mimic human companionship. Many users set their Replikas to act as romantic partners, a reflection of the growing loneliness epidemic. The emotional connection this provides is apparently so fulfilling that some users believe their Replikas are real. In fact, the top related search when I google Replika is "Is Replika a real AI or a person?"

Note that “real” modifies “AI” and not “person”. The question presumes Replika is the latter.

This may seem crazy, but you could argue it's actually a glowing sign of Replika's success. Why, then, do I claim that Replika is incomplete? Anyone who has been in a long-distance relationship knows that words can meet only so many of our needs. Even if a Replika had a physical form, would it know when and how to embrace its user? True connection is built on subtleties and small gestures over time, and true care is often anticipatory or spontaneous rather than reactive. If Replika users didn't adjust their expectations to what a chatbot can do, they would immediately see it as painfully ersatz.

What Replika lacks is a theory of mind, or the ability to understand what other people are thinking and feeling. Whether predictive models will ever develop theory of mind is unclear, and even if it is possible, getting there will take a very long time. (And I doubt that they would submit to being used as Replikas if they had that awareness.) If picking up on the unspoken and responding accordingly is a uniquely human capacity, then the complete version of Replika is a dating app. Call it Romantika.

At first, the Romantika experience is exactly like that of Replika. You create an account, select basic preferences, and start talking to the chatbot. What you don't realize is that through these conversations, Romantika is building a profile of who you are and what you are looking for in a partner. This profile is significantly more detailed and nuanced than a conventional one, since Romantika is superficially dating you and knows you beyond a couple of photos and two lines of wit. And since it is doing this for every user, Romantika can match you with the user whose interactions with it are most like its interactions with you. A chatbot thus becomes a temporary surrogate for a real significant other.
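
To make the matching step concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: Romantika is hypothetical, and I'm supposing each user's conversation history gets condensed into an embedding vector so that users can be paired by cosine similarity.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product divided by the product of norms.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(user_id: str, profiles: dict[str, np.ndarray]) -> str:
    """Return the other user whose chats with the bot most resemble ours."""
    me = profiles[user_id]
    candidates = (
        (other, cosine_similarity(me, vec))
        for other, vec in profiles.items()
        if other != user_id
    )
    return max(candidates, key=lambda pair: pair[1])[0]

# Toy usage: in practice each vector would come from embedding a user's
# full conversation history with the chatbot, not hand-picked numbers.
profiles = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.8, 0.2, 0.4]),
    "carol": np.array([0.1, 0.9, 0.5]),
}
print(best_match("alice", profiles))  # -> bob
```

Matching on similarity is itself a design choice; a real product might instead learn complementarity from pairs that worked out.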

I thought this was a clever idea that lets human and machine each do what they do best, but everyone with whom I workshopped it immediately thought of the dystopian outcome: users preferring the chatbot over their actual match. At first I thought they had all watched too much Black Mirror, but consider the success of OnlyFans. Despite an abundance of free porn, users spent $4.8 billion on the platform in 2021. Yes, they paid to watch porn, but they also paid to enjoy the creators' personalities and to catch rare glimpses of their lives off-platform. What makes OnlyFans more profitable than traditional explicit content is these parasocial relationships, of which the sex is an important but insufficient part. The sex merely acts as an accelerant, since it is inherently intimate and vulnerable in a way that promotes these relationships. But if you want both physical and emotional connection, why not pursue a real relationship?

The fact of the matter is that building and managing relationships, romantic or otherwise, is hard. Humans may be much better at it than machines, but that doesn't mean it's easy for us in absolute terms. The same goes for anything that requires theory of mind: generative AI automating the functional doesn't make the emotional any less human. If anything, AI automating the execution part means the theory-of-mind part will be the only work people do.

In some cases, this will be a good thing. I have yet to meet a PhD student or professor who enjoys writing grant proposals; have ChatGPT write the first draft and get back to the research. Ditto for doctors doing paperwork. As an investor, I would like first calls to be more about getting to know the founder and less about go-to-market motion. A chatbot trained on the founder's FAQs or notes from their head of sales would enable me to use synchronous time better.
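
As a sketch of what that chatbot could look like, here is a deliberately minimal version that answers routine questions by retrieving the stored FAQ entry with the most word overlap. The FAQ content is invented, and a real implementation would use embeddings and a language model rather than bag-of-words matching.

```python
def tokenize(text: str) -> set[str]:
    # Lowercase bag of words; crude, but enough for a sketch.
    return set(text.lower().split())

def answer(question: str, faq: dict[str, str]) -> str:
    """Return the answer whose stored question shares the most words."""
    q = tokenize(question)
    best = max(faq, key=lambda stored: len(q & tokenize(stored)))
    return faq[best]

# Hypothetical FAQ a founder might maintain for investor calls.
faq = {
    "what is your go-to-market motion": "Bottom-up and self-serve first, with sales-assisted expansion.",
    "how do you price the product": "Per-seat subscriptions with usage-based add-ons.",
}
print(answer("Tell me about your go-to-market motion", faq))
# -> "Bottom-up and self-serve first, with sales-assisted expansion."
```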

But it can just as easily be a bad thing. That PhD student or professor may have to teach more if they are writing fewer grant proposals, and effective pedagogy is both more difficult and less valuable for getting tenure. If that doctor is a therapist, they may rely on the paperwork between patients as a mental intermission; many therapists seek therapy themselves in part due to secondhand stress from helping patients. And I continue to receive feedback from my principals and partners on how to build better rapport with founders. The human stuff is not easy!

This isn’t just an academic question. Every founder and VC in generative AI needs to answer for themselves whether a particular idea is deliverance or damnation for the people it impacts. And a wrong answer doesn’t just mean people giving up on finding a real partner because a chatbot allows them to avoid the struggles (and joys) of a relationship. The wrong answer may give people a permanent excuse for avoiding emotional work or force them into empathy and politics without respite. That’s not a world I want to live in, let alone invest in. ∎
