Photo by Alexander Sinn on Unsplash

Love AI.ctually

Design for Tension. Emory CS 485: HCI

Noah Okada
8 min read · Mar 31, 2022


“Man is by nature a social animal.” — Aristotle

As social beings, we all crave connection. The experience of developing deep and meaningful relationships is an innate part of being human. However, in this increasingly digitized world, many struggle to find true “connection.”

In this design sprint, my team attempted to design a chatbot to help alleviate this tension of finding connection in the age of online dating.

THE TENSION

In recent years, online dating has become ubiquitous. Alongside this rise has come a corresponding rise in awkward conversations, born of how difficult it is to adequately communicate emotion over text.

Although online dating platforms are meant to help people find romantic partners, there is a tension in seeking true connection online. This tension ranges from the awkwardness of failed pickup lines to the confusion of sharing your heart online.

THE IDEA

What if we built a flirtbot? 🤖😏

To address this tension, we sought to develop a conversational agent that could train individuals to develop interpersonal connections. We envisioned an AI that could help people learn to date, chat, and flirt. This chatbot could help users practice making connections online by asking deep interpersonal questions. We decided that the best implementation would be a flirtatious chatbot, or, as we like to call it: a flirtbot.

THE SCIENCE

In order to design a solution that truly addressed this problem, our team had to do some digging into the science of interpersonal connection. We looked to three sources for inspiration: psychology literature, romantic media, and online dating platforms.

THE PSYCHOLOGY:

In searching for answers about how to create interpersonal closeness, we investigated several sources to understand how humans interact with each other when developing relationships. These resources highlighted several variables within a conversation that can lead to a successful relationship. We used the framework provided by Aron et al. (1997) to generate a list of questions that could help surface these variables.

THE MEDIA:

We then used popular media sources to refine our understanding of how these types of questions are presented in the real world. We looked into resources such as the widely popularized 36 Questions to Fall in Love and ice-breaker games such as We’re Not Really Strangers.

These platforms helped us to understand how the broader public perceived the act of asking questions to get to know each other.

We’re Not Really Strangers game

THE ONLINE PLATFORM:

We then took inspiration from popular online dating platforms such as Tinder and Bumble to think about how our users might interact with our chatbot. We used the design and interactivity of these platforms to inspire our own design direction.

Tinder and Bumble

THE USER

After establishing a framework for how to ask questions that build genuine connection, we set out to understand our users. By examining the average age of users on online dating apps and general reviews of the online dating experience, we determined who our users would be. We established that our users were 18–30 year old singles who were nervous about the prospect of building relationships over text. We assumed that these users were looking to find genuine connection by learning to develop relationships characterized by trust.

User Profile

THE IMPLEMENTATION

Finally, we used FlowXO to build an interactive flirtbot. The chatbot was built through an iterative design process that consisted of several rounds of user testing.

The chatbot was designed to maintain three different levels of conversational intimacy:

  • Intimacy 1: General Greetings

In this stage of the conversation, the bot sought to get to know the user’s name and interests. This allowed the chatbot to break the first conversational barrier and begin building rapport with the user.

  • Intimacy 2: Casual Conversation

In this stage of the conversation, the bot attempted to break the ice by asking broad, lightly targeted questions. The user’s answers allowed the chatbot to begin narrowing toward targeted questions that truly matched the user.

  • Intimacy 3: Deep Questions

In the final stage, the bot would use the responses to the previous questions to decide which of the “36 Questions to Fall in Love” to ask. By asking these questions, the bot could engage the user in a deep and meaningful conversation that could build a trusting relationship.

The layout of different flows
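For the curious, here is a minimal sketch, in Python, of how a flow like this could work. It is purely illustrative: the actual bot was assembled visually in FlowXO, and every prompt, keyword, and function name below is a stand-in rather than our real configuration.

GREETINGS = [
    "Hey there! What's your name?",
    "Nice to meet you! What do you like to do for fun?",
]

CASUAL = [
    "What would constitute a perfect day for you?",
    "When did you last sing to yourself?",
]

# Deep prompts tagged with rough themes, loosely drawn from the 36 Questions.
DEEP = {
    "family": "How do you feel about your relationship with your family?",
    "dream": "Is there something you've dreamed of doing for a long time? Why haven't you done it?",
    "grateful": "For what in your life do you feel most grateful?",
}


def pick_deep_question(history):
    """Choose a deep question whose theme appears in earlier answers,
    falling back to the gratitude prompt when nothing matches."""
    answers = " ".join(answer.lower() for _, answer in history)
    for theme, question in DEEP.items():
        if theme in answers:
            return question
    return DEEP["grateful"]


def run_flow(get_reply):
    """Walk the user through greetings -> casual chat -> one deep question."""
    history = []
    for question in GREETINGS + CASUAL:
        history.append((question, get_reply(question)))
    deep_question = pick_deep_question(history)
    history.append((deep_question, get_reply(deep_question)))
    return history


if __name__ == "__main__":
    run_flow(lambda q: input(f"Bot: {q}\nYou: "))

In the actual bot, these stages were built as separate FlowXO flows, which is roughly what the layout above depicts.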

User Test Stage 1: Alpha

In our initial user testing, we asked three individuals to interact with an early version of the chatbot. This experiment was designed to assess how users would respond to the concept of “flirting” with a conversational agent, while also seeking to understand users’ general behavioral patterns.

User Testing Stage 1

To understand how our users felt about the product, we asked them to respond to a series of questions that assessed the chatbot experience across three domains: interpersonal closeness (connection), naturalistic conversation, and overall efficacy.

User Feedback:

  • Interpersonal Closeness:

Our users felt that the chatbot effectively delivered interesting questions that engaged them in self-reflection. However, they felt that the bot did not flexibly guide the conversation or truly address them as individuals.

  • Naturalistic Conversation:

Our users felt that the transitions from greetings to questions were smooth and natural. However, they felt that the texting style and tone were “too romantic” or “too quirky” and did not feel realistic.

  • Overall Efficacy:

Our users indicated that the multiple-choice questions were much easier to use compared to the typed response questions. They also indicated that the chatbot experience felt too lengthy.

Overall, they provided positive feedback about the experience of flirting with the chatbot; however, they identified key areas where the chatbot needed to improve.

User Test Stage 2: Beta

Using the users’ feedback, we modified the chatbot to improve the flirting experience. The key areas of improvement were personalization, tone, and length.

To improve personalization we added functions for the chatbot to use the user’s name more freely in responses. We expected that using the user’s name would make them feel validated and heard when conversing with the chatbot. Furthermore, we added components for the chatbot to reflect on the user’s response before asking the next question.

To improve tone, we added elements such as GIFs and memes, creating responses that were more fun and less lengthy.

To improve the overall length of the experience, we reshuffled the question order so that users engaged with deeper questions sooner in the flow. This allowed us to reduce the time the user spent in the casual conversation phase of the experience.
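To make the personalization change concrete, here is a small sketch, again hypothetical rather than our actual FlowXO configuration, of how a bot might address users by name and reflect on their last answer before asking the next question:

import random

# Illustrative reflection templates; the real responses in FlowXO differed.
REFLECTIONS = [
    '{name}, I love that you said "{answer}"',
    "That's really interesting, {name}. I'll remember that.",
    'Honestly, {name}, "{answer}" might be the best answer I have heard today.',
]


def reflect(name, answer):
    """Acknowledge the user's previous answer, addressing them by name."""
    return random.choice(REFLECTIONS).format(name=name, answer=answer)


def next_turn(name, last_answer, next_question):
    """Reflect on the last answer, then ask the next question."""
    return f"{reflect(name, last_answer)}\n{next_question}"


print(next_turn("Sam", "hiking at sunrise", "So, what would your perfect day look like?"))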

User Testing Stage 2

We followed the same procedure as the first user testing experiment with a cohort of three new users to determine how the users felt about the chatbot experience.

User Feedback:

Overall, the users felt that this version of the chatbot was very personal. They especially enjoyed how the bot made them feel validated and comfortable.

One user described it this way:

“I like how the bot laughed at my jokes and responded with jokes. It made me feel like we were really connecting.”

Furthermore, the negative feedback we received in this stage of user testing pointed to the efficacy of the bot in establishing closeness. The users complained that there was not enough follow-up or connection between the questions the chatbot asked; their responses indicated that they wanted more time and connection with our chatbot. Although this revealed areas for improvement in the structure of the conversation, it also showed that the bot was effectively performing its role of helping users identify what they like in text-based interpersonal connections.

This feedback helped to validate many of the improvements we had made. It allowed us to confidently demo this version of the chatbot during the demo day in class.

THE DEMO

Finally, we showcased this chatbot during the demo day presentation in our CS 485 class. To demo the chatbot, we deployed both a web-based version and a Facebook Messenger version.

We also created a demo video to describe the steps we took in developing this chatbot.

CONCLUSION

It can be difficult to find genuine connection in this fast-paced and technology-driven world. To solve this problem, many companies have developed online dating platforms that attempt to connect a wide range of individuals. However, within these platforms lies a tension that separates individuals through the awkwardness and insecurity of text-based conversation.

With this design sprint, my team attempted to address this tension by creating a chatbot that promoted interpersonal connection. We leveraged tools from psychology, media, and pop culture to create a conversational agent that flirted with a user.

By leveraging these tools and conducting robust user testing, we were able to learn more about the desire for connection that everyone shares. Although a chatbot can never substitute for genuine human connection, it can provide a platform that helps individuals reflect on their emotional availability.

Technologies like these that are deeply informed by the human experience help us to seriously reflect on the impact that Human-Computer Interaction can have on society.

Acknowledgments

I would like to thank my team members Chloe Luo, Matt Zhang, and Carrie Gu for their contributions to this project.
