Meet Alice, the chatbot that helps you talk about death

Authors: Uttam Kumaran, Sierra Magnotta, Mitch Petrimoulx, Anushikha Sharma

Anushikha Sharma
Bucknell HCI
9 min read · Oct 15, 2017

--

Introduction

This design sprint was titled ‘Design For Tension’, and our team decided to challenge the concept of ‘tension’ by thinking of uncommon ways in which it might manifest. One idea that particularly stood out was a chatbot that helps people who are suddenly faced with their mortality come to terms with the fact that they might be dying. We did not target a specific timeline or age group; instead, we created a general solution in response to the following question:

How might we build a Facebook bot to assist those facing death by giving them an opportunity to talk through some of their issues?

We created a Facebook page titled ‘Making Peace With Death’ and, using Flow XO, linked its chat interface with our bot.

The scope of building this chatbot was very broad, so we worked from a few assumptions:

  • The user interacting with the bot is the one facing sudden death
  • They have at least briefly heard about the purpose of this chatbot before coming to use the platform

Our chatbot has a persona named ‘Alice’ and can field three types of conversations: regrets in life, the afterlife, and family concerns.

Figure 1: Introductory conversation with Alice
Figure 2: Alice sympathizes and offers to talk more with the user
Figure 3: Alice offers the possible conversation trajectories that she can help with

We knew that this was a sensitive topic and would be challenging, especially because we could not test our chatbot with users who fit the project’s target audience. However, we believe that through extensive testing with the people around us, and a commitment to a comprehensive but conversational approach, we have managed to build something that meets our goals for the project.

Demo video showing some potential interactions with the chatbot

Brainstorming

We began with brainstorming and research, and found three major concerns that many terminally ill people have: regrets in their life, their family, and the afterlife. We continued to research online, finding the most positively received answers to these concerns in forums and articles on the topic, and used them as goals for conversations with our bot. We discovered early on that it is quite awkward to jump from “Hi, how are you today?” straight to “Do you have any regrets about your life?”, so we had to play around with making our bot’s conversation feel more realistic. We did this by having one of us pretend to be the bot while talking to another team member, analyzing what felt most natural to us.

Figure 4: Brainstorming the various flows in our conversation

We also looked into having another set of conversations based around the more practical aspects of death. For example, the bot could help a person write a will and make sure power of attorney was correctly laid out. We initially pursued this idea but concluded that it would drastically increase the scope of our project (something that Yogesh Moorjani warns about in his article “Designing chatbots”) and scrapped it during the brainstorming phase.

User Testing

After brainstorming conversations and working out a general flow for our chatbot, we began asking classmates to talk to our bot and provide feedback, and a few key criticisms emerged. Two of our testers, Allan and Jack, wanted the bot’s language to be more natural and conversational, and wanted us to trim the number of words per response from the bot.

Figure 5: In our early user testing, a member of our group posed as the bot and manually sent responses, following the conversation flow draft we had created

Another tester, Stephen, noted that the bot should respond only after the user is done typing, since the user might spread their point across multiple messages. In general, we discovered that our chatbot tended to provide a lot of information that the user didn’t necessarily want, so we went through and added more questions to make the chatbot’s responses more natural and personalized to the specific conversation. Users overall noted that our bot was very wordy, so we cut down the bot’s answers to seem more natural and less ‘preachy.’

Figure 6: User feedback showed that our bot tended to give more information than the user wanted
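Flow XO handled message delivery for us, but Stephen’s suggestion is essentially a debounce: reset a short timer on each incoming message and reply only once the user has paused. Below is a minimal sketch of the idea, assuming a hand-rolled bot rather than our actual Flow XO setup (every name in it is hypothetical):

```python
# Hypothetical sketch: debounce multi-message input so the bot replies only
# after the user seems to be done typing. Flow XO handled messaging for us,
# so none of these names come from our actual bot.
import threading

DEBOUNCE_SECONDS = 4.0  # assumed pause that means "I'm done typing"

class DebouncedConversation:
    def __init__(self, respond):
        self.respond = respond  # callback that sends the bot's reply
        self.pending = []       # messages received so far in this burst
        self.timer = None

    def on_message(self, text):
        self.pending.append(text)
        if self.timer:          # user is still typing: restart the clock
            self.timer.cancel()
        self.timer = threading.Timer(DEBOUNCE_SECONDS, self.flush)
        self.timer.start()

    def flush(self):
        full_text = " ".join(self.pending)  # treat the burst as one utterance
        self.pending = []
        self.respond(full_text)

# Each incoming Messenger event would call on_message().
convo = DebouncedConversation(respond=lambda t: print("Bot replying to:", t))
convo.on_message("I guess my biggest regret")
convo.on_message("is not spending more time with my family")
```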

Our user feedback also showed that our bot did not give the user enough opportunity to really think about their situation and come to terms with it, so we added questions that would allow the user to speak freely and reflect deeply.

Figure 7: Allowing the user to share some personal experiences to make the conversation feel more real

The screenshot above shows how we acted on our feedback and ensured that our bot let the user speak about their experiences instead of simply giving advice. We wanted the user to build a relationship with, and trust in, the bot, which is key to navigating a topic such as death.

Figure 8: The bot refers the user to some resources containing more in-depth information

We chose to ignore some of the advice Adrian Zumbrunnen gives in his article “Technical and social challenges of conversational design,” which suggests that isolated messages don’t feel human and that a bot shouldn’t repeat topics. We determined that a person’s comfort with the bot will increase over time, and that they may want to discuss a specific topic in more depth. By allowing a person to return to the same conversation, we let them expand on their ideas further.

Our Process

Flow XO was our tool of choice, and it was challenging to work with. We had initially tested a simple bot using the tool, but our actual chatbot was much more complicated and needed many different flows. Flow XO is a platform for sales and marketing bots and is built to require no technical knowledge; even so, it is not very intuitive as an application. As we started to build each task and trigger, we realized that naming our tasks carefully on the building platform was going to be extremely important. This came in handy when we had to keep scrolling up and down to duplicate certain tasks with different filters for a new flow.

A positive feature of Flow XO is its built-in delay when responding to the user, which helps make the conversation feel more natural and gives the user a moment to think about the response they just sent. This follows the advice of Adrian Zumbrunnen, who notes that a delayed response still feels “instantaneous” as long as it arrives within about 10 seconds, while avoiding flooding the user with information the moment they finish typing.
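Flow XO gave us this delay for free, but the effect is easy to approximate by hand. A small sketch, assuming a custom bot (the pacing numbers are our own guesses, not Flow XO’s):

```python
# Hypothetical sketch of a typing-style delay like Flow XO's built-in one:
# pause a little longer for longer messages, but stay well under the
# ten-second ceiling so the reply still feels instantaneous.
import time

def send_with_delay(send, text, seconds_per_word=0.3, max_delay=5.0):
    delay = min(len(text.split()) * seconds_per_word, max_delay)
    time.sleep(delay)  # a real bot would also show a typing indicator here
    send(text)

send_with_delay(print, "That sounds really hard. Would you like to tell me more?")
```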

Ideally, we would have built separate flows that could be inserted as needed inside the main flow but could also run individually as their own flows. If Flow XO supports this, we could not figure it out in the time we spent with the platform. Instead, we used labels to branch into the different parts of the conversation, which made our tasks easier to organize and navigate.

Figure 9: Using different labels to branch into ‘AfterLife’, ‘Regret’, or ‘Family’
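In code terms, our labels behave like a dispatch table: each label names one branch of the conversation, and choosing a topic jumps to the matching handler. A rough sketch of that structure (the handler names and question wording are invented, not taken from our flows):

```python
# Hypothetical sketch of what our labels amount to in code: each label maps
# to a handler for one branch of the conversation.
def talk_about_regret(ask):
    ask("What do you regret most when you look back on your life?")

def talk_about_afterlife(ask):
    ask("What do you believe happens after we die?")

def talk_about_family(ask):
    ask("Who in your family are you most worried about?")

TOPIC_HANDLERS = {
    "Regret": talk_about_regret,
    "AfterLife": talk_about_afterlife,
    "Family": talk_about_family,
}

def branch_to(label, ask):
    TOPIC_HANDLERS[label](ask)  # the equivalent of jumping to a label in Flow XO

branch_to("Regret", print)
```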

Our final result was a chatbot that is triggered when someone uses phrases like ‘hi’, ‘hello’, or ‘I want to’. It then introduces itself and asks for the user’s name, age, and reason for using the chatbot.

Figure 10: Alice asks the user their name, age and their reasons for interacting with the bot

Assuming that the person is chatting because they are faced with their own mortality, the bot offers condolences and asks if they would like to chat more. If the user says no, the bot reassures them that it’s alright not to want to talk right now and that it will be there in case they decide to come back. If the user says yes, the bot tells them that at any point they can leave the conversation by typing ‘quit’ or ‘end’ if they’re feeling uncomfortable or overwhelmed. It then leads them to the three possible modes of conversation.

Figure 11: If the user says they don’t want to talk, Alice reminds them that she’ll be here for the future
Figure 12: If the user says they do want to talk, Alice guides them to the possible topics of conversation

When the user picks an option, the associated questions are triggered by a label specific to that type of conversation. At the end of every full round of conversation, the bot offers to talk through the remaining topics with the user.

Figure 13: Once the user is finished talking about regrets, Alice offers the remaining two possible topics
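Putting the pieces together, Alice’s overall control flow is roughly the loop sketched below: respond to a trigger phrase, collect the user’s name, age, and reason, honor ‘quit’ or ‘end’ at any point, and keep re-offering whichever topics remain. This is a hypothetical reconstruction; the prompts are paraphrased and none of it is our actual Flow XO configuration.

```python
# Hypothetical reconstruction of Alice's overall control flow; prompts are
# paraphrased and this is not our actual Flow XO configuration.
TRIGGERS = {"hi", "hello", "i want to"}
EXITS = {"quit", "end"}

def ask(prompt):
    reply = input(prompt + "\n> ").strip().lower()
    if reply in EXITS:  # the user can leave the conversation at any point
        print("Alice: That's okay. I'll be here if you want to talk again.")
        raise SystemExit
    return reply

def run_session(first_message):
    if first_message.strip().lower() not in TRIGGERS:
        return  # only trigger phrases start a conversation
    print("Alice: Hi, I'm Alice. I'm here to talk.")
    ask("Alice: What's your name, how old are you, and what brings you here?")
    if ask("Alice: I'm so sorry. Would you like to talk more? (yes/no)") != "yes":
        print("Alice: That's alright. I'll be here in case you come back.")
        return
    print("Alice: You can type 'quit' or 'end' at any time to stop.")
    remaining = {"regrets", "afterlife", "family"}
    while remaining:  # after each round, re-offer whichever topics are left
        topic = ask("Alice: We could talk about: " + ", ".join(sorted(remaining)))
        if topic in remaining:
            ask(f"Alice: Tell me about your thoughts on {topic}.")
            remaining.discard(topic)

run_session("hi")
```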

Strengths and Weaknesses of Our Design

Our prototype had a few key strengths that we were proud of. First, we gave ‘Alice’ some personality so the user would feel more comfortable conversing with our chatbot. Second, we wanted the user to play an equal role in the conversation, which meant asking open-ended questions and looking for key responses from the user. Lastly, we offered three distinct topics of discussion and made them clear early in the conversation. This followed the guidelines from Yogesh Moorjani’s article mentioned above: we defined clear user intents to put some constraints on the scope of our project. These strengths were confirmed by our user feedback. As for weaknesses, the bot did not incorporate the user’s replies into its answers, and three topics of conversation are too few.

Future Work

For future iterations of the chatbot, we could add more kinds of support to help the user face death; topics such as children, finances, and friendships could be incredibly important. It would also be great if the bot could parse the user’s messages for keywords to better tailor its responses and provide a unique experience, which would be an enormous improvement for a topic of this magnitude. Finally, because the bot is built on the Facebook platform, we could potentially pull some data from the user’s account automatically to create a more tailored experience right off the bat.
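As a first step toward that keyword parsing, even a simple scan of the user’s reply could steer the follow-up question. A sketch of what that might look like (the keyword map is invented for illustration):

```python
# Hypothetical sketch of keyword-based tailoring: scan the user's reply for
# topic words and pick a more specific follow-up. The keyword map is invented.
KEYWORD_FOLLOWUPS = {
    "children": "Tell me about your children. What do you want them to know?",
    "money": "Are there financial arrangements you are worried about?",
    "friend": "It sounds like your friendships matter a lot. Tell me more?",
}

def tailored_followup(user_reply, default="Can you tell me more about that?"):
    text = user_reply.lower()
    for keyword, followup in KEYWORD_FOLLOWUPS.items():
        if keyword in text:
            return followup
    return default

print(tailored_followup("I keep thinking about my children"))
```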

Conclusion

Through this project, we were able to create a functional Facebook chatbot that talks users through aspects of their mortality. As this is such a large issue to tackle, our chatbot focused on only three concerns: regrets, family, and the afterlife. Through user testing, we were able to improve our chatbot’s design to allow the user to be more reflective, and we improved the flow of the conversations so that talking to our chatbot felt more natural. Without user testing, we would not have noticed these issues in our design; we needed people outside the design team to actually interact with the bot, find errors in the flow, and give feedback on the interactions. Overall, our bot shows strength in its conversational abilities and in allowing the user to reflect on these difficult topics. While our bot is not able to tackle every aspect of mortality, we are happy with the progress we have made in creating a usable chatbot that can work through some of these issues with its users.
