Designing for Tension

Matt Puentes
Nov 29, 2018 · 4 min read

Introduction

This project focused on developing a chatbot that can interact with a user live through mediums such as Messenger or Slack. The chatbot was intended to tackle a tense topic, one that could elicit emotion from the person talking with it. Our group decided to tackle the anti-vaccine movement and build a bot meant to help convince people to be pro-vaccine. We chose this topic for its balance between tension and inoffensiveness, and we chose the pro-vaccine side because it reflected the opinions of all our group members, which meant we could work together on the script.

Sketches and Development

Initial Whiteboard Sketches

Our sketches mostly revolved around flowcharting the dialogue between the user and the chatbot. We started with whiteboards for rapid initial prototyping and then moved into LucidChart to create the final “flow” of the project. Holding off on the actual implementation until later was useful, since it meant we could make changes to the flow very quickly.

We decided that one of the key areas to focus on was making the chatbot as polite and kind as possible. Research showed that a good amount of the anti-vaccine movement was rooted in a lack of education, so our chatbot had to be polite and informative if it was going to tackle this subject well. We accomplished this first through polite dialogue: the chatbot tries to help the user feel comfortable by using a kind tone and understanding messages. An accusatory message could make the user stop talking to the bot, which would drastically undermine its effectiveness.

The second way we kept our bot polite was by giving the user the ability to disagree at many different points in the conversation tree. Assuming that a user agrees with a message and prompting them with only an “I agree” dialogue choice instantly alienates them from the conversation and gives the impression that we are railroading the dialogue to force them to agree. We were trying to genuinely discuss the topic with the user, so avoiding that was paramount. We even had a dialogue line for when a user disagrees too strongly: rather than force the point, we decided to leave them with an article on the subject and wish them well. That way, if they do decide to come back to the bot, they will do so with an open mind. Lastly, we tried to use “human” chat characteristics (such as emojis) to make the experience more pleasant.
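The shape of those choices is easier to see in code than in prose. The sketch below is purely illustrative (our actual bot was built in a visual flow tool, not Python), and the node names and messages are hypothetical, but it shows the idea: every prompt includes a way to disagree, and strong disagreement routes to a graceful exit with an article rather than another argument.

```python
# Illustrative sketch only; node names and messages are made up.
# The design constraint: every prompt offers a disagree option, and
# strong disagreement exits politely instead of pressing the point.

DIALOGUE = {
    "intro": {
        "message": "Hi! 😊 Do you have a minute to chat about vaccines?",
        "options": {
            "Sure, go ahead": "share_info",
            "I'm skeptical about them": "acknowledge_doubt",
            "I'd rather not": "polite_exit",
        },
    },
    "acknowledge_doubt": {
        "message": "That's completely fair, lots of people have questions. "
                   "Mind if I share one thing that surprised us?",
        "options": {
            "Okay, I'm listening": "share_info",
            "No thanks": "polite_exit",
        },
    },
    "share_info": {
        "message": "Vaccines protect more than just the person vaccinated; "
                   "herd immunity shields people who can't get the shot. "
                   "Does that make sense?",
        "options": {
            "That makes sense": "wrap_up",
            "I still disagree": "polite_exit",
        },
    },
    "polite_exit": {
        "message": "No problem at all. Here's an article in case you're "
                   "ever curious. Take care! 👋",
        "options": {},
    },
    "wrap_up": {
        "message": "Thanks for chatting! Feel free to come back anytime. 😊",
        "options": {},
    },
}

def run(node_id="intro"):
    """Walk the tree from the console; nodes with no options end the chat."""
    while True:
        node = DIALOGUE[node_id]
        print(node["message"])
        if not node["options"]:
            return
        choices = list(node["options"])
        for i, choice in enumerate(choices, 1):
            print(f"  {i}. {choice}")
        node_id = node["options"][choices[int(input("> ")) - 1]]
```

Even this toy version makes the constraint visible at a glance: no node offers the user only an “I agree” button.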

Version 1

When making the first version of the chatbot, we decided to go with Flow XO. Unfortunately, due to limited collaboration tools, only one person was able to work on the flow; being familiar with the interface, I volunteered to build the bot myself. We built the bot based on the flowcharts we made, with one main flow and three that branch off. These three trigger for users who agree with the points made by the bot, users who disagree with the bot, and users who switch from disagreeing to agreeing. Abstracting the flows like this made it much easier to revise the script later on.
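In Flow XO those branches are separate flows fired by triggers, but the routing logic behind them is simple. The sketch below is a hypothetical Python rendering of it, not anything from our actual build: after each main-flow question, the user’s stance decides which of the three sub-flows should fire next.

```python
# Hypothetical sketch of the routing idea (the real bot used Flow XO
# triggers, not code). One main flow hands off to one of three sub-flows:
# agree, disagree, or "switched" (disagreed earlier, agrees now).

from enum import Enum
from typing import Optional

class Stance(Enum):
    AGREES = "agrees"
    DISAGREES = "disagrees"
    SWITCHED = "switched"

def next_stance(previous: Optional[Stance], answered_agreeably: bool) -> Stance:
    """Decide which branch flow to trigger after a main-flow question."""
    if answered_agreeably:
        # Someone who disagreed earlier but agrees now gets the "switched" flow.
        return Stance.SWITCHED if previous is Stance.DISAGREES else Stance.AGREES
    return Stance.DISAGREES

# Which sub-flow each stance triggers (names invented for illustration).
BRANCH_FLOWS = {
    Stance.AGREES: "reinforce_and_share_resources",
    Stance.DISAGREES: "acknowledge_and_offer_evidence",
    Stance.SWITCHED: "welcome_back_and_answer_questions",
}
```

Keeping the branches separate like this mirrors why the abstraction paid off: each flow could be revised on its own without touching the others.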

User Testing

A handful of our user testers

For our user testing, we tried to focus on how human the testers felt the bot was and whether or not the users felt like they were given enough options throughout the conversation.

The users seemed to like the bot a lot; most of our feedback was positive, praising how “human” we had made the conversation feel. Our main criticisms were that some of the longer messages from the bot could be broken up, that more user interaction could be added to some flows, and that there were a few minor grammatical errors.

The users felt that they had a good amount of options throughout the bot, but it is worth mentioning that every user but one initially went along the same path. This is probably because all of our testers were from a similar demographic, and the results would need more testing to confirm.

Final Version

Our final version expanded on the previous one by fixing the problems pointed out to us. Several longer sections of the bot’s dialogue were unpopular with users, so we fixed them by splitting the messages into smaller, human-typeable chunks and adding prompts for the user to continue at certain points (messages like “ok” or “sure”), giving the conversation a more interactive feel.
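As a rough illustration of that fix (the actual splitting was done by hand in Flow XO, not programmatically), breaking a long message into short chunks and pausing for a quick acknowledgement between them might look something like this:

```python
import re

def split_into_chunks(long_message: str, max_sentences: int = 2) -> list[str]:
    """Break a long bot message into short, human-typeable chunks of a
    couple of sentences each."""
    sentences = re.split(r"(?<=[.!?])\s+", long_message.strip())
    return [" ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def send_in_chunks(long_message: str, send, wait_for_ack):
    """Send each chunk, then wait for a quick "ok"/"sure"-style reply
    before continuing, so the user stays part of the conversation."""
    chunks = split_into_chunks(long_message)
    for i, chunk in enumerate(chunks):
        send(chunk)
        if i < len(chunks) - 1:
            send("Make sense so far?")
            wait_for_ack()  # any short reply ("ok", "sure") moves things along
```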

A demonstration of this final version can be found here.

Conclusions

Tackling tense topics can be a difficult task: strong negative reactions from your users are a risk, especially when trying to take a stance on one side or another. The best way to curb this is by making your conversations polite, human, and friendly. With that groundwork in place, users will be much more willing to discuss topics that are otherwise taboo. And who knows, you might even be able to change someone’s mind.
