Design for Tension
Building a controversial chatbot
For this design sprint, we were tasked with creating a chatbot that “tackles a tense topic,” as the instructions put it. At first, my group wanted to design our bot to talk to flat-earthers about why the Earth isn’t flat.
After talking to the professor, we concluded that this topic wasn’t serious enough for the assignment. At a STEM school, flat-earthers would probably be hard to find anyway. We switched our topic to anti-vaxxers. It might also be hard to find people opposed to vaccines at a science-driven school, but that’s more realistic than finding flat-earthers.
One of the things we decided was most important for our bot was to make it as human-like as possible, so we included emojis in its responses. Another thing we had to consider was whether to make the interactions multiple choice. We figured that free-typed responses could get messy. For example, we could program the bot to catch key words like yes, no, nah, yeah, or maybe (in any capitalization). But if the bot asked, “Do you believe vaccines are bad?”, the user could respond, “Yes, because no child should have a virus injected into them.” That reply contains both “yes” and “no”: two opposing key words that lead down different paths. If more than one path is triggered, it could lead to undefined behavior. With Flow XO, the bot might only act on the first key word detected. We didn’t test that, and we decided not to take the chance.
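The key-word clash described above is easy to demonstrate. Here’s a minimal Python sketch of naive key-word matching (the word lists and function are hypothetical illustrations, not Flow XO’s actual matching logic):

```python
import re

# Hypothetical key-word sets for each conversation path
# (for illustration only; not how Flow XO matches input).
AGREE = {"yes", "yeah", "yep", "sure"}
DISAGREE = {"no", "nah", "nope"}

def detect_paths(message):
    """Return every conversation path whose key words appear in the message."""
    words = re.findall(r"[a-z']+", message.lower())
    paths = set()
    if any(w in AGREE for w in words):
        paths.add("agree")
    if any(w in DISAGREE for w in words):
        paths.add("disagree")
    return paths

# A single reply can trigger both paths at once, which is exactly the
# ambiguity that pushed us toward multiple-choice interactions:
reply = "Yes, because no child should have a virus injected into them"
print(detect_paths(reply))  # both "agree" and "disagree" are triggered
```

With multiple-choice buttons, the user can only ever produce one key word, so this ambiguity never arises.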
During the design process, we decided to have our bot take the stance of someone that agrees with vaccination, and it argues against anti-vaxxers. While drafting the script, we decided it would make the most sense to have three separate conversation paths: one for if the user agrees with the bot, one for if the user completely disagrees, and one for if the user initially disagrees, but is persuaded in the end.
We figured this was the best way to map out our dialogue, because it wouldn’t make sense for all three responses to lead to the same place. After tweaking minor things, we had a finalized script ready for demo day. For the production of the bot, I was responsible for developing the script.
After the technical part of constructing the bot, it was finally ready for test day. Most of the user feedback we received was pretty positive: “very human-like”, “very comfortable to talk to”, “likes the devil’s advocate part”, etc.
Some of the feedback wasn’t positive, though. For example, many users said the bot’s statements were too “rapid fire,” leaving little time to read one before the next was displayed. Each new statement pushes the current one up the screen, so when the bot speaks quickly, messages scroll away before they can be read.
Interestingly, four of the five users who tested our bot went down identical paths: they all agreed with the bot, and when it asked if they wanted to know more, they all said yes. Either these users were wary of offending the bot, or the bot was very persuasive.
Overall, I believe our bot was persuasive and well planned out. A lot of thought went into how the bot would act: how human-like to make it, where and when to use emojis, and so on. For the most part, the bot was very well received. I think this design sprint was an excellent experience in user testing. The past assignments dealt with presenting data in some way, like redesigning a website or making graphs. This assignment was different because we weren’t really displaying data, but rather trying to get people to consider views they might not agree with. While I didn’t get the opportunity to test the other bots, I’m sure they all raised interesting questions that were fun to challenge. A demo video can be found here.