FootBot: Creating Tension

Jenna Slusar
8 min read · Oct 16, 2017


The media tends to skew people’s political viewpoints one way or the other because both sides of the story are not always told. Our team was tasked with finding an issue that creates this two-sided tension and then building a chatbot that alleviates a portion of it. We chose to do so by demonstrating the other side of the story (shown in the video below).

The chatbot, named FootBot after football, challenges college students’ opinions on NFL players protesting during the national anthem. The bot isn’t meant to push one side, but to let users talk through why they hold the opinions they do, and how those opinions might relate to another scenario we selected.

Getting Started

A member of the team did some research on the different ways to create a chatbot. Then, he helped the team decide which to use. The team decided on FlowXO because it:

  • Has the most flexibility, so we can target multiple platforms
  • Is easy to use, thanks to its modern UI
  • Doesn’t require much coding, so the learning curve is relatively low

To make user testing feasible, the team chose college students as the bot’s target user group on the FlowXO platform. With college students as the demographic, we could use more casual language and more direct speech. We also felt we could easily create tension within this user group because we, as college students, understand what makes them tick. There are many possible ways to build tension with college students, including:

  • No matter what they say, oppose it (students don’t like being wrong)
  • Understand what side they take, then gently expose them to the other side
  • Ask for the user’s viewpoint at the beginning, then expose them to the other side and see if that viewpoint changes by the end

After considering the different ways the bot could create tension, we found that they bring possible problems as well:

  • The bot interprets the original viewpoint in the wrong direction (it doesn’t get it)
  • People get too angry with the bot, so they don’t answer at all
  • People mess with the bot (intentionally make it confused)
  • People might not have one specific viewpoint to oppose
  • They just don’t care at all, so they again don’t answer

Controversy behind our idea for tension.

From these aspects and problems, the team had one main idea we thought was the most controversial, and thus could bring the most tension to our chatbot (hopefully encouraging people to answer and avoiding our problems): build a bot that challenges a college student’s viewpoint on protesting during the national anthem in the NFL. Choosing this topic was not hard; once it was brought up, it was actually difficult to think of any others. That meant the team felt very strongly about the topic, just as our user group of college students would.

One idea we came up with to create tension was to start with a completely different but ideologically similar scenario that would disarm the user. This was hard to come up with: protesting the national anthem in support of the Black Lives Matter movement is not an easy situation to build a delicate metaphor for.

Eventually, we decided that not standing during the pledge of allegiance in protest of the state of the country was simple and vague enough to work. To create extra tension, we thought about how we could bait the user into giving an opinion on this hypothetical scenario and then see if they feel the same about protesting in the NFL.

User Feedback

Before the team could get any user input, we had to finalize how we wanted the flow of the conversation to go. To do this, we created a flowchart, an approach recommended by UX designer Yogesh Moorjani, who believes that “creating flows helps you articulate and critique the interaction early on.”

Final flowchart: as you can see, there is a lot happening to test

The team tested this flow on other college students, who are our target audience. This was done with a pseudo-prototype: someone on the team pretended to be the bot and responded using the answers on the flowchart. The problems we ran into were that users:

  • Got annoyed with the introduction because they wanted the bot to get to the point and they didn’t feel that the bot had a personality
  • Who aren’t from the US didn’t know what the pledge of allegiance is
  • Didn’t answer the open-ended questions with a yes or a no because they didn’t know which side they stood on (they were in the middle)
  • Gave contradictory viewpoints for the two scenarios (the team assumed the answer for the pledge scenario would match the answer for the national anthem scenario)
  • Caused tension because they didn’t like the comparison between the pledge and the national anthem scenarios
  • Thought the redundant questions we used to make sure the bot understood their viewpoint were annoying

Based on this feedback, we changed the flow for our final product. Adrian Zumbrunnen, who works in conversational design at Google, stated that “isolated messages don’t feel human.” Thus, to give the bot more personality, the team adjusted the yes/no answers into more conversational responses.

Right: flowchart showing a lot of yes/no answers in red. Left: adjusting the response to a full-sentence conversation

According to Zumbrunnen, another way to bring personality to a bot is to add delightful details that spark the user’s interest. Thus, the team added emojis to the conversation to bring in a new, upbeat, and interesting element, since emojis can be interpreted in different ways.

Adding emojis that make you think, to give the conversation a more human-like flow

Now that our bot had a personality, the team had to make sure tension was created even when the user didn’t take a side on either the pledge or the anthem controversy. This was done using a method that James Giangola of Google Design calls leveraging context. He defines it this way: “A good conversational participant keeps track of the dialog, has a memory of previous turns and of previous interactions.” Thus, to leverage context, the bot remembers the user’s viewpoint from the beginning and points out at the end whether or not it has changed.

This was a problem during user testing because users were contradicting their own views; now the bot will leverage that contradiction and use it in the conversation.
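FlowXO keeps track of this through answers stored in the flow rather than through code, but the idea reduces to something like the Python sketch below. The question wording, variable names, and comparison logic here are our own illustration, not the actual FootBot script.

```python
# Illustrative sketch only: FlowXO stores user answers inside the flow,
# but "leveraging context" boils down to remembering the first viewpoint
# and comparing it with the last one.

def ask(question):
    """Print the bot's question and return the user's reply, normalized."""
    return input(question + " ").strip().lower()

# Remember the viewpoint given at the beginning of the conversation.
initial_view = ask("Would you sit during the pledge to protest the state of the country?")

# ... the rest of the conversation happens here ...

# At the end, ask about the anthem scenario and point out any change.
final_view = ask("So, is it okay for NFL players to kneel during the anthem?")

if initial_view != final_view:
    print("Interesting - that's not what you told me about the pledge earlier.")
else:
    print("At least you're consistent: you said the same thing about the pledge.")
```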

Our Bot

After screening the options, as mentioned above, we decided to use FlowXO to create our chatbot. The conversation starts with a trigger, typically a hello, that gets the bot’s attention.

These phrases trigger the bot, initiating the conversation

From there, the bot asks questions and the user responds as appropriate. The order comes down to the following (a rough sketch of this loop follows the list):

  1. Trigger bot
  2. Bot asks question based on input
  3. User gives input
  4. Repeat from step 2 until there are no more statements to reach
  5. End with a goodbye message
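FlowXO wires these steps together visually, so the Python below is only a minimal sketch of the same order of operations. The trigger phrases, questions, and branch targets are placeholders we made up for illustration, not the real FootBot script.

```python
# Minimal sketch of the trigger -> question -> input loop described above.

TRIGGERS = {"hi", "hello", "hey"}

# Each entry: (question, {answer: index of the next question}); None ends the chat.
SCRIPT = [
    ("What is your name?", {"*": 1}),
    ("Do you stand for the pledge of allegiance?", {"yes": 2, "no": 2, "*": 2}),
    ("How does that make you feel?", {"*": None}),
]

def run_bot():
    # 1. Wait for a trigger phrase to get the bot's attention.
    while input("You: ").strip().lower() not in TRIGGERS:
        pass

    step = 0
    # 2-4. Ask a question, take the user's input, repeat until nothing is left.
    while step is not None:
        question, branches = SCRIPT[step]
        answer = input(f"FootBot: {question}\nYou: ").strip().lower()
        step = branches.get(answer, branches["*"])

    # 5. End with a goodbye message.
    print("FootBot: Thanks for chatting. Goodbye!")

if __name__ == "__main__":
    run_bot()
```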

Once triggered, it’s time for the bot to ask a question. There are two types of questions it can ask, which can be summarized as open-ended and choice. Open-ended questions let the user write about their feelings or about themselves. The first question FootBot asks is “What is your name?” In the current version of the bot, there is no name validation, so any answer will continue the conversation.

There were plans to check whether the name was on the class list, but time constraints forced a smaller scope. The other open-ended questions mostly ask “How does that make you feel?”, again without validation. Because of this, the bot responds the same way no matter what the user types. With more time, it would have been more thorough to parse the answers and derive a more accurate, realistic response.
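Had we gotten to it, the name check could have been as simple as comparing the input against a roster. The class list and re-prompt below are purely hypothetical; in FlowXO this would be done with a filter on the question rather than with Python.

```python
# Hypothetical sketch of the name check we never shipped.
# CLASS_LIST is made up for illustration.

CLASS_LIST = {"alex", "jordan", "sam", "taylor"}

def ask_name():
    """Keep asking until the name appears on the (hypothetical) class list."""
    while True:
        name = input("FootBot: What is your name?\nYou: ").strip().lower()
        if name in CLASS_LIST:
            return name
        print("FootBot: Hmm, I don't see you on my class list. Try again?")
```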

The other type of question is a choice question, mostly between yes, no and I don’t know.

A choice question. Each one has a choice and value, so the two answers are “Yeah, I think I do…” with value yes, and “I have no idea.” with value no.

This is how we branch the conversation: by selecting a choice, the user enters a suitable path of dialog. Most of the questions were designed around yes/no answers for simplicity and scope. Branching down a suitable pathway is achieved through labels and go-tos.

A label filter. This says go to a specific label if the answer to the question was yes.

A branch of a conversation will start with a label. Most choice questions will be followed by a go-to. Each go-to contains a conditional statement that will decide where the conversation should go next.

A choice question followed by two go-to’s (one per choice) and a label to start a branch
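FlowXO expresses all of this with labels, go-tos, and filters in its visual editor, so the snippet below is only an analogy in Python: each label becomes a keyed entry, and each go-to becomes a lookup on the chosen value. The questions themselves are placeholders, not our exact script.

```python
# Rough analogy to FlowXO's labels and go-tos (not FlowXO code).
# Each "label" is a key; each "go-to" picks the next label from the chosen value.

FLOW = {
    "pledge_question": {
        "text": "Do you think sitting during the pledge is acceptable?",
        "choices": {"yes": "pledge_yes", "no": "pledge_no"},
    },
    "pledge_yes": {
        "text": "Interesting. So protest itself doesn't bother you?",
        "choices": {"yes": "end", "no": "end"},
    },
    "pledge_no": {
        "text": "So you feel the pledge should always be honored?",
        "choices": {"yes": "end", "no": "end"},
    },
    "end": {"text": "Thanks for sharing your view!", "choices": None},
}

def run(label="pledge_question"):
    while True:
        step = FLOW[label]
        print("FootBot:", step["text"])
        if step["choices"] is None:                   # no go-to: conversation over
            break
        answer = input("You (yes/no): ").strip().lower()
        label = step["choices"].get(answer, "end")    # the "go-to" filter

run()
```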

Through all these tools, we were able to turn our script into a full conversation. Given more time, we would have liked to parse the text strings the user inputs, i.e. make all responses open-ended but still branch pathways. We also would have wanted to explore other chatbot platforms; FlowXO was slow to work with, and we heard great things about other programs.
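A first pass at that parsing would not need to be sophisticated. Something like the keyword matching below (our own sketch, not anything FlowXO provides out of the box) could map a free-text answer onto the existing yes/no branches; the keyword lists are illustrative, not tuned.

```python
import re

# Naive sketch of mapping open-ended text onto the existing yes/no branches.

YES_WORDS = {"yes", "yeah", "sure", "definitely", "agree", "support"}
NO_WORDS = {"no", "nope", "never", "disagree", "wrong"}

def classify(answer: str) -> str:
    """Return 'yes', 'no', or 'unsure' for a free-text answer."""
    words = set(re.findall(r"[a-z']+", answer.lower()))
    if words & YES_WORDS and not words & NO_WORDS:
        return "yes"
    if words & NO_WORDS and not words & YES_WORDS:
        return "no"
    return "unsure"

print(classify("Yeah, I definitely think they should protest"))  # -> yes
print(classify("No, it disrespects the flag"))                   # -> no
```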

Final Thoughts

The team felt that our chatbot was successful in starting an open dialog about the complexity of protesting in support of Black Lives Matter in the NFL. The feedback from demo day shows how we achieved our goal of raising tension:

  • “It was clear what stance the chatbot took on the issue”
  • “I wish it didn’t make assumptions about my opinions”

One of our goals was to disagree with and challenge the user’s opinion. These responses show that our bot would consistently disagree and frustrate the user by extrapolating their opinion, thus creating tension.

Some criticisms we received concerned the use of emojis and the lack of diversity in responses. Our intention with emojis, as mentioned above, was to give the bot a personality and keep it from seeming two-dimensional. As for the lack of diverse responses, we agree with this feedback that it makes the conversation seem less human-like; the button responses were used only in the interest of time.

With more time, we would have liked to open up the conversation more, such as by taking away the choice/button responses. If the bot could truly understand the user’s viewpoint and gently oppose it, that would in turn create more tension and make a greater impact on the user, which was our goal.
