Design for Tension

Why gun control?

On October 1, 2017, a gunman opened fire on a crowd of concert-goers from his room at the Mandalay Bay hotel in Las Vegas, Nevada. Over 500 people were injured and 58 were killed, making the 2017 Las Vegas shooting the deadliest mass shooting in modern American history to date. As Seth Meyers so aptly put it, “congress always says [the aftermath of a mass shooting] is not the right time to talk about [gun control]…but it would be so much more honest if [congress] would just admit that [their] plan is to never talk about it” (Late Night With Seth Meyers 2017). Gun control has been a highly debated topic in the United States over the past couple of years, especially as the number of mass shootings, and the number of deaths they cause, has been increasing. There is a conversation that needs to be had about gun regulations, which is why we decided to build a chatbot that engages users on different ideas behind gun control and gun regulations.

The demo video for the chatbot can be found at:

The Final Product

The final result of the Design for Tension project was a chatbot designed to talk about gun control. Appropriately named Colt, this chatbot was given the persona of a young college student with the intention of speaking to members of the NRA about policies, regulations, and opinions on guns. Using Talkbot, the chatbot was integrated into Facebook Messenger and could be accessed by liking the bot’s page: “Cs379 gun control bot”. Some of the topics covered by Colt are background checks, the second amendment, regulations on ammunition and gun types, why people own guns, and proper gun safety.

The chatbot was designed to accept more than just simple answers. Colt’s setup within Talkbot allows him to accept multiple buzzwords for various questions, letting the user go beyond yes and no answers. There were also randomized responses and specific junctions in the conversation, so that a general part of the conversation could lead down one of many different paths.
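
To make this concrete, here is a minimal, platform-agnostic sketch in Python of how buzzword matching with randomized responses can work. The buzzword lists and canned replies below are invented for illustration; Colt’s actual matching was configured inside Talkbot rather than written as code.

```python
import random

# Hypothetical buzzword lists; Colt's real phrases lived in Talkbot's interface.
BUZZWORDS = {
    "protection": ["protect", "defend", "intruder", "family"],
    "hunting": ["hunt", "hunting", "deer"],
    "fun": ["fun", "range", "hobby"],
}

# Several candidate replies per topic, so repeated conversations vary.
RESPONSES = {
    "protection": [
        "A lot of people feel safer with a gun at home. How do you store yours?",
        "Protection is the most common reason we hear. Do you worry about accidents?",
    ],
    "hunting": ["Hunting is a big tradition. Would a background check change that for you?"],
    "fun": ["Plenty of people enjoy the range. Should anyone be able to buy any gun, though?"],
}

def reply(user_message: str) -> str:
    """Match buzzwords in the user's message and pick a randomized response."""
    text = user_message.lower()
    for topic, words in BUZZWORDS.items():
        if any(word in text for word in words):
            return random.choice(RESPONSES[topic])
    return "Interesting! Why do you think people own guns?"  # fallback junction

print(reply("I keep a gun to protect my family"))
```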

The Conversation — User Intents & Key User Inputs

We envisioned a chatbot able to talk on a vast range of topics in gun control, so that it could best sympathize with the person interacting with it. We began by breaking gun control into four topics: why one needs a gun, what types of gun (i.e., assault weapons, magazine capacity) civilians should be able to own, regulations on purchasing firearms, and regulations on how civilians take care of their firearms. We felt that these four broad categories would encompass many of the arguments commonly used when discussing gun control with other people.
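
As a rough illustration, this breakdown can be thought of as a small topic tree; the subtopics below are the ones discussed in the following sections, arranged here for clarity rather than taken from any actual configuration.

```python
# The four top-level topics and their subtopics, as described in this section.
TOPICS = {
    "why own a gun": ["for fun", "for hunting", "for protection"],
    "what types of gun": ["assault weapons", "high-capacity magazines"],
    "purchasing regulations": ["background checks", "watch lists"],
    "firearm care": ["gun safes", "safety mechanisms"],
}

for topic, subtopics in TOPICS.items():
    print(f"{topic}: {', '.join(subtopics)}")
```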

The first topic, why one needs a gun, was itself split into three subcategories: for fun, for hunting, and for protection. We found that this topic lends itself toward people who already own guns, so we knew that in discussing this section, our bot would have to sympathize with their opinions and present the opposing views in an unbiased manner. The main argument here is that guns can be used for fun, hunting, or protection; however, background checks or purchasing limits should still be enacted, because not everyone uses guns for these essentially harmless purposes.

The next topic, what types of gun, is slightly more polarizing. One interesting piece of research we found is that the 1994 Assault Weapons Ban didn’t actually lower crime rates, but it did lower the number of assault weapons in the United States. Here we tried to reason that people are simply not comfortable around assault weapons, whether they are in the hands of trained military officers or of a civilian who can afford one. This topic also covers high-capacity magazines, which we argued are unnecessary for any of the above reasons to own a gun and serve no real purpose in society.

Our bot also discusses the process of purchasing firearms, and how it is simply too easy in some cases. This is where the argument “guns don’t kill people, people kill people” can be turned around: even if people kill people, they do it with guns. Our bot discusses the constitutionality of owning guns and the right to bear arms itself. The bot links to a video showing what guns from the 1700s would look like in an office shooting today, and asks, “if guns have evolved, why haven’t our laws?” The topic of background checks, or the lack thereof, is also programmed into the bot, where it discusses prohibiting the sale of firearms to those with a history of mental health disorders or those on terrorist watch lists. While these parameters may seem unfair, we’d argue that owning a gun should be a privilege, not a right.

Our final category, how people take care of guns, is fairly straightforward. According to this CNN article, 5,790 children were injured by firearms annually between 2012 and 2014. Many of these injuries are preventable if parents properly store their firearms in a gun safe and ensure that the weapon’s safety is on. These simple but important steps hopefully elicit a response from gun owners by showing how people can be bad gun owners, and at the least encourage that person, if they own guns, to be smart about it.

Building The Personality

One of the first things we considered when developing our chatbot was the persona it would take on. We knew that people’s willingness to engage with chatbots depends on how lifelike they appear to be, but we also recognized that there is a point of diminishing returns: if the bot appears too lifelike, the user may be more guarded with their words and less willing to have a real conversation about gun control. As Adrian Zumbrunnen discusses, a chatbot’s ability to take on a real personality makes all the difference when designing for user engagement, so in the case of our bot, one meant to engage someone in a difficult conversation, we knew this would be crucial.

Our chatbot is designed to engage people who are less inclined to agree with gun regulations, so in developing its personality we kept in mind the kind of person it would most likely be speaking to. After some basic research on members of the National Rifle Association (NRA), we narrowed the scope of who we would be engaging and decided that our target audience was young men ages 18–25. Next, we named our bot and picked out an icon so the user felt like they were having a real conversation with someone. We gave our bot a lifelike name, Colt, thinking it felt unassuming enough that anyone would want to engage with him. We then chose a character from the television show South Park as the icon, to step back from the human-like characteristics we had thus far developed for the persona. By using an animated character as the icon, we hoped the user would feel just removed enough from the bot to be willing to have a real conversation with him.

Darryl Weathers, a South Park character and the face of the chatbot Colt; the inspiration for our chatbot’s personality, and someone we assumed our users could either relate to or would be open to talking about gun control with.

Pre-Programming Testing

With the personality of our chatbot set, the outline of our conversation in place, and the research on gun control completed, we were ready to begin pre-programming testing. Similar to Yogesh Moorjani in “Designing Chatbots,” we tested using the “play assistant” method. Our tests involved holding conversations with two separate participants, in which one member of our design team acted as the chatbot and responded to the participants’ messages accordingly. The purpose of these tests was to explore the directions in which people might take a conversation with the chatbot. Below are excerpts from the two test conversations.

These are early-stage tests on Slack, using classmates as test subjects.
These sample user tests were conducted through test messages to gather insight into potential user inputs.

From these test conversations, we learned several things about the ways in which people interacted with our chatbot.

  • It is important to ask not only whether a user owns a gun, but also whether family members own one
  • Even for “yes” or “no” questions, users may respond in a variety of ways (see the sketch after this list)
  • It is easier to predict a user’s response if the chatbot asks a question than if the chatbot simply provides a declarative statement
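
That second observation is easy to underestimate. Below is a minimal Python sketch of normalizing free-form replies to yes/no questions; the word lists are assumptions based on the kinds of answers our testers gave, not the actual matching Talkbot performed.

```python
import re

# Assumed phrasings; real testers were even more varied.
NO_WORDS = {"no", "nope", "nah", "not", "never", "don't", "dont"}
YES_WORDS = {"yes", "yeah", "yep", "sure", "definitely", "absolutely"}

def classify_yes_no(message: str) -> str:
    """Return 'yes', 'no', or 'unclear' for a free-form reply."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & NO_WORDS:  # check negations first so "I don't" never reads as yes
        return "no"
    if words & YES_WORDS:
        return "yes"
    return "unclear"      # fall back to a clarifying question

for answer in ["Yeah, my dad owns a few", "not really", "It's complicated"]:
    print(answer, "->", classify_yes_no(answer))
```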

With these points in mind, we moved on to the next step of the chatbot design process: the design flow.

Our Design Flow

An initial sketch of what we thought our design flow would look like.

Based on the results of the test conversations described in the pre-programming testing step above, we were able to develop rough flow diagrams for our conversation. As Yogesh Moorjani describes in “Designing Chatbots,” design flows are useful for evaluating the ways in which a chatbot interacts with users before the bot is actually finalized. Above is an early draft of the design flow for the first few interactions of the chatbot conversation. In this flow, we attempted to record paths for every possible response to the chatbot’s output. In some cases, these responses could be limited to a simple “yes” or “no”; in other cases, our chatbot would have to rely on word recognition to determine the most appropriate path from open-ended responses. Our design flow appears fairly complex, but this complexity was necessary for capturing the nuances of a conversation about a topic as tense as gun control.
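
A design flow like this maps naturally onto a small state machine: each node has a prompt and keyword-triggered transitions to other nodes. The node names, prompts, and keywords below are simplified stand-ins for our actual diagram, not a transcription of it.

```python
# A toy version of the flow: nodes with prompts and keyword transitions.
FLOW = {
    "greeting": {
        "prompt": "Hey! Do you or does anyone in your family own a gun?",
        "next": {"yes": "why_own", "yeah": "why_own", "no": "opinion"},
    },
    "why_own": {
        "prompt": "What do you mainly use it for?",
        "next": {"protect": "gun_safety", "hunt": "gun_types", "fun": "gun_types"},
    },
    "opinion": {"prompt": "What do you think about background checks?", "next": {}},
    "gun_safety": {"prompt": "Do you keep it in a gun safe?", "next": {}},
    "gun_types": {"prompt": "Should civilians be able to own assault weapons?", "next": {}},
}

def step(state: str, user_message: str) -> str:
    """Advance to the first node whose keyword appears in the user's reply."""
    for keyword, next_state in FLOW[state]["next"].items():
        if keyword in user_message.lower():
            return next_state
    return state  # no match: stay put and ask a clarifying question

state = "greeting"
print(FLOW[state]["prompt"])
state = step(state, "Yes, I own a rifle")
print(FLOW[state]["prompt"])
```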

The Script

In developing our chatbot, our main focus was the conversation it would be having with users. We wanted users to feel like they were actually talking to someone real, and therefore wanted our chatbot’s responses to seem lifelike. We approached this through two main lenses. The first was the Maxim of Quantity: providing the user with as much information as needed to further the conversation. Each time our chatbot reached a point that could potentially lead to a lull, we provided some piece of information that would push the user either deeper into the topic at hand or toward a different topic our chatbot was capable of discussing.

Second, we discussed ways to weave different conversation points into a single path. For example, on the topic of someone’s interest in guns, we anticipated that if someone said something along the lines of, “I have a gun so I can protect my family in the case of an intruder,” we could lead them through a series of questions ending in another conversation topic: gun safety and storage. Anticipating responses was useful not only for figuring out how different conversation flows could lead into each other, but also for making use of Talkbot’s word-detection feature. Users could then have real conversations with our chatbot rather than sticking to quick responses like “yes” or “no,” taking the next step toward a chatbot that feels lifelike and engaging.
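
As a concrete example of this weaving, here is a short Python sketch of how an anticipated “protection” answer can be steered into the gun safety and storage topic through a chain of follow-up questions. The cue words are assumptions, and the questions paraphrase the spirit of our script rather than quoting it.

```python
PROTECTION_CUES = ("protect", "family", "intruder", "defense")

# Follow-up chain that lands in the gun-safety topic.
SAFETY_CHAIN = [
    "That makes sense. Is the gun somewhere a child could reach it?",
    "Do you keep it locked in a gun safe?",
    "Do you leave the safety on while it's stored?",
]

def follow_ups(user_message: str):
    """Yield the next questions, weaving a protection answer into gun safety."""
    if any(cue in user_message.lower() for cue in PROTECTION_CUES):
        yield from SAFETY_CHAIN
    else:
        yield "What made you interested in guns in the first place?"

for question in follow_ups("I have a gun so I can protect my family"):
    print(question)
```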

A glimpse of what our actual design flow looked like.

Strengths, Weaknesses, and Improvements

Reflecting on the finished product and process, it was clear that Colt has a few weaknesses. One issue the team found was that Talkbot was a little buggy. At points during the conversation, it would sometimes add extraneous messages that belonged to other flows. During some of our demos, it would also send messages twice at random times. Another weakness seen during user testing was that many people didn’t understand that their responses could be more conversational. Much of what users gave the bot were yes and no answers, and only rarely did people make use of his ability to interpret more complex messages and maintain the conversation.

However, despite these weaknesses, Colt is still a well-functioning chatbot with many strengths. As mentioned previously, the randomized responses keep users from getting stuck in the same “paths” across multiple conversations. After a greeting from Colt, the user can follow the conversation in many different directions. Within these paths, Colt produces links to articles, YouTube videos, and contact information for state legislators throughout the conversation. This interactive quality lets users go beyond the chatbot and look at the kinds of things they would encounter in a real conversation with another human.

With more time, there are plenty of improvements that could be made to improve functionality and create more realistic user interactions. To start, there would probably be more success on a different platform than Talkbot. At first, the team attempted to use FlowXO, but ran into limitations in the number of flows a free account could use on a bot, and also found it more difficult to work with than Talkbot. From a user standpoint, there are a few fundamental changes that could have been added to the implementation. The first would have been adding some sort of delay and typing animation to the messages produced by Colt. While the user needs time to type and send a response, Colt’s reactions were immediate. Talkbot has features that allow these delays, which would have needed to be added before each response. Had there been a little more time on the project, this is something that could easily have been added. A second change concerns the length of the messages. Some of the statistics that Colt provided to users were lengthy and made some messages difficult to read. This could have been solved by breaking the messages up into two or three shorter ones, which would be less intimidating to the user.
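
Both fixes are straightforward to prototype outside of Talkbot. The sketch below splits a long reply at sentence boundaries into shorter chunks and pauses before sending each one, roughly simulating typing; the 200-character chunk size and typing speed are arbitrary choices, and `send` stands in for whatever send function the platform would provide.

```python
import re
import time

def send_in_chunks(message: str, send, words_per_minute: int = 200) -> None:
    """Split a long reply into shorter messages and send each after a delay."""
    sentences = re.split(r"(?<=[.!?])\s+", message.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk once the current one passes ~200 characters.
        if current and len(current) + len(sentence) > 200:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    for chunk in chunks:
        # Pause in proportion to the chunk length, as if Colt were typing.
        time.sleep(len(chunk.split()) / words_per_minute * 60)
        send(chunk)

send_in_chunks(
    "Here's a long, statistic-heavy reply. It runs on for several sentences. "
    "Splitting it up makes it feel far less intimidating to read.",
    send=print,
)
```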


All in all, designing for tense topics creates a lot of challenges that you often don’t face in other situations. For starters, it is extremely difficult to avoid adding your own biases to the chatbot. While chatbots are allowed to take stances, it is important to consider all sides of an argument and all of the various stances a user might have. Considering all of the arguments and anticipating what a response may be is extremely difficult. On top of that, creating that human element seems like it could be easy, but a computer is so calculated that it takes much more time and attention to detail to make a chatbot realistic. Colt handles the difficult topic of gun control well and tries to tie current events into his responses as he talks to users who primarily support the ownership and usage of guns. It was a unique challenge for us as “creators” of the chatbot, and eye-opening to see how a computer can make it feel like a real person is on the other side having an actual conversation.
