Introducing Chatpool: Promoting Civilized, Constructive Conversation in a Politically Polarized Society

Inyoung Choi
Dec 14, 2017

In our digital age, we are witnessing increasing political polarization, fueled by ideological echo chambers on social media. Internet users can easily isolate themselves in silos by choosing to network only with select social circles that oftentimes simply reflect their own political views.

  • Stanford University Professor of Political Science Shanto Iyengar notes in a study that while only around 5 percent of Republicans and Democrats reported that they would “[feel] ‘displeased’ if their son or daughter married outside their political party” in 1960, nearly 50 percent of Republicans and over 30 percent of Democrats “felt somewhat or very unhappy” about inter-party marriage in 2010.
  • According to the Pew Research Center, around half of Democrats and Republicans say the other party makes them “afraid”.
  • A study by Stanford University Professor of Economics Matthew Gentzkow notes that Democrats' and Republicans' relative favorability toward their own party has grown by over 50 percent since 1990.

This phenomenon of increasing political polarization is evident in the online space.

We wanted to create an online tool that could address this growing polarization and hostility. By creating a space where journalists and their respective readers can come together to share both sides of a topic, we hoped to cultivate empathy across the ideological spectrum.

To summarize, our team member Marnette identified our problem as follows:

To be a well-rounded, informed citizen, it's important to see viewpoints that are different from your own. But looking at comments on articles and on social media, it seems that people are not interested in hearing both sides of the argument. Sifting through these platforms, especially when it comes to controversial topics, is not a pleasant experience. People call each other names and there's a lot of profanity. The level of discourse is pretty horrifying. It's clear that online conversations don't encourage a desire to actually understand other people's perspectives.

Here’s our main task:

How do we help people have conversations around controversial topics in a civilized way, and how do we get them to empathize with each other's points of view?


Step 1: Brainstorming Ideas

We started our project by brainstorming ideas on what a civilized dialogue in an online space would entail. Here are some questions we asked ourselves:

  • How do we get journalists with opposing views to use this space in the first place? How can we incentivize them to participate?
  • What characteristics would a good moderator possess? How do we select the moderator?
  • How can we use computational tools to help the moderator facilitate the debate?
  • How can we use computational tools to help each participant of the discussion clearly communicate their ideas and emotions, despite the fact that they are in an online space? How can we make the space as similar as possible to an in-person discussion?
  • How can we make the discussion informative for the audience? How can we make the discussion entertaining? How do we make it accessible?
  • What does it mean to have empathy towards the other view? How do we check if our platform meets this goal of cultivating understanding?

Our team exchanged a mix of ideas. Our diverse backgrounds and interests led us to many thought-provoking discussions. Half of us are undergraduates in college and half of us are seasoned professionals in media. We came from backgrounds in both tech and journalism. We grew up in different geographic regions.

Here’s a few snapshots of our initial scramble for ideas:


Step 2: Inspiration

We researched other models that brought people of opposing views together in hopes of building an enlightening and civil conversation. Our team member Lisa Rossi found a few sources of inspiration for our team to benchmark.

Here’s what Lisa found:

1) A journalist’s role could be reimagined as a facilitator. A notable influence for us came from an Online News Association meetup in October at Wired Magazine. There, moderator Andre Natta, a John S. Knight Journalism Fellow, asked panelists how journalists could engage audiences so they become collaborators in the process, not just consumers. Yvonne Leow, founder of The Hello Project, which invited people with opposing views to have a video chat with each other, said she learned to set expectations before a conversation unfolds. From these two concepts, we designed a world where two news consumers could talk through their differences in a moderated online environment, with expectations for civility outlined at the outset. In that space, the act of journalism (interviewing and gathering information) was happening between news users as a conversation unfolded, as opposed to in a formal interview.

2) A different kind of debate could be centered around understanding an opposing view instead of simply prevailing over it. As our team discussions evolved, we realized that building empathy and understanding between the two sides was a primary goal. We sought influences outside of journalism. We listened to “Where Should We Begin?”, a marriage counseling podcast by Esther Perel, who asks people to practice “integrating the experience of the other” as they construct their own arguments. This makes the conversation less polarizing. Inspired by that approach, we gave the moderator prompts within the app to teach participants how to summarize their opponent’s arguments.

3) Deep listening builds empathy. We still needed more help to equip our moderator with prompts to coach participants towards empathy when the conversation became heated. We read the teachings of Thich Nhat Hanh, a Vietnamese Buddhist monk. We used his ideas to populate the app with tips for de-amplifying conflict, which included asking questions of the other person before firing back with an opposing opinion (a method of deep listening) and listening without judging or reacting, a technique that is also often taught and practiced by the best journalistic interviewers.


Step 3: Prototyping and User Testing

We started prototyping by considering the following:

  • How can we use computational tools to facilitate the discussion?
  • Should the moderator be a bot?
  • Should the interface be real time?

At first, we thought a bot that facilitates the discussion according to a fixed set of prompts would prove most effective. Here’s an overview of how we envisioned the process:

We tested this model by inviting two Stanford undergraduates, who identified as conservative and liberal respectively, to participate in a discussion on gun control. We simulated a bot by preparing a fixed set of prompts and posting them at predetermined times. For example, we would state “User 1, please share your opinion” before giving User 1 five minutes to respond.
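For reference, here's a minimal sketch of what that fixed-prompt bot amounts to in code. This is an illustrative reconstruction, not the script we actually used; the prompt wording and five-minute wait come from the description above, and `postMessage` is just a stand-in for whatever delivers text to the chat.

```typescript
// A minimal sketch of the fixed-prompt moderator we simulated by hand.
// Prompts and timings are illustrative, not our exact test script.
interface ScriptedPrompt {
  text: string;
  waitSeconds: number; // time the addressed user gets to respond
}

const script: ScriptedPrompt[] = [
  { text: "User 1, please share your opinion.", waitSeconds: 300 },
  { text: "User 2, please share your opinion.", waitSeconds: 300 },
  { text: "User 1, you may freely ask any questions.", waitSeconds: 300 },
];

// postMessage is a stand-in for the chat transport.
async function runModeratorBot(postMessage: (text: string) => void): Promise<void> {
  for (const prompt of script) {
    postMessage(prompt.text);
    // Wait the allotted time before posting the next prompt.
    await new Promise<void>((resolve) => setTimeout(resolve, prompt.waitSeconds * 1000));
  }
  postMessage("Thank you both. The discussion is now closed.");
}
```

As the test below shows, this rigidity is exactly what tripped us up.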

Here is how the user test went:

Note: The two users in this test both knew of each other’s identity and knew each other in person. Our original prototype question was “Is there a higher likelihood of empathy if two people with differing political views have an existing connection?”

Following the user test, we asked each participant for their feedback. Here are a few of their suggestions:

  • One user wrote that they “agreed with no part of their (the opponent’s) side”
  • Both users indicated the conversation did not enhance their perspective on the topic
  • Both wanted more time for discussion

As the feedback suggests, the conversation model did not effectively facilitate a civilized, constructive dialogue. The following were our takeaways from this test:

Time limit:

  • Do we set a time limit before one can type an answer (to allow time to process and empathize instead of firing back an immediate rebuttal)?

The time limit felt too forced and did not allow each participant to contribute constructive ideas flexibly. At this point, we considered changing our platform from a real-time discussion to a chatroom that each participant could revisit at a time of their choosing.

Bot dialogue:

  • The users easily dismissed the bot’s dialogue. Should we use a human moderator instead?
  • Should the moderator ask specific questions about each user’s viewpoints? (Instead of simply stating “User 1, you may freely ask any questions”)

At heated points of the discussion, the participants ignored the Moderator Bot’s guidelines and pressed on with their arguments. We realized that a human moderator could more flexibly shape a discussion that best simulates an in-person, “human” conversation.

Content:

  • Do we initially start with articles? (Readers can participate by submitting articles they want to put to debate)

We realized that simply suggesting a broad topic like “gun control” failed to establish an effective foundation for constructive discussion. We decided to add computational tools that let the moderator and users link to articles as concrete points of discussion.

Familiarity:

  • Do we add more components to the user introduction? (e.g. mutual hobbies)

We realized that the participants felt too disconnected from each other to reconcile ideological differences. To solve this issue, we decided to use a few icebreakers. For example, we asked each user to share a story about their family so that the two users could “humanize” each other despite conversing in a virtual, online environment. By establishing common ground, we could cultivate familiarity between the two users so that they would be more willing to understand each other’s views. In addition, the more “human” the discussion became, the more interesting it would be for the audience.

Rewarding empathy:

  • Do we incentivize the use of certain components of discussion? (e.g. citing reputable sources)
  • In what ways do we measure the outcome apart from checking if they can restate the opponent’s opinion?

We started to reframe our understanding of what “empathy” meant in a civilized discussion. What was our ultimate objective? Instead of aiming to reconcile ideological differences so that both users could reach a consensus on a topic, we decided to aim for both users to clearly and accurately demonstrate their understanding and acknowledgement of the opposing view. Our goal was to create a civilized, constructive environment for discussion that would accurately present both views. We would incentivize each user to participate in such a discussion by letting the audience evaluate each participant’s level of empathy afterward: the audience would vote on which of the two participants was more empathetic and constructive in the discussion.

The audience would benefit from this outcome by seeing both sides of an argument in one space. The users (whom we envisioned to be journalists) could benefit by communicating their ideas to an audience beyond their primary readership, since readers of the opposing journalist would visit the discussion.


Step 4: Reconstructing our Platform

With these considerations in mind, we recreated our platform. Here are a few key changes we made to the platform:

  • Moderator — Instead of a bot, we decided to make a selection of guidelines for the human moderator. For each stage of the discussion, the human moderator would have a selection of prompts we created to help them facilitate the discussion. These prompts would include icebreakers that would help each user establish familiarity with his/her opponent.
  • User — We wanted to create a more “human” experience for the user. We decided to create an “emoji” button so that the user could express how they felt at each point of the discussion. This would prevent the miscommunication that arises in a text-only space, where users may have trouble gauging how their opponent truly feels.
  • Cultivating Empathy — We set the primary goal of our discussion to confirm each user’s understanding of each other’s opinion. By the end of the discussion, we would ask them to summarize each other’s opinion and confirm whether they felt the opponent accurately represented their own viewpoint.

Here’s a video that demonstrates our new idea:

Given this new idea, our team member Leilani coded an initial model for our platform.

Here’s Leilani’s explanation and demo of our app:

From a technology standpoint, it seems that the features needed for the product already exist in other popular applications like Facebook Messenger. However, it was important to have full control over each part of the technology to structure the right kind of space: one that feels intimate enough for users to comfortably share their thoughts, but formal enough for users to maintain respect for each other and for the moderator. Furthermore, we wanted to emphasize empathy by requiring users to attach an emoji reflecting their current emotion to each message they send. I used the React/Redux framework to create this web application and stored user and conversation entities in a realtime database hosted on Firebase.
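To make the message model Leilani describes concrete, here's a rough sketch of how a message carrying a required emoji might be written to the Firebase Realtime Database. The schema (a `conversations/{id}/messages` path and these field names) is an assumption for illustration, not Chatpool's actual data model, and we use the current modular Firebase SDK for brevity.

```typescript
// Sketch only: the "conversations" path and field names are assumptions,
// not Chatpool's actual schema.
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push } from "firebase/database";

interface ChatMessage {
  senderId: string;
  text: string;
  emoji: string;  // required: how the sender feels at this point of the chat
  sentAt: number; // Unix timestamp in milliseconds
}

const app = initializeApp({
  databaseURL: "https://your-project.firebaseio.com", // placeholder URL
});
const db = getDatabase(app);

function sendMessage(conversationId: string, msg: ChatMessage) {
  if (!msg.emoji) {
    throw new Error("Each message must include an emoji reflecting the sender's emotion.");
  }
  // push() stores the message under an auto-generated key, preserving order.
  return push(ref(db, `conversations/${conversationId}/messages`), msg);
}
```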


Final Stage: Our Final User Test

We created Chatpool, an online space for civilized and constructive discussion. We envisioned it to mirror a chat during a carpool ride. It’s difficult when the other person disagrees with you, but you want to continue the discussion until the ride is over!

We made a few final revisions to our app and gave it one final round of user testing. For the user test, we had two Stanford undergraduates discuss their viewpoints on ethnic-themed dorms.

Here’s a step-by-step walkthrough of our app:

First, our app helps us FIND both sides of an argument. We used computational tools to survey users about their political ideologies and backgrounds. We then used this data to match users up: we wanted pairs who shared some level of similarity (e.g. age, geographic background, values, interests) but held different opinions on the topic of debate. We’re trying to find the right match; it’s like a dating app for political discussion.
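Here's a toy sketch of the kind of scoring we had in mind: pair people who overlap on background but take opposite stances on the topic. All of the fields and weights below are illustrative, not the actual survey or matching logic we built.

```typescript
// Toy sketch of the matching idea: similar background, opposite stance.
// Fields and scoring weights are illustrative assumptions.
interface Profile {
  id: string;
  age: number;
  region: string;
  interests: string[];
  stance: "pro" | "con"; // position on the topic under debate
}

function similarity(a: Profile, b: Profile): number {
  const ageScore = 1 - Math.min(Math.abs(a.age - b.age) / 20, 1);
  const regionScore = a.region === b.region ? 1 : 0;
  const shared = a.interests.filter((i) => b.interests.includes(i)).length;
  const interestScore = shared / Math.max(a.interests.length, b.interests.length, 1);
  return (ageScore + regionScore + interestScore) / 3;
}

// Best partner: opposite stance, highest background similarity.
function findMatch(user: Profile, pool: Profile[]): Profile | undefined {
  return pool
    .filter((p) => p.id !== user.id && p.stance !== user.stance)
    .sort((a, b) => similarity(user, b) - similarity(user, a))[0];
}
```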

Now it’s time to HEAR both sides. In this first stage of the discussion, the moderator uses one of the Introduction and Icebreaker prompts to let the users introduce themselves to each other. The goal of this part of the discussion is to establish common ground between the users. The moderator also clarifies the rules and guidelines of the discussion to create a respectful environment for civilized dialogue.

Note: The users in this test did not know each other’s identity. Each user was given a designated position (Pro/Con) on the given topic and asked to argue from that assigned perspective.

In this portion of the discussion, we asked each user to state their opinion. The moderator could flexibly use the prompts to guide a constructive discussion. When the debate became too heated, the moderator could click the “help” button for moderation advice drawn from Thich Nhat Hanh’s teachings.
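For illustration, the “help” button could be as simple as surfacing one de-escalation tip at a time. The strings below paraphrase the deep-listening ideas from Step 2; they are not the app's actual prompt list.

```typescript
// Sketch: the "help" button surfaces one de-escalation tip at a time.
// Tips paraphrase the deep-listening ideas from Step 2 (illustrative only).
const deEscalationTips: string[] = [
  "Ask your partner a question about their experience before firing back.",
  "Listen without judging or reacting; just try to understand.",
  "Restate what you heard and ask whether you understood correctly.",
];

function getModerationTip(): string {
  const index = Math.floor(Math.random() * deEscalationTips.length);
  return deEscalationTips[index];
}
```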

We then confirmed each user’s understanding of the opposing viewpoint by asking them to summarize the other user’s rationale.

Let’s SHARE both sides! After the discussion came to an end, we disclosed the chat log to the public. Reading the transcript, the public could vote on who they thought was the more empathetic, constructive contributor to the discussion. This evaluation component incentivizes participants to engage in constructive discussion to the best of their ability.
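The vote itself reduces to a simple tally. A sketch, assuming each audience member submits the ID of the participant they found more empathetic:

```typescript
// Sketch: count one vote per audience member for the participant
// they found more empathetic. Participant IDs here are hypothetical.
function tallyVotes(votes: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const v of votes) {
    counts.set(v, (counts.get(v) ?? 0) + 1);
  }
  return counts;
}

// e.g. tallyVotes(["user1", "user2", "user1"]) -> Map { "user1" => 2, "user2" => 1 }
```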

You can try the app out by clicking this link.

One moderator and two participants can enter the discussion with the same code (the moderator can use any combination of numbers and letters for the discussion code).
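Under the Firebase layout sketched earlier, joining by code could look like the following; the `members` path and role names are assumptions for illustration, not the app's actual implementation.

```typescript
// Sketch: all three people key into the same room by sharing the code.
// The "members" path and role names are illustrative assumptions.
import { getDatabase, ref, set } from "firebase/database";

type Role = "moderator" | "participant1" | "participant2";

function joinDiscussion(code: string, role: Role, userId: string) {
  const db = getDatabase(); // uses the default Firebase app initialized earlier
  return set(ref(db, `conversations/${code}/members/${role}`), userId);
}
```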


Reflections and thoughts:

This is the final version of our app, but we also received much constructive feedback from our user tests. Here are a few suggestions we plan to consider:

  • How do we envision journalists using this in a newsroom?

As of right now, we see Chatpool as a social media platform that could enable journalists with different readerships to come together and share their ideas with both their own readers and their opponent’s readers. It’s like a Twitter war (which often goes viral), but more civilized, clear and constructive. It could also be a way for local journalists to engage in dialogue with national newspapers. We would love suggestions on how we could make Chatpool a more effective, applicable tool for journalists.

  • How do we ensure the moderator is not biased?

Selecting a good moderator is key to this platform. We think it would work best if the audience could suggest moderators (journalists with a strong online presence). Journalists who participated in discussions and were frequently voted “empathetic” could also be nominated as moderators.

We’re always looking for more feedback. Civilized, constructive discussion is incredibly important in today’s polarized digital age, and we want to do our part to help solve the problem. Please feel free to tell us what you think!

About our Team

Marnette Federis: Marnette is a multimedia journalist and currently works as a program administrator for the Stanford Journalism Program.

Leilani Reyes: Leilani is a senior at Stanford studying computer science. She is interested in emerging technologies like computational journalism and blockchain.

Lisa Rossi: Lisa is a 2017–2018 John S. Knight Journalism Fellow at Stanford. She is a storytelling coach at the Des Moines Register.

Inyoung Choi: Inyoung is a sophomore at Stanford. She is interested in exploring issues around online media such as misinformation and freedom of speech.
