Dispatch

An app that connects users and allies with specialized knowledge to better address problematic content on Twitter

Iltimas Doha
Feb 19, 2018

This past fall semester, 80 teams composed of Cornell Tech and Parsons students were given design challenges by tech companies in New York City. Challenges ranged from how to create city infrastructure for electric vehicles to how to capture better wedding photos. My team was given the challenge, by Data & Society, of fighting hate speech on the Internet:

How might we help young people resist the influence of content designed to promote intolerance or spread hatred?

In the first weeks of conceptualizing a product that might help youth resist hate content, I facilitated design exercises to help the team understand the key agents within our system.

Initial System Diagram

We quickly came to understand that the factors influencing youth with hate content online were the same factors at work on a much larger sphere of influence.

Updated System Diagram. Wedges are potential ways to inject ourselves into the system

We found that by constructing a counterstrategy against content aimed at youth, we could potentially also counter content attempting to “red-pill” the community at large. We moved from ideas specific to youth, such as identifying and addressing early risk factors, to addressing content that all social media users would be privy to. We decided to tackle the content pipeline!

The manufacturing of content designed to spread intolerance and hate begins in fringe online communities such as 4chan and white nationalist forums. It then grows and spreads through both planned and organic channels, eventually finding its place on the social media feeds of young people. Collaboration between these online communities and influential provocateurs also makes this kind of content difficult to track and combat.

Particularly insidious is hate groups’ ability to target young people at the peer level. Acting as voices of authority, influencers can saturate young people with hateful messages with few to no counterarguments available. Young people may see hateful content as rebellious, edgy, or ironic, and may engage with it without realizing when they have crossed a line. The young people who engage in these activities come from surprisingly diverse backgrounds, yet the veil of anonymity that many platforms provide allows them to shift identities and beliefs at will without social consequence.

However, there are a few ways to help young people resist the influence of this content. It may be possible to intervene at the beginning of the “content pipeline” to identify potentially problematic content and its sources, delivering this information to relevant parties as a sort of early-warning system. Another possible intervention, at a later stage, would be to elevate the opinions of experts in discussions around controversial or inflammatory topics. A respected authority on a given topic may be able to steer the conversation away from speculation and conspiracy.

The goal of any technological intervention in this domain should be to inspire doubt in the narratives being pushed by hate groups. These groups are powerful, and they will continue to spread their message and find workarounds to any sort of censorship or filter put in place. “Strategic silence” and no-platforming are nearly impossible to coordinate in the era of decentralized media. Instead, by providing young people with additional viewpoints, context, and perspective when they see hateful content, we can cause them to question their own participation in its spread. Reduced youth participation may, in turn, lessen hate groups’ momentum and decrease their prominence in mainstream media and culture.

Understanding the delicate system we had to balance, we dove into sketches. And sketches. And more sketches.

As a team, we started to imagine what the product might look like in broad strokes. We took three minutes to construct a sort of storyboard of what our product did: in one post-it we laid out the framework, the scene before our product is released; in another we described succinctly what the product would accomplish; and in the last we showed what the scene looked like once the product was released.

Fictitious headlines are solely to communicate ideas unambiguously to team members and do not reflect our views; three-minute sketches go by fast!

After two rounds of sketching and voting for the strongest idea, we landed on our final sketch: “crowdsourcing facts.” The idea of bringing in expert opinions and counterpoints aligned with what Data & Society told us about effective techniques for countering hateful content.

We then spent 24 hours in a hackathon/sprint to come up with a concept and design a prototype. Starting with the initial idea of crowdsourcing facts, we devised a Chrome extension that allowed users to flag tweets (Twitter was the platform of choice for its flexible API); flagged tweets would be sent to our product and routed to a pool of vetted experts, who would return a counter-narrative for the user to inject back into Twitter threads.

Product Architecture V1
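As a rough sketch of how the V1 extension hand-off might look in code, here is a minimal TypeScript snippet: the extension captures a flagged tweet and posts it to our backend queue. The endpoint URL and payload fields are hypothetical stand-ins for illustration, not our actual API.

```typescript
// Sketch of the V1 flag hand-off: the extension captures a tweet and POSTs
// it to the product backend, which queues it for the expert pool.
// The endpoint and payload shape below are hypothetical.
interface FlaggedTweet {
  tweetId: string;   // Twitter's ID for the flagged tweet
  text: string;      // tweet body, shown to experts for context
  flaggedBy: string; // handle of the user who flagged it
  flaggedAt: string; // ISO timestamp of the flag
}

async function flagTweet(tweetId: string, text: string, user: string): Promise<void> {
  const flag: FlaggedTweet = {
    tweetId,
    text,
    flaggedBy: user,
    flaggedAt: new Date().toISOString(),
  };
  // Hypothetical backend endpoint that adds the tweet to the expert queue.
  await fetch("https://dispatch.example.com/api/flags", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(flag),
  });
}
```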

Many Twitter users recognize problematic content on their network, but they may lack the knowledge to confidently engage in dialogue. At the same time, experts with specialized knowledge lack the visibility and bandwidth to address all misleading/problematic content across the Twitter universe. We can connect these two groups and create an information loop, which starts by leveraging the concern of Twitter users to create a crowdsourced aggregation of problematic content. This aggregation can be filtered and ranked by categories and delivered to experts who contribute their knowledge, adding to a bank of counter-arguments, statistics, sources, and links. This information bank can then be used to help willing Twitter users combat problematic content and to further educate interested users about the topic.

Example of a hateful tweet
Mock-up of how product would serve up counter narratives to user
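To make the information loop above concrete, here is one way the crowdsourced aggregation and the expert queue could be modeled; the category names, fields, and flag-count ranking are illustrative assumptions rather than our production schema.

```typescript
// Sketch of the information loop's data model. Categories, fields, and the
// flag-count ranking are illustrative assumptions.
type Category = "conspiracy" | "harassment" | "misinformation";

interface AggregatedTweet {
  tweetId: string;
  category: Category;
  flagCount: number;          // how many users flagged this tweet
  counterArguments: string[]; // expert-contributed talking points, stats, links
}

// Filter the crowdsourced aggregation by an expert's category and rank the
// most-flagged tweets first, so expert attention goes where concern is highest.
function queueForExpert(pool: AggregatedTweet[], category: Category): AggregatedTweet[] {
  return pool
    .filter((t) => t.category === category)
    .sort((a, b) => b.flagCount - a.flagCount);
}
```

Ranking by flag count is one simple way to spend scarce expert bandwidth where user concern is highest; other signals (recency, reach of the tweet) could slot into the same sort.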

Once we had demoed a mock-up of our tool, we started to reach out to potential users and began surveying them to see how we could iterate on and validate the product and its features. Our initial survey indicated that a higher percentage of Twitter users are active on mobile, which caused us to pivot from a Chrome extension/website to a mobile app. In addition, our user tests indicated that some of the features we initially put in the mock user feed (e.g. stats for flagged tweets) might be confusing and unnecessary and should be removed for the next round of user testing.

Updated Product Architecture with transition from extension to iOS app *”Firefighter” is internal jargon for those who flag tweets

Finally, we started development of the iOS app. First, we distilled our product into a single core product loop.

This product arms online activists with more effective talking points to use when engaging with hate content on Twitter, which in turn dilutes the impact of this content for more passive users who see it in their feed. The loop begins when a “firefighter” (i.e. a zealous Twitter user who has the app installed) browses their own feed looking for hateful content. When they see this content, they use our app to flag it, and it is pushed to a central database of flagged content. For each flagged tweet, the app creates a discussion thread where experts can discuss their positions and distill talking points (if they deem the tweet worthy of discussion). Those talking points are then packaged and returned to the firefighter, who uses them to engage directly with the hateful content.
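A minimal sketch of that loop as a sequence of state transitions over a flagged tweet; the statuses and helper names here are hypothetical, chosen to mirror the steps described above.

```typescript
// Sketch of the core product loop as a state machine over a flagged tweet.
// Statuses and helper names are hypothetical, for illustration only.
type LoopStatus = "flagged" | "in-discussion" | "ready" | "engaged";

interface LoopItem {
  tweetId: string;
  status: LoopStatus;
  talkingPoints: string[]; // distilled by experts in the discussion thread
}

// 1. A firefighter flags a tweet; it enters the central database.
function flag(tweetId: string): LoopItem {
  return { tweetId, status: "flagged", talkingPoints: [] };
}

// 2. The app opens a discussion thread for the flagged tweet.
function openThread(item: LoopItem): LoopItem {
  return { ...item, status: "in-discussion" };
}

// 3. Experts discuss their positions and distill talking points.
function distill(item: LoopItem, points: string[]): LoopItem {
  return { ...item, status: "ready", talkingPoints: points };
}

// 4. The talking points return to the firefighter, who engages directly
//    with the hateful content, closing the loop.
function engage(item: LoopItem): LoopItem {
  console.log(`Engaging ${item.tweetId} with:`, item.talkingPoints.join(" / "));
  return { ...item, status: "engaged" };
}

// Walk one tweet through the full loop:
const result = engage(distill(openThread(flag("1234567890")), [
  "Primary-source statistics contradicting the claim",
  "Link to an expert explainer on the topic",
]));
```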

This is what that final core product loop looks like in practice:

2x Playback speed is suggested

I believe these first steps our team has taken are a unique and direct way of combating hate content. We acknowledge some potential risks in figuring out our expert vetting process and scalability, but these are challenges we are excited to tackle in the next few months.

Many thanks to danah, Joan, Matt, Khemi, Mikaela, Devon, Steve, Justin, Juliette, Michael, and Ally
