Why Can’t Twitter Solve Its Troll Problem?

In its desire to be an open communication platform, Twitter has devolved into a playground for trolls. Despite countless cases of rampant abuse, the company seems unwilling, or unable, to foster a sense of decency and safety for its users. Proponents of Twitter argue that this is a fundamental paradox of free speech, that it’s the Internet, that clamping down on hateful language amounts to censorship, that eliminating trolls is impossible or will backfire. I respectfully disagree; free speech should never give license to insult or threaten another human without consequences. Further, it is fully within Twitter’s power to create tools and policies that keep it from being a toxic environment for the very people who can ensure its success.

While Twitter has openly announced a desire to mitigate abuse, the solutions seem deeply flawed. Filtering options might make the user experience more pleasant, but just because I can’t see threats doesn’t mean they aren’t there. I’d personally rather know if I’m being harassed, especially if there’s even a slight possibility of a threat moving off the platform and into the real world.

Similarly, banning a handful of high-profile users does little to stop the onslaught of abuse perpetrated upon less well-known users. Women, especially, are disproportionately targeted with language that, were it used in the “real” world, would be grounds for a restraining order, if not jail time.

To deal with trolls, the very culture of Twitter needs to evolve. But this is neither as difficult nor as technologically challenging as it may sound. Features designed to improve abuse reporting and content moderation can have a massive, compounding effect in reducing the amount of abuse currently being perpetrated. For all I know, Twitter developers may already be considering some of these ideas, or even actively working on them. But here’s my top 10 (plus one) list of troll-stopping features:

  1. Twitter already has a UX flow for reporting abusive Tweets, but notably the form lacks a text field for additional context. Provide one so users have the option of adding more details.
  2. The end of the flow simply states that Twitter will investigate. Instead, provide a case number that the user can reference in future correspondence.
  3. Strengthen the terms of use to provide more concrete guidelines expressly forbidding harassment and hate speech, as well as enumerating the consequences of abusing other users.
  4. Pay a large staff to handle the flow of logged complaints. Provide direct customer service phone numbers and/or IM chats for complainants to contact somebody at the company. Include processes for appealing unfair bans.
  5. If it doesn’t exist already, create an internal ranking system for complaints. Increased numbers logged against a specific tweet or user elevate those complaints above others lodged against tweets/users with fewer flags.
  6. After a certain threshold of complaints is exceeded for a specific tweet/user, automatically trigger a temporary ban of the harasser while investigations take place. Consider hellbanning (shadow banning) tactics, so that abusers aren’t necessarily aware that they’re under investigation.
  7. After a certain, even higher threshold is exceeded, a full suspension of the account automatically goes into effect until the situation/context can be properly vetted.
  8. Add multipliers to the ranking system. Complaints logged against users who follow, or have followed, other banned/reported users are prioritized above complaints against users who do not follow other abusers. This will discourage the pile-ons that result when a provocateur encourages their followers to abuse another user (e.g. Milo Yiannopoulos vs. Leslie Jones).
  9. To prevent abuse of the abuse-reporting system itself (such as users trying to get another user banned unfairly), create a system that tracks usage of the reporting system. If User A reports an incident that is determined not to be a TOS violation, log that User A has incorrectly entered a complaint, and notify User A that the reported tweet does not violate the TOS for reasons x, y, z. Apply the prioritization/banning tactics from the steps above to people who abuse the abuse-reporting system, and notify them that abuse of the abuse-reporting feature is *also* a TOS violation.
  10. Monitor and utilize other signatures (emails, phone numbers, IP addresses) to prevent the creation of new accounts by people who have had their accounts banned. Reward accounts verified via multiple means (email, phone, driver’s license), and target less-verified accounts for quicker suspension in the event they are flagged for abusive behavior.
  11. Be transparent about suspensions. Publicly display examples of breaches of TOS that resulted in bans. Keep an open and public dialog with the user base regarding violations.
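
To make the ranking ideas in items 5 through 8 concrete, here is a minimal sketch of how a weighted complaint score with escalating thresholds might work. Everything in it is an assumption for illustration: the class name, the threshold values, and the association multiplier are hypothetical, not a description of any real Twitter system.

```python
from collections import defaultdict

# Hypothetical values -- a real system would tune these carefully.
TEMP_BAN_THRESHOLD = 10       # item 6: temporary ban pending investigation
SUSPEND_THRESHOLD = 25        # item 7: full suspension pending review
ASSOCIATION_MULTIPLIER = 2.0  # item 8: weight for followers of known abusers

class ComplaintTracker:
    def __init__(self):
        self.scores = defaultdict(float)  # reported user -> weighted score
        self.suspended = set()            # users under full suspension

    def log_complaint(self, reported_user, follows_known_abuser=False):
        """Record one complaint; weight it higher when the reported
        user follows other banned/reported accounts (item 8)."""
        weight = ASSOCIATION_MULTIPLIER if follows_known_abuser else 1.0
        self.scores[reported_user] += weight
        return self.status(reported_user)

    def status(self, user):
        """Apply the escalating thresholds from items 6 and 7."""
        score = self.scores[user]
        if score >= SUSPEND_THRESHOLD:
            self.suspended.add(user)
            return "suspended"
        if score >= TEMP_BAN_THRESHOLD:
            return "temp-banned"  # item 6: could be a hellban
        return "active"

    def review_queue(self):
        """Item 5: users with more weighted flags are reviewed first."""
        return sorted(self.scores, key=self.scores.get, reverse=True)
```

The same score-and-threshold machinery could also back item 9: reporters whose complaints are repeatedly judged baseless would accrue their own weighted score and face the same escalating consequences.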

Twitter currently does not seem to have systems in place to deal with the massive amount of harassment taking place on its platform. With people and corporations abandoning the platform en masse, the company’s future doesn’t look bright. The platform does provide a ton of public good, historically aiding oppressed communities and giving people a fearless way of communicating directly with each other, as well as with those in power. But if it doesn’t quickly and decisively deal with its troll problem – and it’s not as hard a task as its management may think – Twitter simply will not survive.