Twitter Cyber Harassment, Deconstructed


Overview

In this piece, we’ll cover the pertinent background of (cyber) harassment on Twitter. Then, we’ll ask the thought-provoking questions that define the hazy battlefield that is ‘fixing Twitter’. Lastly, we’ll discuss potential solutions, good and bad alike. The goal of this piece is not to convince you, the reader, of any one particular course of action, but rather to provide fundamental analysis that can facilitate conversation about how to solve the issue of Twitter cyber harassment going forward.

First, some due diligence:

Legal Precedent

In the US, acts of harassment that don’t qualify as Civil Rights violations fall under the jurisdiction of state governments and their respective regional laws. Examples of Civil Rights violations would be [1] “acts of discriminatory harassment that target individuals based on race, color, national origin, sex, disability, or religion”, or [2] “actions taken by an individual that create a hostile environment at school. That is, it is sufficiently serious that it interferes with or limits a student’s ability to participate in or benefit from the services, activities, or opportunities offered by a school.”

Notable Example:

Of late, there have been a few mainstream cases relating to mass cyber harassment on the platform. One of the most prominent is that of Leslie Jones (actress and cast member of Saturday Night Live), whose phone/iCloud account was hacked and whose nude pictures were leaked online.

So, where does this leave us?

Typically, corporate entities whose platforms are used to carry out these attacks have been cooperative in helping prosecute or halt the particularly aggressive individuals who fall under these categories. However, as disconcerting as it may be, telling someone ‘I wish you would kill yourself’ or ‘I wish you would die’ isn’t technically a threat in the eyes of the law. This fact by no means minimizes the emotional trauma inflicted on the victim, but it’s critical to understand that, legally, there is no basis for the authorities to step in and prosecute. Crossing the threshold from ill-wishing to threat requires both intent and the plausibility of execution.

“social media platforms may find the situations much more manageable by addressing content visibility for victims.”

Fast, easy onboarding and low personal accountability uphold freedom of speech, but they yield negative externalities. A Twitter account takes only about 24 seconds to sign up for. This fast signup lends itself to anonymity and high account turnover compared to other social media platforms. Users (content consumers, producers, and hybrids alike) buy into and accept this ‘social contract’ when choosing to use the platform, whether they realize it or not. Twitter’s success is without question still reliant on retaining users, but the billion-dollar question then becomes: ‘at what point will restricted speech cause users to leave the platform?’ As a platform whose success was built on the mantra of free speech, an ethos of limited restraints is something Twitter presumably wants to uphold.

The flaws of commonly proposed solutions:

  1. Simply put, IP banning won’t work in practice. Most people have dynamic IP addresses, meaning that with a router restart or a simple command-line script, they could be up and running again instantly. The IP-ban proposition operates under the assumption that harassers use Twitter exclusively in their own home, as the only Twitter user in said household, and on their own network. This idea quickly unravels when you loop in cellular-network Twitter users, or anybody who uses Twitter on a network they don’t own/manage themselves (public WiFi, coffee shop, workplace, school, etc.). It doesn’t make much sense to ban an entire college, coffee shop, or company from using Twitter because of a single abuser on the network (an abuser who may not even be a member/student/employee of that network)! Besides, there are always VPNs, and if the trolls are as willing as they seem to be, they’ll find ways around any sort of IP ban. There are simply too many points of failure for this solution to adequately prevent harassers from using the platform post-ban without also restricting access for other users who aren’t in any way associated with the incident (a toy illustration of the shared-network problem follows this list). To go another step further down the rabbit hole, iCloud/Google Play account synchronization is simply too invasive to be used in the name of curbing harassment. Even if implemented, the app-account solution wouldn’t account for people who use Twitter in a mobile browser (unless all device MAC addresses were passed through to Twitter, which would be absurdly invasive).
  2. Twitter account/tweet rating system: The idea of benevolent users upvoting each other’s content sounds great in theory, but what prevents trolls/abusers from upvoting each other to brigade such a system, or downvoting content/people that they know their victim would want to see? Or, more directly, simply downvoting their victims’ content, thereby silencing and oppressing the victim further? Adamant harassers/trolls will game such a system unless there is a way to uniquely identify them and prevent them from creating new accounts, or to block content that fits certain criteria (a toy brigading simulation follows this list). In its current iteration, Twitter allows blocking any user with two clicks. This, of course, becomes a scaling issue for users with hordes of people harassing them, where individually navigating to block each and every one becomes unreasonable; nor does blocking actually prevent the person from experiencing the harassment, it simply prevents that one harasser’s single account from being used in the future. Viewing/filtering content that only passes a certain like/retweet threshold seems a tad controlling and presents enough other (mainly negative) externalities that it’s hard to justify. Twitter already has likes/retweets as a mechanism for increasing the visibility of content that its communities see fit. Besides, one of Twitter’s unique strengths is its raw feed, where Facebook and Instagram have both gone the way of algorithmic sorting, to much bemoaning of their respective communities.
  3. Connecting with a rep at Twitter to report harassment could potentially ameliorate the impacts of harassment, but it isn’t a true solution to the existence of trolls and harassment on the platform, for two reasons. First, it’s not scalable, resource-wise or temporally: the number of representatives required to staff these calls would quickly balloon (a back-of-the-envelope estimate follows this list). Second, it doesn’t in any way inhibit future harassment. Unless these representatives were assigned to moderate someone’s @mentions, or to police individuals who harass continually, it wouldn’t be effective, and its labor intensiveness makes it even less so. If the underlying technical challenges discussed above still exist, what exactly is a Twitter representative left to suggest or do for the user? Speaking with a real person obviously offers a baseline level of relief and understanding, but if it can in no way lead to an actionable outcome, it may not make sense for Twitter to offer it as a company. In the end, it’s a bandaid fix that addresses the symptoms rather than the actual problem.
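To make the shared-network problem from point 1 concrete, here’s a toy sketch (all addresses are invented, drawn from documentation IP space): many users egress through one public IP, so banning that IP blocks all of them at once while the actual harasser just hops networks.

```python
# Toy illustration (all addresses invented) of why IP bans over-block:
# many users behind one shared network egress from the same public IP,
# so banning that IP bans every one of them.
banned_ips = {"203.0.113.7"}  # the harasser's current (shared) public IP

users_behind_campus_nat = ["harasser", "student_a", "student_b", "professor"]
campus_public_ip = "203.0.113.7"

for user in users_behind_campus_nat:
    blocked = campus_public_ip in banned_ips
    print(f"{user}: {'blocked' if blocked else 'allowed'}")
# Every user on the campus network is blocked, while the harasser can
# simply switch to cellular data (a different IP) and carry on.
```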
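For point 2, here’s a toy simulation (every number invented) of how cheaply created sockpuppet accounts can brigade a naive net-vote rating system and bury a victim’s tweet below any visibility threshold:

```python
# Toy simulation of vote brigading against a naive net-score system.
def net_score(upvotes: int, downvotes: int) -> int:
    return upvotes - downvotes

victim_tweet = {"up": 12, "down": 0}   # organically positive reception
sockpuppets = 30                        # trivially cheap to create

victim_tweet["down"] += sockpuppets     # coordinated downvote brigade
score = net_score(victim_tweet["up"], victim_tweet["down"])

VISIBILITY_THRESHOLD = 0                # assumed: hide negative-score tweets
print(score)                            # -18: the victim's tweet is now hidden
```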
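And for point 3, a back-of-the-envelope staffing estimate; every figure below is an assumption for illustration, not a real Twitter number:

```python
# Back-of-the-envelope estimate of support staffing (all numbers assumed).
reports_per_day = 1_000_000        # assumed volume of harassment reports
minutes_per_call = 15              # assumed time a rep spends per case
minutes_per_shift = 8 * 60         # one rep's working day

cases_per_rep_per_day = minutes_per_shift / minutes_per_call   # 32 cases
reps_needed = reports_per_day / cases_per_rep_per_day
print(round(reps_needed))          # ~31,250 reps, before breaks or coverage
```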

ACTIONABLE SOLUTIONS:

1. ‘Premium/confirmed/verified’ Twitter accounts

In a world where people can opt in to linking more unique personal information during account signup, Twitter could prevent abusers from remaking accounts while still giving first-time users a simple, fast onboarding process. Linking something as simple as a single phone number for two-factor authentication at signup, with duplicate ‘premium accounts’ under the same phone number disallowed, would go a long way towards combating much of this harassment. As with harassment on most platforms, a small number of people are responsible for a disproportionately large share of it, in both quantity and severity. People who feel harassed could then choose to display only tweets directed at them by ‘premium users’.
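A minimal sketch of what the duplicate-phone-number check could look like (all names and the normalization scheme here are assumptions for illustration; a production system would use proper E.164 normalization and a database unique index):

```python
# Minimal sketch of one-premium-account-per-phone-number enforcement.
import hashlib

_registered_phone_hashes = set()  # stand-in for a database unique index

def normalize(phone: str) -> str:
    """Crude normalization: keep digits only (a real system would use E.164)."""
    return "".join(ch for ch in phone if ch.isdigit())

def register_premium(username: str, phone: str) -> bool:
    """Allow at most one 'premium' account per phone number."""
    digest = hashlib.sha256(normalize(phone).encode()).hexdigest()
    if digest in _registered_phone_hashes:
        return False  # duplicate phone: refuse a second premium account
    _registered_phone_hashes.add(digest)
    # ...send an SMS verification code here before finalizing signup...
    return True

print(register_premium("alice", "+1 (555) 123-4567"))  # True
print(register_premium("alice2", "15551234567"))       # False, same number
```

Hashing the normalized number means the platform wouldn’t need to retain raw phone numbers to enforce the one-premium-account rule.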

2. Rallying the community to catalyze social change

Although the most idealistic solution of all, this addresses the fundamental cause of online harassment (people), and therefore has the largest hypothetical impact. Being able to identify online harassment is a critical first step towards minimizing it. By calling out cases of online harassment, users can help reporting systems more effectively remove toxic content. And if users are more cognizant of what they say online, unintended harassment by individuals without malicious intent could decrease. Still, in the age of the “fake news” dilemma, where people are largely unable to discern fact from fiction, I find the expectation that users will set personal feelings aside and call out members of their own ideological camp to be idealistic at best, and destructively tribalistic at worst.

3. Machine learning harassment-buster (internal or external)

Machine learning models could be trained to flag content that is most likely to be harassment; when a model is confident enough, it could even have the power to delete the content outright. Training would involve manually labeling posts confirmed to be harassment (or using the last tweet(s) directed at a person before that person blocked the sender). There are, without question, tweets that require no context to deem unallowable on Twitter. Although Twitter may already have something like this working under the hood that I’m not familiar with, I believe there is much more work to be done in expanding these (potentially existing) models.
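As a rough illustration of the approach, here’s a minimal sketch using scikit-learn (the inline training examples are placeholders; a real system would train on a large, human-labeled corpus and likely a far stronger model):

```python
# Minimal sketch of a harassment classifier with confidence-tiered actions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real system would use thousands of
# human-labeled tweets (1 = harassment, 0 = benign).
tweets = [
    "you are a wonderful person",
    "great game last night!",
    "kill yourself, nobody wants you here",
    "you deserve to be hurt",
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

AUTO_ACTION_THRESHOLD = 0.95  # assumed; only act when very confident

def triage(tweet: str) -> str:
    """Return a moderation action for a tweet based on model confidence."""
    p_harassment = model.predict_proba([tweet])[0][1]
    if p_harassment >= AUTO_ACTION_THRESHOLD:
        return "hide/delete"            # high confidence: act automatically
    elif p_harassment >= 0.5:
        return "flag for human review"  # ambiguous: route to a moderator
    return "allow"
```

The thresholds capture the tiered policy described above: act automatically only on the highest-confidence cases, and route the ambiguous middle to human review.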

4. Shadow Banning / Shadow Blocking

The easiest solution of them all: don’t show users when they’ve been blocked by a particular user. This is somewhat inspired by Reddit’s shadowban system, where users can keep posting comments or threads as usual, but the platform does not display them to other users. If harassers aren’t deterred by making a new account, then shadow blocking will at the very least delay, and ideally prevent, a harasser’s realizing they’ve been silenced, thereby helping to protect the victim.
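A minimal sketch of the rendering-side filter, assuming a simple in-memory model of blocks (names invented): the blocked account keeps posting normally and sees no error, but the blocker’s rendered timeline silently omits it.

```python
# Minimal sketch of shadow blocking as a rendering-side filter.
blocks = {
    # viewer -> set of authors the viewer has blocked
    "victim": {"harasser123"},
}

def visible_tweets(viewer, tweets):
    """Render a timeline for `viewer`, silently dropping blocked authors."""
    hidden = blocks.get(viewer, set())
    return [t for t in tweets if t["author"] not in hidden]

timeline = [
    {"author": "friend", "text": "lunch?"},
    {"author": "harasser123", "text": "..."},
]

# The harasser's tweet posts normally (no error, no notice), but the
# victim's rendered timeline omits it.
print(visible_tweets("victim", timeline))       # only the friend's tweet
print(visible_tweets("harasser123", timeline))  # sees everything, incl. their own
```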

Closing, and food for thought:

If trolls exist universally across online media platforms similar to Twitter, is Twitter disproportionately burdened to solve this complex, ever-evolving issue? As an addendum to that question: should Twitter forcibly create a harassment-free platform even absent market pressures pushing it to? Despite seemingly constant financial pressure to keep growing, can Twitter do so by marketing more effectively to new users, or by solving the issues of those who have left, or are leaving, the platform? I don’t think reducing harassment in any form is necessarily a partisan issue, but it cannot be emphasized enough that solutions to online harassment straddle the boundary of limiting speech and content on the web in general, which opens another huge can of worms; it’s something Twitter has taken a staunch stance against in the past, with very rare exception.
