Twitter Cyber Harassment, Deconstructed

Nicholas Walsh
14 min read · Feb 22, 2017


“Birds of a feather flock together”

Source: Naturalist UK

Overview

In this piece, we'll cover the pertinent background of (cyber) harassment on Twitter. Then, we'll ask the thought-provoking questions that define the hazy battlefield that is 'fixing Twitter'. Lastly, we'll discuss potential solutions, good and bad alike. The goal of this piece is not to convince you, the reader, of any one particular course of action, but rather to provide fundamental analysis that can facilitate conversation about how to solve the issue of Twitter cyber harassment going forward.

Conventional harassment is grounded in its sociological roots, but when given a digital engine for execution at scale, it becomes an entirely more complex beast. Cyber harassment is a huge issue, and it has been for quite some time. It spans countless online mediums and has been a hot topic of discussion among speech activists and harassment victims alike. Most agree that something needs to be done to lessen both the severity and quantity of online abuse, but the question is: how?

First, some due diligence:

Legal Precedent

In the US, acts of harassment that don’t qualify as Civil Rights violations are relegated to the jurisdiction of state governments and their respective regional laws. Examples of Civil Rights violations would be [1] “acts of discriminatory harassment that target individuals based on race, color, national origin, sex, disability, or religion”, or [2] “actions taken by an individual that create a hostile environment at school. That is, it is sufficiently serious that it interferes with or limits a student’s ability to participate in or benefit from the services, activities, or opportunities offered by a school.”

Each state can pass its own anti-bullying laws as it sees fit, with control over the relevant offenses, respective punishments, and methods of regulation. It's critical to note that these state-level anti-bullying laws are specifically directed towards school districts and institutions of learning, which, of course, don't begin to cover the entire spectrum of harassment that exists beyond adolescence.

(Pertinent anti-bullying laws must) “Outline the range of detrimental effects bullying has on students, including impacts on student learning, school safety, student engagement, and the school environment” — stopbullying.gov

As a result, cyber harassment is analyzed and prosecuted in accordance with traditional civic law. What this means is that for any action to have legal ramifications, it needs to constitute a violation of one of these existing laws.

There are many cases of harassers making blatant and heinous threats against people's lives, which are most definitely illegal. It's important to note, though, that a recent Supreme Court case asserted that online threats of violence are not illegal unless they meet a certain threshold of intent or plausibility. Still, in the eyes of the law, the line is blurred, and cases are very rarely pursued.

On the other hand, hacking (unauthorized access) into websites/phones and leaking data (illegal), or doxxing people [releasing personal information against a target’s will] (illegal), are both issues that are much more black and white*.
*note: there is a pending SCOTUS case involving doxxing.

*note 2: All legal discussion here is in the context of the United States; laws vary across countries and regions.

Notable Example:

Of late, there have been a few mainstream cases relating to mass cyber harassment on the platform. One of the most prominent is that of Leslie Jones (actress and cast member of Saturday Night Live), whose phone/iCloud account was hacked and whose nude pictures were leaked online.

However, this instance was unique in that she was not attacked solely by the typically faceless Twitter mob. Milo Yiannopoulos, a highly contentious political activist, leveraged his followers to brigade hate towards her in the form of racial slurs and race-based insults. This was deemed unacceptable by Twitter, resulting in his permanent ban from the platform.

Yiannopoulos' actions, from an objective standpoint, violated multiple provisions of Twitter's terms of service.

“Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin,…”

“Harassment: You may not incite or engage in the targeted abuse or harassment of others. Some of the factors that we may consider when evaluating abusive behavior include:…
if the reported account is inciting others to harass another account;”

Sadly, there are innumerable others whose cases will never achieve mainstream attention, due to the lack of an overtly aggressive public figure leading the charge. Even so, banning Yiannopoulos did not stop similarly minded people from harassing Jones on the platform.

So, where does this leave us?

Typically, the corporate entities whose platforms are used to carry out these attacks have been cooperative in helping prosecute or halt the particularly aggressive individuals who fall under these categories. However, as disconcerting as it may be, telling someone 'I wish you would kill yourself' or 'I wish you would die' isn't technically a threat in the eyes of the law. This fact by no means minimizes the emotional trauma inflicted on the victim, but it's critical to understand that, legally, there is no basis for the authorities to step in and prosecute. Crossing the threshold from ill-wishing to threat requires both intent and the plausibility of execution.

Unlike with traditional harassment, however, it is apparent that individuals who rely on an online social media presence are susceptible to, and are receiving, a disproportionately large amount of harassment; specifically, harassment that is far more severe in both quantity and intensity than most conventional alternatives (with the obvious exclusion of physical violence/abuse). Where the laws leave a gap, the role of content moderation (and potentially curation) falls upon the platform itself. In this case, it's up to Twitter to decide what content should and shouldn't be allowed between the bounds of innocuous and illegal.

Many argue that these laws leave a gap where current policies don't adequately deal with the specific outcomes of cyber harassment, but that's a larger issue entirely, one that needs to be solved at the public policy level. Hell, our laws for conventional harassment aren't even close to perfect. Pervasive sexual harassment is still a topic that government and society struggle to handle adequately, as there is a seemingly endless number of blurred lines in the chain of reporting, investigation, and prosecution. One thing is for sure, though: people are able to get away with much more online than they can in person. To this end, maybe the idea of new laws for online policing has some sort of premise. But (and this is a HUGE 'but') it requires an extremely thorough understanding of the landscape before passing blanket online social policies.

Each individual harasser may contribute only a relatively small piece of any one victim's overall abuse, but the internet allows the number of harassers a single victim faces to multiply exponentially compared to conventional harassment. Thus, social media platforms may find the situation much more manageable by addressing content visibility for victims, rather than trying to mitigate or police the content's creation at large. It's important to understand that the gray area of concern lies mainly in users who aren't ubiquitously harassing others and are otherwise abiding by the Terms of Service and social norms of the platform. The argument for victim-side alterations also holds from an optimization perspective: it's simply easier to give people tools to moderate the content they see (content that others may still want to view) than to redefine what content is acceptable for the platform, a severe stance that cannot be taken lightly.

“social media platforms may find the situation much more manageable by addressing content visibility for victims.”

Fast, easy onboarding and low personal accountability uphold freedom of speech, but they yield negative externalities. Twitter takes only 24 seconds to sign up for. The tradeoff of fast signup lends itself towards anonymity and high account turnover compared to other social media platforms. Users (content consumers, producers, and hybrids alike) buy into and accept this 'social contract' when choosing to use the platform, whether they realize it or not. The success of Twitter is without question still reliant on retaining users, but the billion-dollar question then becomes: at what point will restricted speech cause users to leave the platform? As a platform whose success was built on the mantra of free speech, the ethos of limited restraints is something that Twitter presumably wants to uphold.

Considering Twitter's seemingly boundless community and its current state of free speech, is adding the most conservative of restrictions, curbing only the most inflammatory and heinous offenders, necessarily a bad thing? Altering these pillars of the platform in the name of curbing harassment could yield many unintended negative externalities. However, I believe there are other, more specific measures that can be taken to curb harassment, both with and without fundamentally changing Twitter's platform.

It becomes difficult to argue that the imperative is on Twitter to rush into major alterations of its platform to prevent cases of harassment, especially when there may be potential solutions with higher specificity and less volatile solvency.

The flaws of commonly proposed solutions:

  1. Simply put, IP banning won't work in practice. Most people have dynamic IP addresses, meaning that with a router restart or a simple command-line script, they could be up and running again instantly. The IP-ban proposition operates under the assumption that harassers use Twitter exclusively in their own home, as the only Twitter user in said household, and on their own network. This idea quickly unravels when you loop in cellular-network Twitter users, or anybody who uses Twitter on a network they don't own or manage themselves (public WiFi, a coffee shop, a workplace, a school, etc.). It doesn't make much sense to ban an entire college, coffee shop, or company from being able to use Twitter because of a single abuser on the network (an abuser who may not even be a member/student/employee of that network)! Besides, there are always VPNs, and if the trolls are as willing as they seem to be, they'll find ways to get around any sort of IP ban. There are simply too many points of failure for this solution to adequately prevent harassers from using the platform post-ban without also restricting access for other users who aren't in any way associated with the incident. To go another step further down the rabbit hole, iCloud/Google Play account synchronization is simply too invasive to be used in the name of curbing harassment. Even if implemented, the app-account solution wouldn't account for people who use Twitter in a mobile browser (unless all device MAC addresses were passed through to Twitter, which would be absurdly invasive).
  2. Twitter account/tweet rating system: The idea of having benevolent users upvote each other's content sounds great in theory, but what prevents trolls/abusers from upvoting each other to brigade such a system, or downvoting content and people they know their victim would want to see? Or, more directly, simply downvoting their victims' content, thereby silencing and oppressing the victim further? Adamant harassers/trolls will game a system unless there is a way to uniquely identify them, prevent them from creating new accounts, or block content that fits a certain criterion. In its current iteration, Twitter allows blocking any user with two clicks. This, of course, becomes a scaling issue for users who have hordes of people harassing them, where individually navigating to block each and every one of them becomes unreasonable. Filtering for content that only passes a certain like/retweet threshold seems a tad controlling and presents too many other (mainly negative) externalities to justify it. Twitter already has likes/retweets as a mechanism for increasing the visibility of content that its communities see fit. Besides, one of Twitter's unique strengths is its raw feed, where Facebook and Instagram have both gone the way of algorithmic sorting, to much bemoaning from their respective communities. Despite how easy it is to block users, blocking doesn't necessarily scale well when victims are being harassed by hundreds or thousands of people; nor does it actually prevent the person from experiencing the harassment, it simply prevents that one harasser's single account from being used in the future.
  3. Connecting with a rep at Twitter to report harassment could potentially ameliorate the impacts of the harassment, but it isn't a true solution to the existence of trolls and harassment on the platform, for two reasons. First, it's not scalable, resource-wise or temporally: the number of representatives required to staff these calls would quickly balloon. Second, it doesn't in any way inhibit future harassment. Unless these representatives were assigned to moderate someone's @mentions, or to police individuals who continually harass, it wouldn't be effective (and the labor-intensiveness compounds the inefficiency even further). If the underlying technical challenges discussed earlier still exist, what exactly is a Twitter representative left to suggest or do for the user? Speaking with a real person obviously offers a baseline level of relief and understanding, but if it can in no way lead to some sort of actionable outcome, then it may not make sense for Twitter to do this as a company. In the end, it's just a bandaid fix that addresses the symptoms rather than the actual problem.

ACTIONABLE SOLUTIONS:

1. ‘Premium/confirmed/verified’ Twitter accounts

In a world where people can opt in to link more unique personal info during account signup, Twitter could prevent abusers from remaking accounts while still allowing first-time users a simple, fast onboarding process. Requiring something as simple as a single phone number for two-factor authentication on signup, where duplicate 'premium accounts' with the same phone number aren't allowed, would go a long way towards combating much of this harassment. As with harassment on most platforms, a small number of people are responsible for a disproportionately large amount of the harassment, in both quantity and severity. People who are being harassed could then choose to display only the tweets directed at them that come from 'premium users'.

This system was implemented to great success by software developer Valve in a video game (Counter-Strike: Global Offensive) that was riddled with cheaters who would consistently recreate accounts after their previous one was banned, thereby ruining the experience for players who didn't cheat. It worked phenomenally. The overall prevalence of cheaters hasn't gone down (much like how the number of people willing to harass others won't change), but anybody who takes the game seriously plays in the premium, opt-in queue, while the cheaters/trolls are relegated to the alternative queue, drastically improving the experience for those who want to engage critically and seriously on the platform. I'd venture to say that anybody who takes their interactions on Twitter 'seriously' would opt in to something as innocuous as this. Again, people who aren't being harassed don't need to enable the premium filter when viewing '@usermention' content on Twitter. A harassed individual enabling the filter wouldn't prevent others from tweeting at them, but would alter the content that the user sees, improving their experience.
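To make the mechanics concrete, here's a minimal sketch in Python of how phone-number deduplication plus a premium-only mention filter could work. Everything here (the AccountRegistry class, its fields, and the sample data) is hypothetical and purely illustrative, not Twitter's or Valve's actual implementation.

```python
# Hypothetical sketch of phone-verified "premium" accounts and a
# premium-only mention filter. All names and structures are illustrative.

class AccountRegistry:
    def __init__(self):
        self.verified_phones = set()   # phone numbers already tied to a premium account
        self.premium_users = set()     # user ids that completed phone verification

    def upgrade_to_premium(self, user_id: str, phone_number: str) -> bool:
        """Verify a phone number; reject duplicates so a banned harasser
        can't mint endless premium accounts from the same number."""
        if phone_number in self.verified_phones:
            return False
        self.verified_phones.add(phone_number)
        self.premium_users.add(user_id)
        return True

    def filter_mentions(self, mentions: list, premium_only: bool) -> list:
        """Optionally show a harassed user only the mentions sent by premium accounts."""
        if not premium_only:
            return mentions
        return [m for m in mentions if m["author_id"] in self.premium_users]


registry = AccountRegistry()
print(registry.upgrade_to_premium("alice", "+15551234567"))      # True
print(registry.upgrade_to_premium("alice_alt", "+15551234567"))  # False: duplicate phone number

mentions = [{"author_id": "alice", "text": "great talk!"},
            {"author_id": "anon123", "text": "some abusive reply"}]
print(registry.filter_mentions(mentions, premium_only=True))     # only alice's mention remains
```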

Twitter already has its verification system for 'celebrities', and I think some sort of similar system could be extended to combat harassment effectively as well.

2. Rallying the community to catalyze social change

Although this is the most idealistic solution of all, it addresses the fundamental cause of online harassment (people), and therefore has the largest hypothetical impact. Being able to identify online harassment is a critical first step towards minimizing it. By calling out cases of online harassment, users can help reporting systems remove toxic content more effectively. By being more cognizant of what they say online, individuals without malicious intentions could cut down on unintended harassment. Still, in the age of the "fake news" dilemma, where people are far and away unable to discern fact from fiction, I find expecting people to set personal feelings aside and call out members of their own camp to be idealistic at best, and destructively tribalistic at worst.

3. Machine learning harassment-buster (internal or external)

Machine learning models could be trained to help flag content that is most likely to be harassment, and if a model is confident enough, it could be given the power to delete that content. Training would include manually tagging posts that are confirmed to be harassment (or using the last tweet(s) directed at a person who consequently blocked the sender). There are, without question, tweets that require no context to be deemed unallowable for Twitter. Although Twitter may already have something like this working under the hood that I'm not familiar with, I believe there is much more work to be done in expanding these (potentially existing) models.
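As a rough illustration, here's what a first pass at such a model could look like using scikit-learn. The file labeled_tweets.csv and its column names are hypothetical placeholders; this is a sketch of the general technique, not whatever Twitter may or may not run internally.

```python
# Minimal sketch: train a text classifier on tweets labeled as harassment or not.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

data = pd.read_csv("labeled_tweets.csv")   # columns: "text", "is_harassment" (0/1)
X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["is_harassment"], test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word and bigram features
    LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Probability that a new tweet is harassment; anything above a conservative
# threshold could be flagged for review rather than auto-deleted.
probs = model.predict_proba(["example reply text"])[:, 1]
print(probs)
```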

There is always going to be a gray area, but I think individual harassing-tweet identification still has a lot of potential before moving on to more sensitive models. An ROC (Receiver Operating Characteristic) curve demonstrates this concept: we can attain a fairly high true-positive rate for identifying many harassing posts, with an acceptably low false-positive rate, by cutting off at a conservative threshold. Tweets flagged by the initial model can then be reviewed further, sent to another model for further evaluation, or simply left alone for now, as the priority is to remove the most heinous and obvious of offenders.
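Choosing that conservative cutoff can be read straight off the ROC curve, for example by taking the highest true-positive rate achievable while keeping the false-positive rate under a fixed budget. The sketch below simulates scores so it runs standalone; in practice y_true and y_scores would come from a held-out set scored by a model like the one above.

```python
# Sketch: pick a decision threshold from the ROC curve so the false-positive
# rate stays under a small budget (e.g. 1%). Scores are simulated here so the
# example is self-contained; real scores would come from the trained model.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                # 0 = benign, 1 = harassment
y_scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 1000), 0, 1)  # fake model scores

fpr, tpr, thresholds = roc_curve(y_true, y_scores)

max_fpr = 0.01                 # tolerate at most 1% false positives
ok = fpr <= max_fpr
best = np.argmax(tpr[ok])      # highest true-positive rate within that budget
threshold = thresholds[ok][best]
print(f"threshold={threshold:.3f}, tpr={tpr[ok][best]:.2%}, fpr={fpr[ok][best]:.2%}")
```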

The subjective aspect of this, however, lies in deciding what content should and shouldn't be allowed on Twitter. Should this be determined by the users through a community-policing modality? How is a multinodal community that is constantly at odds with itself supposed to come up with an equally agreed-upon standard? Ultimately, it falls under Twitter's jurisdiction to decide what its goal for the platform is.

4. Shadow Banning / Shadow Blocking

The easiest solution of them all: don't show users when they've been blocked by someone. This is somewhat inspired by Reddit's shadowban system, where shadowbanned users can keep posting comments or threads as usual, but the platform won't display them to other users. If bans simply prompt harassers to make a new account, then shadow blocking will at the very least delay, or ideally prevent, a harasser's discovery that they've been silenced, thereby helping to protect the victim.
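A minimal sketch of the idea, with an entirely hypothetical data model (not Reddit's or Twitter's actual code): a shadow-blocked reply is stored and stays visible to its author, but is silently dropped from the victim's view.

```python
# Sketch of shadow blocking: the blocked user's replies are accepted and remain
# visible to themselves, but are filtered out of the victim's view.
# All structures here are hypothetical.

blocks = {"victim": {"troll42"}}    # who has shadow-blocked whom

def visible_replies(viewer: str, thread_owner: str, replies: list) -> list:
    """Return the replies a given viewer should see.
    The author of a shadow-blocked reply always sees their own reply,
    so they get no signal that they've been blocked."""
    blocked = blocks.get(thread_owner, set())
    return [r for r in replies
            if r["author"] not in blocked or r["author"] == viewer]

replies = [{"author": "friend", "text": "nice thread"},
           {"author": "troll42", "text": "abusive reply"}]

print(visible_replies("victim", "victim", replies))   # troll42's reply is hidden
print(visible_replies("troll42", "victim", replies))  # troll42 still sees their own reply
```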

Closing, and food for thought:

If trolls exist universally across online media platforms similar to Twitter, is Twitter disproportionately burdened to solve this complex, ever-evolving issue? As an addendum to the prior question, should Twitter forcibly create a harassment-free platform even absent market pressures to do so? Despite being under seemingly constant financial pressure to keep growing, can Twitter successfully continue to grow by marketing more effectively to new users, or by solving the issues of those who have left, or are leaving, the platform? I don't think reducing harassment in any form is necessarily a partisan issue, but it cannot be emphasized enough that solutions to online harassment straddle the boundary of limiting speech and content on the web in general, which opens another huge can of worms, something Twitter has taken a staunch stance against in the past, with very rare exception.

I truly believe there is a future for a higher-quality Twitter, one without mob-like, riotous verbal assault. As a platform built on the idea of free speech, the approach should start from absolute freedom and work backwards towards specific restrictions, rather than issuing broad content bans/blocks and leaving us to sort out which content should be unbanned. Twitter currently has its hands pretty full with its financial situation, but given that it recently removed notifications when a user is added to a list, harassment reduction is clearly on its radar.

Is it working?

Maybe not yet, but I'd like to believe there's a light at the end of the tunnel.

If you liked this piece, I’d also recommend the following, which delves a little deeper into the dichotomy of free speech and tolerance.

Hey, I’m Nick Walsh.

I help empower student developers with Wolfram Research and Major League Hacking, and write technical content for MongoDB. I write to stay sane in my world of 1's & 0's.
