4 Things We Learned from Talking to People Who Face Harassment: Research behind Squadbox

Amy X Zhang
Published in Squadbox · 10 min read · Apr 17, 2018

Next week, at the ACM International Conference on Human Factors in Computing Systems (CHI 2018), the premier venue for human-computer interaction research, we will be presenting the research on online harassment that we conducted at MIT. This research led to the design and development of Squadbox. You can read the full research paper here.

We’re a team of researchers based out of MIT CSAIL working on tools to help people combat online harassment. But before we got around to building these tools, we first sought to better understand the struggles that online harassment recipients face, as none of us had faced online harassment before (luckily). We wanted to answer questions like: how do people get harassed online and how does it affect their lives? What strategies do people already use to combat harassment, and how effective are they?

To learn this, we conducted a series of interviews with 18 people who have faced online harassment. We focused on harassment that is posted to an individual as opposed to harassment about an individual that is posted elsewhere (such as revenge porn). Interviewees came from a wide array of roles, from activist to journalist to scientist, and have faced harassment on a variety of platforms, as you can see in the table taken from our research paper below. Some people were harassed by hordes of strangers, while others were targeted by an individual or small number of people, often people they knew.

All the people we interviewed for our research study. We talked to people who were harassed for their public-facing work, including journalists, scientists, activists, and YouTube personalities. We also talked to people who were harassed by someone with whom they had developed a personal or professional relationship, including ex-partners, former collaborators, and fans.

Here are some of the things we learned:

1) People have very different definitions for and experiences with online harassment.

While there were some similarities in how our interviewees described online harassment, we also discovered many different and unexpected cases over the course of our conversations.

In terms of message content, many subjects described harassment as a personal attack, sometimes about aspects of their identity, that was designed to be emotionally upsetting. Some of these would be clear to almost anyone, such as well-known slurs or swear words. However, some people felt that other forms of content that were not explicit attacks were still harassing. For instance, one interviewee spoke of receiving deeply personal or graphic confessions or disturbing solicitations sent to their work email, which they considered a boundary violation. Other interviewees spoke of attacks that were coded and may not be clear to people unfamiliar with their identity or their community.

Also, not all messages deemed harassing were harassing because of their content. For instance, interviewees described receiving messages that seemed innocuous but were designed to waste their time (sometimes called “sealioning”) or to maintain contact with the interviewee out of obsessive interest (stalking behavior). These were often exacerbated by the tactics described below.

One interviewee described how their ex-partner would specifically send more messages designed to disturb them when they had an important work meeting scheduled, an example of how time can be weaponized.

Other attributes that define harassment beyond message content include a high volume of messages from a large number of people (sometimes called a “dogpile”), directed by some central source, such as a blog post. Not every message would have harassing content on its own, but the intended effect on the receiver is to overwhelm them, much like a distributed denial of service (DDoS) attack.

Interviewees also described harassment as when individuals would make repeated, persistent attempts at contact despite being ignored or asked to stop. One interviewee highlighted the persistent nature of several of their harassers:

“If I ignore their message, they’ll send one every week thinking I’m eventually going to reply, or they will reply to every single one of my tweets”

Besides finding new avenues to harass someone, harassers can also persist through obfuscation. One interviewee had a harasser who continually pretended to be a new person with a different email handle and would draw the interviewee into conversation before revealing they were the same person as before. Similarly, another of our interviewees had a harasser who sent spoofed messages pretending to be their friends, so that the interviewee could no longer distinguish legitimate messages from friends from spoofed ones.

By reviewing some of our interviewees’ harassing emails, we noticed creative techniques designed specifically for certain mediums, such as email. For instance, some harassing emails had legitimate-sounding subject lines, with the harassment buried partway through or towards the end of the message, so the recipient would have to open and read it to find out. Other emails went in the other direction, adding harassing content even to the sender’s email address through the use of throwaway email accounts.

While we found a variety of experiences with and definitions of harassment, there are undoubtedly more. What this initial foray told us was that any one-size-fits-all solution to harassment would likely fail to serve many people. Instead, much harassment is contextual and requires an understanding of that context to recognize it. For example, a moderation team working in a different country, or on a short-term contract with a single set of guidelines, would be unlikely to cover these cases.

It also demonstrated how difficult it would be to develop a purely computational approach to detecting harassment that would cover all these cases, especially as many harassment detection models today are trained only on message content and do not take context into account.
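
To make this limitation concrete, here is a minimal sketch of a content-only detector. It is our own toy illustration, not anything from the paper or from a real detection system: it catches explicit attacks but is blind to the coded, sealioning, and persistence-based harassment our interviewees described.

```python
# Toy sketch (our illustration, not a real system): a content-only
# "detector" that flags messages by matching explicit attack words.
EXPLICIT_ATTACKS = {"idiot", "stupid", "kill yourself"}  # toy word list

def is_harassing(message: str) -> bool:
    """Flag a message based only on its text, ignoring all context."""
    text = message.lower()
    return any(term in text for term in EXPLICIT_ATTACKS)

# An explicit attack is caught...
print(is_harassing("You are an idiot."))                        # True

# ...but context-dependent harassment slips through: a coded attack,
# a "sealioning" question, or the hundredth message from a stalker
# all look innocuous when only the words are examined.
print(is_harassing("I see you're one of *those* people."))      # False
print(is_harassing("Just politely asking for evidence again.")) # False
print(is_harassing("Hi! It's me again. Why won't you answer?")) # False
```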

2) Encountering harassment in one’s day-to-day life is a disturbing experience for many recipients.

Almost all of our interviewees expressed frustration at their lack of agency to decide whether or when to confront harassing messages. One person said:

“Getting a [harassing] email when I’m looking for a message from my boss — it’s such a violation. It’s hard to prevent it from reaching me. Even if I wanted to avoid it I can’t. I can’t cut myself off from the internet — I have to do my job.”

One important point this person brought up is that for many people, stepping away from the internet or ignoring one’s messages is not a viable strategy. One might have to check messages or be available online for work, in which case having harassing messages mixed in with other messages can lead to trepidation and stress whenever they open their inbox. One interviewee talked about how they were affected by the mixture of messages in their inbox coupled with the notifications they received for incoming messages:

“The constant negativity really got to me…having it in your mind every 30 minutes or whenever there’s a new message…It just wears me down”

At the same time, interviewees needed to see and be notified about their regular mail, especially for an account they used for work.

The problem of having one’s day-to-day life disrupted by harassment is exacerbated when we consider volume and how it can be used to effectively shut down a person’s communication channels, as mentioned above. When they were inundated, many of our interviewees were left unable to respond to fans, their friends and community, or professional contacts:

“It’s made it harder to find the people who genuinely care, because it’s hard for me to motivate myself to look through comments or…go through my emails. Why should I look through hundreds of harassing comments to find a few good ones?”

These attacks on communication channels meant that some interviewees missed out on opportunities. For instance, one of the journalists we talked to missed an interview request amidst a flood of harassing tweets.

One consequence of DDoS-style harassment is that it is often bursty, following, for example, the publication of an article or video that gets a lot of attention. As a result, many of our interviewees alternated between spikes of heavy harassment volume and periods with little or no harassment. Several of our interviewees also mentioned that they could often predict when a wave of harassment was likely to come, such as when they were about to publish a piece of content, yet had little recourse to do anything about it.

What these experiences demonstrated to us was that how the harassing messages arrive and are experienced is important to consider. What users needed was a way to gain control back over their inboxes so that they could go about their lives on their own terms.

3) Platform tools to block, filter, and report are inadequate.

Nearly every subject we interviewed stated that they had blocked accounts on social media or email, though most felt this was not very effective due to the number of harassers and harassers’ ability to circumvent blocking. One interviewee said:

“Every time he makes a new email, he creates a new name as well…Not only new names, but he also pretended to be different people.”

For others, blocking was not an option because they needed to or wanted to gather information from their harassers’ messages. Some who were harassed by ex-partners needed to keep in contact for coordinating childcare or for avoiding each other due to a restraining order. Others scanned their harassing messages so that they could become aware of potential threats, such as doxing of their private information, so they could then alert friends or authorities.

Another reason subjects wanted to see messages from harassers was to get an understanding of dissenting opinions for work purposes. For instance, some journalists we talked to felt that it was important for their job to keep a pulse on reader reactions. Other subjects wanted the ability to track their harassment over time in response to their public activity, such as learning what kinds of content generated the most harassment, in order to tailor their own behavior or respond to the harassment publicly. Still others wanted to track and document the harassment so that they could report it. Finally, some interviewees wanted to do damage control among peers after defamation.

Word- or phrase-based filters were also inadequate. Some subjects expressed frustration at the difficulty of coming up with the right words to block and of keeping up with changes in language over time. One described filtering out messages despite the false positives, saying:

“I have suicide as a filtered word because I get more comments from people telling me to commit suicide than I get from people talking about suicide…If I have the energy to, I’ll go through my ‘held for review’ folder to look through those.”
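
To see why such filters force this trade-off, here is a minimal sketch of a keyword-based hold-for-review filter. It is our own toy illustration, not any platform’s actual implementation: a single filtered word cannot distinguish intent, so supportive messages get held alongside attacks while reworded attacks pass through.

```python
# Toy sketch of a naive hold-for-review filter (hypothetical, not any
# platform's real implementation). Keywords cannot distinguish intent.
FILTERED_WORDS = {"suicide"}

def hold_for_review(message: str) -> bool:
    """Hold any message containing a filtered word."""
    return any(word in message.lower() for word in FILTERED_WORDS)

for message in [
    "Go commit suicide.",                           # attack: held (true positive)
    "Your video on suicide prevention helped me.",  # supportive: held (false positive)
    "You're pathetic and everyone hates you.",      # attack: passes (false negative)
]:
    print(hold_for_review(message), "-", message)
```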

Finally, nearly every subject had reported harassers to platforms and strongly expressed dissatisfaction with both the process and the platforms’ opaque responses. A common frustration was that the burden of filing a report was too heavy, especially when there were many harassers. In the case of email, there is actually no process for reporting harassment at all on the major email platforms. Beyond platform tools, subjects also tried seeking help from law enforcement; the prevailing sentiment was that this was a time-consuming and fruitless experience.

In the absence of proper platform or legal involvement, users need tools to better manage their communications. However, tools that simply leave users to deal with their harassment on their own are insufficient, given how labor-intensive this task can be. Even in cases where platforms are responsive, harassment can be so contextual that many cases are not covered by generic platform policies.

4) People ask friends for help so they don’t have to face harassment alone or self-censor.

When we asked interviewees what they did in response to harassment that actually worked, some responded that they self-censored in order to give harassers less ammunition with which to harass them. Others made themselves harder to contact by closing Twitter direct messages from people they do not follow, not giving out their email, turning off notifications, or disabling comments. While this helped to mitigate harassment, it also made it more difficult to engage with people they did want to talk to — people they already know as well as non-harassing strangers, like collaborators, fans, clients, or sources:

“It’s impossible to contact me if you don’t have my contact info…I can’t be available to journalists as a source…I used to get all these awesome opportunities and I just can’t get them anymore.”

At the end of the day, these kinds of strategies, while providing relief to many people, are unsatisfactory because they mean that the harassers succeeded in silencing and isolating recipients of their harassment. When taken as a strategy for the internet as a whole, it means the loss of perspectives from vulnerable and targeted groups that often get harassed and consequently a failure to uphold principles of open dialogue and free speech online.

Instead, another mitigation strategy that helped, and that didn’t require recipients to silence themselves, was reaching out to friends or family for support and assistance. Several interviewees independently described ways that their friends would help. For instance, one person said that their best friend had their Twitter and Facebook passwords and would log into their accounts to clear out harassing messages and notifications and block users. Another interviewee similarly said their spouse would log in to their email account and delete harassing messages, and a different interviewee, an academic, had others in their department go through their emails when they were under attack. One person described how their significant other would go through the comments on their posts and read aloud only the positive and encouraging ones. Multiple subjects said they would forward potentially harassing emails unopened to friends to check and forward back.

Based on this research, we built Squadbox, a tool for people facing harassment to recruit their friends and other trusted individuals to moderate their inbox for them.

Many of the features of Squadbox are based on findings from our interviews. For instance, given people’s comments about the bursty nature of harassment, we make it easy to turn moderation on and off. And since many interviewees talked about wanting to glean some information from their harassment, we let them specify what happens to messages deemed harassing, including receiving them in their inbox with a special tag, having them summarized or partially redacted, or having them filed away.
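
As a rough sketch of what this kind of per-recipient routing can look like (hypothetical names and structure of our own, not Squadbox’s actual code), a moderated message might be dispatched according to the owner’s chosen preference:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch of routing for messages a moderator has reviewed;
# the names and structure are ours, not Squadbox's actual code.
class Disposition(Enum):
    TAG = auto()        # deliver to the inbox with a special tag
    SUMMARIZE = auto()  # deliver a moderator-written summary or redaction
    FILE_AWAY = auto()  # keep out of the inbox, archived for records

@dataclass
class Verdict:
    harassing: bool     # the moderator's decision
    summary: str = ""   # optional summary or redacted text

def route(message: str, verdict: Verdict, preference: Disposition) -> str:
    """Decide what (if anything) reaches the owner's inbox."""
    if not verdict.harassing:
        return message  # approved messages pass through unchanged
    if preference is Disposition.TAG:
        return "[flagged by your squad] " + message
    if preference is Disposition.SUMMARIZE:
        return verdict.summary or "(moderator summary unavailable)"
    return ""  # FILE_AWAY: nothing is delivered; the message is archived
```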

Screenshot of a page in Squadbox. Photo was taken by me, then edited and published by Refinery29.

You can learn more about Squadbox by trying out the tool itself, reading our blog post or MIT’s press release introducing it, or even contributing to the project by looking at the code on GitHub. You can also read the original research paper here.
