Designers Can Reduce Online Harassment

Help communities help themselves.

In August 2015, Eleanor Saitta, Security Architect and Co-Founder of the Trike Project, talked about building systems to reduce online harassment at The Conference in Malmö, Sweden. This is the talk in writing; thank you, Opentranscripts.org, for the transcript.

So, what I’m talking about here is not what we need to do culturally or politically; it’s not about the roots of online harassment. It’s about the design tools we can use to shape the environments people interact in, so as to reduce the impact.

This is mitigation. It’s not going to solve anything; that takes a larger cultural change, and that’s a bigger question. But having technical tools that actually help people’s lives makes the tools we’re building more useful. Freedom from abuse is a core part of efficacy for a platform: if you can’t use a platform without getting massively attacked when you go on it, then it’s not actually very useful for you, now is it?

So if you are a design team, you have a responsibility, in the same way that you have a responsibility to make sure that you’re not shipping broken features. You can think of it as a bug report that says, “50% of the time when I click on this button, the database connection breaks. 50% of the time when I click on this button, there’s a massive outpouring of hate.” This is a bug that needs to be fixed, and it can be fixed at the design level, or at least mitigated.

When we talk about reducing the impact of online harassment, a lot of what we’re talking about is helping participants help themselves. If you’ve got a limited forum and you can afford heavily engaged moderators, that can work; MetaFilter is a great example here. It’s an amazingly well-moderated community and always has been. However, that has taken probably hundreds of thousands of professional hours over the life of the site, from some incredibly skilled, empathic moderators. That doesn’t scale, and it isn’t really appropriate for something like Twitter; you can’t have that same kind of moderation there. That’s fundamentally a different structure. So instead we look at tools that give people agency and control over their own environment.

The first of these is things like access control lists: being able to say who gets access to my content, and to do so in a flexible way. There’s a trade-off here between complexity and capability. But, for instance, on Facebook you can set up private lists of people and control “okay, these posts go here, and these posts go to everyone, and these posts are visible only to this really small set.” The problem is that they keep pushing people to be more and more public, and we’ll get back to this a little bit later. But if you’re going to build tools like access control lists, if you’re going to give people these promises that they get to control these sorts of things, you need to do so in a way that’s actually usable by them, and you need to respect people’s decisions there.
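
As a rough illustration of the usability point, here is a minimal sketch of per-post audience checks in Python. The Post and Viewer types, the field names, and the named-list mechanism are all hypothetical assumptions for illustration, not any platform’s actual data model.

```python
# Minimal sketch of per-post audience control. Real ACL systems need group
# management, caching, and careful defaults; this only shows the check itself.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    audience: str = "public"                     # "public", "followers", or "lists"
    allowed_lists: set = field(default_factory=set)   # named lists the author chose

@dataclass
class Viewer:
    username: str
    follows: set = field(default_factory=set)
    member_of_lists: set = field(default_factory=set)  # lists the author put them on

def can_view(post: Post, viewer: Viewer) -> bool:
    """Treat the author's audience choice as authoritative, not as a default."""
    if viewer.username == post.author or post.audience == "public":
        return True
    if post.audience == "followers":
        return post.author in viewer.follows
    # Named-list audience: the viewer must be on one of the lists the post allows.
    return bool(post.allowed_lists & viewer.member_of_lists)
```

The design choice worth noticing is that the check never quietly widens the audience the author picked, which is exactly the promise the platform needs to keep.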

Access control lists are kind of the bluntest tool, though. There’s a lot of more subtle stuff we can do, for instance giving people flexible filters: say, on Twitter, “I don’t want to see @-replies from accounts that are less than a month old.” Having that button suddenly means that all of the sockpuppet accounts that get created quickly are much less useful, and you’re giving people that kind of flexible filtering.
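
To give a concrete sense of how cheap that filter is to build, here is a minimal sketch of the “hide @-replies from very new accounts” idea, assuming hypothetical Account and Reply types and timezone-aware UTC timestamps; it is not Twitter’s API.

```python
# Sketch of a user-controlled "minimum account age" reply filter.
# A real platform would apply this in the notification/timeline pipeline.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Account:
    handle: str
    created_at: datetime   # assumed timezone-aware, in UTC

@dataclass
class Reply:
    sender: Account
    text: str

def visible_replies(replies, min_account_age_days: int = 30):
    """Drop replies from accounts newer than the user's chosen threshold."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_account_age_days)
    return [r for r in replies if r.sender.created_at <= cutoff]
```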

The same goes for any of these kinds of tools. The Bayesian filters we train to catch spam can be trained on anything: if you give them examples of a type of message, they will learn to recognize that type of message over time. You can do this on abusive messages just as well. The filters aren’t as well tuned for that; there are a lot of hacks that have been added to Bayesian filters over the years to make them better and more effective specifically at spam. But there’s no reason why we can’t build that same sort of smart tool, one that says, “Well, I get a lot of @-replies, especially as a public figure. Show me the ones that look relevant, show me the ones that don’t look like spam, but let me train it.” And again, any time you can give the user agency instead of keeping it for the site, you’re going to end up with empowered users instead of a site that’s controlling what they see.
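
As a toy example of that kind of user-trained filter, here is a sketch using scikit-learn’s naive Bayes text classifier. The library choice, the labels, and the example messages are my assumptions, not something from the talk; a real filter would need far more labeled data and per-user models.

```python
# Toy user-trained filter: the user labels examples as "keep" or "hide",
# and a naive Bayes classifier learns to sort new @-replies accordingly.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled = [
    ("thanks for the great write-up", "keep"),
    ("loved your talk in Malmö", "keep"),
    ("nobody wants to hear from you", "hide"),
    ("you should just quit the internet", "hide"),
]
texts, labels = zip(*labeled)

vectorizer = CountVectorizer()
classifier = MultinomialNB()
classifier.fit(vectorizer.fit_transform(texts), labels)

# Sort incoming replies with the model the user trained themselves.
incoming = ["great post, thank you!", "quit while you still can"]
for message, verdict in zip(incoming, classifier.predict(vectorizer.transform(incoming))):
    print(f"{verdict}: {message}")
```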

Let people monitor abusers.

For instance, on Facebook, when you block someone, that’s it: you can’t see anything they post. If you have someone who’s been an ongoing stalker, who has been abusive in the past, who has been making threats, maybe you as a user actually need to be able to see what they’re posting, even if you don’t want to see it and you don’t want to interact with them. That’s a very important safety consideration for people in a lot of situations. So give people that kind of trade-off and maintain user agency.
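
A minimal sketch of that trade-off, using hypothetical relationship states rather than Facebook’s actual model: a hard block alongside a “monitor” state that cuts off interaction but still lets the user deliberately check a known abuser’s public posts.

```python
# Illustrative relationship states for blocking vs. monitoring an abuser.

from enum import Enum

class Relationship(Enum):
    NONE = "none"
    BLOCK = "block"      # no contact in either direction, their posts hidden from you
    MONITOR = "monitor"  # no contact allowed, but you can still review their public posts

def can_interact_with_me(their_status: Relationship) -> bool:
    """Both BLOCK and MONITOR cut off interaction from the other party."""
    return their_status == Relationship.NONE

def i_can_review_their_posts(their_status: Relationship) -> bool:
    # Safety consideration: a stalking victim may need to keep watching a known
    # abuser's public posts (e.g., for threats) without re-enabling contact.
    return their_status in (Relationship.NONE, Relationship.MONITOR)
```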

Privacy tools like Tor are another really important thing here. If you are being seriously stalked, one of the recommendations is to use Tor: maintain your anonymity, don’t give people your home IP address, because that can often be linked to a physical address, especially if you’ve had to leave your apartment and go somewhere else; you don’t want to leak the new location out again. This means dealing with anonymous accounts, and this means shifting the landscape of abuse again. Anonymity doesn’t generate abuse in and of itself. But if you want to discourage people from using quick throwaway identities (which is one of the dynamics that does cause problems), what you can do is make account creation more heavyweight.

Anonymity doesn’t generate abuse in and of itself.

So instead of “I’m going to generate a hundred different accounts,” each of which has to get blocked serially (someone has to block a hundred accounts), if you can restrict that abuser to a single anonymous account, or two or three, that has a massive impact on how easily the victim can use the tools they’ve been given. Tor isn’t the problem, anonymity isn’t the problem; lightweight, throwaway accounts can be a problem.
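
One possible shape for “heavier” account creation, sketched below: throttle signups per cost key, where the key is an invite code or verified contact method rather than an IP address, so Tor users aren’t penalized for protecting their location. The class, limits, and key choice are assumptions for illustration, not a recommendation from the talk.

```python
# Sketch: cap how many accounts a single signup "cost key" can create per window,
# making throwaway identities more expensive without banning anonymity.

import time
from collections import defaultdict

class SignupThrottle:
    def __init__(self, max_accounts: int = 3, window_seconds: int = 7 * 24 * 3600):
        self.max_accounts = max_accounts
        self.window = window_seconds
        self.history = defaultdict(list)   # signup_key -> recent creation timestamps

    def allow(self, signup_key: str) -> bool:
        """signup_key might be an invite code or a verified contact method,
        deliberately not an IP address, so anonymity tools keep working."""
        now = time.time()
        recent = [t for t in self.history[signup_key] if now - t < self.window]
        self.history[signup_key] = recent
        if len(recent) >= self.max_accounts:
            return False
        recent.append(now)
        return True
```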

Do less more often

One of the other things you want to look at is doing less but doing it more often. It’s one thing when you’re in a forum that has a publisher and a specific moderation policy and that kind of thing. If you have a general-purpose social media site, you need to tread lightly around abuse. There is a real public value in keeping that content as open as possible, and you’re going to have very different communities there, some of which may have political opinions significantly different from the development team’s.

If you default to simply taking down content and blocking accounts, you have to have a very high bar for how bad it has to get before you can use those tools. Instead, say you’ve got different access control lists: you’ve got an “only people who subscribe to you specifically will see this” mode, and when you get a complaint on a piece of content, you just drop it in that bucket. Or you have to click through to see it, or any of these kinds of things where you haven’t deleted the content entirely. That then means you can have a much lower bar for taking that action, because you’re not completely preventing people from seeing it; you’re just shaping the conversation.
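
Here is a sketch of what graduated responses might look like in code. The visibility states and complaint thresholds are illustrative assumptions of mine, not anything specified in the talk.

```python
# Graduated moderation: instead of delete-or-nothing, complaints move content
# into progressively softer visibility states, so intervention can happen earlier.

from enum import Enum

class Visibility(Enum):
    PUBLIC = 1            # everyone sees it
    SUBSCRIBERS_ONLY = 2  # only people who follow the author see it
    CLICK_THROUGH = 3     # hidden behind an interstitial, still readable
    REMOVED = 4           # last resort, ideally still reversible

def respond_to_complaints(current: Visibility, complaint_count: int) -> Visibility:
    """Lower-impact actions get lower thresholds; full removal stays rare."""
    if complaint_count >= 50:
        return Visibility.REMOVED
    if complaint_count >= 10:
        return Visibility.CLICK_THROUGH
    if complaint_count >= 3:
        return Visibility.SUBSCRIBERS_ONLY
    return current
```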

One of the things which is really important here is that as soon as you have tools for preventing abuse and preventing harassment, they’re also going to be used as weapons against the victims. So if you have a “report this account for spam” button, guess what: you’re going to get 10,000 spam reports against a totally non-abusive account because somebody wants to knock somebody offline. So it’s important to have transparent and clear processes, and it’s important to understand and carefully design those tools to minimize the harm that they can do. For instance, Instagram is often very aggressive about blocking accounts and taking down content if they think there’s nudity in it, because they’ve decided that’s not acceptable for adults. (It’s their Internet, we just live there.) One of the problems is that when they do that, if they decide that your account is bad, all the data’s gone, and getting it restored is non-trivial even when they can do it.

As soon as you have tools for preventing abuse and preventing harassment, they’re also going to be used as weapons against the victims.

So instead you want to understand, “Hey, we’re going to get false reports, or we’re going to get questionable reports. Maybe let’s not irrevocably delete massive amounts of user data right away.” It’s about minimizing the damage you’re doing so that you can act more easily. That then means you can make the victims jump through fewer hoops; you don’t have to have a really heavyweight process.
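
One way to express “don’t irrevocably delete right away” is a soft takedown with a grace period, sketched below with hypothetical field names: hidden content stays recoverable until an appeal window has passed.

```python
# Soft takedown sketch: content is hidden immediately but only purged after a
# grace period, so a weaponized or mistaken report doesn't destroy user data.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ModeratedItem:
    content: str
    hidden_at: Optional[datetime] = None

    def take_down(self) -> None:
        self.hidden_at = datetime.now(timezone.utc)   # hide, don't erase

    def restore(self) -> None:
        self.hidden_at = None                         # cheap to undo on appeal

    def purge_if_expired(self, grace_days: int = 30) -> bool:
        """Only actually delete once the appeal window has passed."""
        if self.hidden_at and datetime.now(timezone.utc) - self.hidden_at > timedelta(days=grace_days):
            self.content = ""
            return True
        return False
```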

Help communities help themselves

The next thing is to help communities help themselves. Abuse occurs in communities. There may be one victim, like Anita, who’s the person getting everything thrown at her, but she exists in a community. She has friends. People in communities that are receiving abuse can take different roles. For instance, you may have someone who has the time to compile, “Okay, I’m going to spend every morning going through and checking all the new accounts,” and then distribute a collective block list. There are tools you can build that let people work together. And it is very different for people to be working together and building those tools for their community, rather than the site doing that for everyone. These have very different implications; they have very different legal implications, among other things. But building tools that let communities help each other is an incredibly important tactic in effectively resisting abuse at scale.

Letting communities do their own moderation also reduces the moderator load and the moderator cost for the site, and this kind of engineering trade-off matters for scalability. When you design these use cases for abuse prevention, you have to treat them as seriously as all of the other use cases that you build. For instance, Twitter now has something which was sort of a vague gesture at allowing people to have more control of their block lists: you can manually export a CSV file of blocked users, and then manually import it again. This is not very useful. They also have lists that you can just click on and subscribe to, but those are for getting more content. Having a block list that you could just click on and subscribe to and say, “Yes, I trust this user, I want to delegate that”? Well, that would be too easy. But that’s the kind of ease you need if you’re going to make these kinds of tools for communities actually functional.
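
Here is a rough sketch of the subscribable block list the speaker wishes existed: one person curates, others subscribe and inherit updates. The classes and names are hypothetical; this is not Twitter’s actual feature set.

```python
# Sketch of a community-curated, subscribable block list with delegated trust.

class SharedBlockList:
    def __init__(self, curator: str):
        self.curator = curator
        self.blocked = set()        # handles the curator has blocked
        self.subscribers = set()    # users who delegated trust to this list

    def add(self, handle: str) -> None:
        self.blocked.add(handle)    # subscribers pick this up automatically

class UserBlocks:
    def __init__(self):
        self.personal = set()       # blocks the user made themselves
        self.subscriptions = []     # SharedBlockList objects the user trusts

    def subscribe(self, shared: SharedBlockList) -> None:
        self.subscriptions.append(shared)
        shared.subscribers.add(self)

    def is_blocked(self, handle: str) -> bool:
        # Personal blocks plus every list this user has chosen to trust.
        return handle in self.personal or any(handle in s.blocked for s in self.subscriptions)
```

The point of the delegation is that one community member’s morning of checking new accounts protects everyone who subscribes, without the site itself having to moderate on their behalf.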

One of the places where abuse occurs more specifically is when you get what’s called context collapse. When you have a gaming community that has one set of norms and a feminist community that has another set of norms, one of the places where abuse will be generated is where they both think they’re in their own living room and suddenly there are all these weirdos in it, and they have very different cultures. Context collapse is one of the things that makes these systems very useful to us, but it’s also one of the things that drives abuse. So letting communities mark their borders, letting communities enforce their borders, building tools that let there be a “hey, you know, here we play by these rules” kind of standard setup will make a big difference for the level of abuse that’s generated.

This isn’t quite enforcing a filter bubble. People can still choose to go and walk into somebody else’s living room, but there’s a marker that says, “Hey, I walked through a door; I’m somewhere else now,” and that kind of structure makes a big difference for shaping community.

Stop getting rich off abuse

Lastly, stop getting rich off abuse. Abuse leads to engagement, because real harm is happening and people have to spend more time on the site or in the app to deal with and fight off that abuse, which translates to ad views and revenue. This is one of the reasons why a lot of sites are so bad at designing against abuse: they’re making money off of it. When you design for engagement (for instance, the more time you spend on Facebook, the more content it shows you), you are specifically rewarding engagement, but you can also be rewarding abuse. So the algorithms you use to shape participation also have a real impact on the abuse that gets generated.

Over-collection of data just helps stalking and doxxing. If you collect a bunch of information from your users, you’re also creating a big target to be hacked, and then that data gets used to harm your users. So don’t gather information that you don’t need to gather. Ad networks are being actively used to spread malware, sometimes targeted malware. If you can run without ads, run without ads, because as far as real security risks go, ads are one of the most evil things on the Internet right now.

Don’t gather information that you don’t need to gather.

And lastly, kill your VCs. All of the VC funding and the structures that it enforces around continual rapid growth are what drive people toward these kinds of evil tactics, or toward ignoring or papering over the damage that happens. If you can build an investment model that doesn’t require you to abusively drive growth, then you’re going to have less abuse on your platform, too. These things are kind of inseparable.

Hopefully some of these were useful. Thank you very much.

Further Reference

The full talk is also available as video in the video archive among 250+ other talks from 5 years of The Conference.

And once again: the transcript was originally published at opentranscripts.org. Thank you ❤
