Designing Products for Abuse

Sarai Rosenberg
Dec 30, 2016

Anecdotes of harassment, abuse, and misuse abound, from Twitter to online dating. Products must be designed, from the very beginning, with potential abuse in mind. Fixing the problem starts from asking the right questions (and more importantly — hiring and supporting people who ask these questions). What questions should we ask in designing a product to reduce the potential for abuse?


Some forms of abuse:

  1. Abusing product terms: primarily fraud, e.g., cycling through unlimited free trials, or combining free-license features to replicate, in effect, features available only in paid licenses.
  2. Harassing other users: direct harassment through messages (or other forms of interaction), interfering with their use of the product, or breaching user privacy.

Some essential product features:

  1. A mechanism for reporting harassment or other forms of abuse.
  2. Policies and community standards.
  3. Metrics for tracking abuse.
  4. User privacy and security.
  5. Misuse cases.

There’s no replacement for extensive internal and user testing, and some of it should include deliberate attempts to abuse and misuse the product, ideally with testers who have experienced harassment. Fuzz testing can help detect coding errors or security vulnerabilities, but only mindful, creative human testing can help you design a product that handles abuse and misuse, guided by the questions below.


Blocking or muting users:

For example, on Twitter today, after Alex blocks Kelly, Kelly’s quote retweets of Alex no longer display the original tweet, but the web app still shows a link to Alex’s tweet. Replies still list Alex’s handle, and searches for Alex’s handle reveal all of Kelly’s harassing replies, retweets, and quote retweets. Other users see the full thread. This implementation of blocking does not effectively prevent dogpiling.
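
If blocking is meant to stop dogpiling, it has to be enforced on every read path. Here is a minimal sketch of that idea (hypothetical data shapes, not Twitter’s implementation):

```python
# Minimal sketch: a block enforced symmetrically on every read path,
# including quoted content, rather than only on the blocker's own timeline.

def is_blocked(viewer_id, author_id, blocks):
    """True if either user has blocked the other (symmetric enforcement)."""
    return (viewer_id, author_id) in blocks or (author_id, viewer_id) in blocks

def visible_posts(viewer_id, posts, blocks):
    """Filter posts for one viewer, also stripping quoted posts by blocked users.

    posts:  list of dicts like {"author_id": ..., "quotes": optional inner post}
    blocks: set of (blocker_id, blocked_id) pairs
    """
    results = []
    for post in posts:
        if is_blocked(viewer_id, post["author_id"], blocks):
            continue  # hide the post entirely: no body, no link
        if post.get("quotes") and is_blocked(viewer_id, post["quotes"]["author_id"], blocks):
            post = {**post, "quotes": None}  # drop the quoted post, not just its text
        results.append(post)
    return results
```

The same filter would need to run on timelines, replies, and search so that a block leaves no side channel for harassment to reach its target or an audience.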

Could blocking or muting be manipulated by users either individually or in a group to abuse your product in another way?

If your product allows moderators to enforce community guidelines, think carefully through the consequences of allowing users to block moderators — and also the consequences of preventing users from blocking moderators.

Blocking is a single tool. Blocking protects a single person from a single user. If a user is harassed by many people, does your system offer tools for them to protect themselves? If a single user harasses many others, does your system detect and respond?

Twitter allows a user to protect their tweets, so that only approved followers may see their tweets or @ them in tweets. This method of protection limits a user’s own speech in order to shield them from others.

In contrast, Facebook allows users to restrict the privacy of a post (public, friends, friends of friends, or custom), and independently allows users to restrict who may comment on a post (e.g., don’t allow non-friends to comment on a public post).

Instagram released tools to filter comments, to remove followers, and to turn off comments on their posts.
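
A rough sketch of the separation Facebook’s model illustrates, with visibility and comment permission as independent checks (simplified, hypothetical field names):

```python
# Hypothetical sketch: post visibility and comment permission are separate
# settings, checked independently (simplified here to "public" vs. "friends").

def can_view(viewer, post, are_friends):
    if post["visibility"] == "public":
        return True
    return are_friends(viewer, post["author"])   # "friends" visibility

def can_comment(viewer, post, are_friends):
    if not can_view(viewer, post, are_friends):
        return False  # you can never comment on what you cannot see
    if post["comment_policy"] == "anyone":
        return True
    return are_friends(viewer, post["author"])   # e.g., friends-only comments
```

Keeping the two checks independent is what lets a user speak publicly while still controlling who may respond.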


Reporting harassment or abuse:

How can your users report harassment or abuse? Where do those reports go, and how are they handled? If a user continues to have concerns, can they appeal harassment decisions and report concerns about how the incident was handled? Can a user abuse your reporting system?
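
One way to make those questions concrete is to model the report lifecycle explicitly, so every decision is recorded and appealable. A hedged sketch, with hypothetical names:

```python
# Hedged sketch of a report lifecycle: reports are queued, every decision
# is recorded, and the reporter can appeal a dismissal. The audit trail
# lets a second reviewer see how the incident was originally handled.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    ACTION_TAKEN = "action_taken"
    DISMISSED = "dismissed"
    APPEALED = "appealed"

@dataclass
class Report:
    reporter_id: str
    reported_id: str
    reason: str
    status: Status = Status.SUBMITTED
    history: list = field(default_factory=list)  # (reviewer, status) audit trail

    def decide(self, reviewer_id, action_taken):
        self.status = Status.ACTION_TAKEN if action_taken else Status.DISMISSED
        self.history.append((reviewer_id, self.status))

    def appeal(self):
        # The reporter can contest a dismissal and have it re-reviewed.
        if self.status is Status.DISMISSED:
            self.status = Status.APPEALED
```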

Reporting requires your users to take action. How often do users feel the need to report? Do your users consider your tools for blocking, muting, reporting, and otherwise protecting themselves to be effective and sufficient?

Twitter can lock accounts that post harassing tweets. It also improved its reporting system: users who file a report are notified that it is being reviewed, pointed to other protective tools they can use, and later informed of the decision. Nevertheless, Twitter users continue to find its response to harassment insufficient and its tools ineffective.

Reporting empowers your users to moderate their community, and nurtures a symbiotic relationship in which your users can help you combat spam, harassment, and abuse in general. However, if negative content overwhelms positive content, users will stop reporting and stop using your product. Again: reporting is a single tool, and is not sufficient alone.


Policies and community standards

Policies serve two purposes: setting community standards, and establishing an intent to hold users accountable for their behavior.

Does your product encourage positive user interaction? Does your product involve users in improving their community?

Riot Games took successful action to reduce harassment in League of Legends:

“The Riot team devised 24 in-game messages or tips, including some that encourage good behaviour — such as ‘Players perform better if you give them constructive feedback after a mistake’ — and some that discourage bad behaviour: ‘Teammates perform worse if you harass them after a mistake’.

The warning about harassment leading to poor performance reduced negative attitudes by 8.3%, verbal abuse by 6.2% and offensive language by 11% compared with controls. But the tip had a strong influence only when presented in red, a colour commonly associated with error avoidance in Western cultures. A positive message [in blue] about players’ cooperation reduced offensive language by 6.2%, and had smaller benefits in other categories.”
Can a video game company tame toxic behaviour?

Riot Games found further success in reducing bad behavior: when an automated message explained, within 5–10 minutes of a ban, exactly which behavior led to it and included the specific chat log, three-month recidivism rates dropped below 8%.

Free speech is a common goal of many products — but whose speech is free and unrestricted? If free speech is your goal, do your users feel their speech is more restricted by enforcement of your community standards, or by self-censoring to protect themselves from harassment?

I, for one, censor my speech online to protect myself from harassment, based on my experiences of being harassed. My use of Twitter, especially, is both limited and anonymous because I don’t feel safe to express myself freely on Twitter.


Metrics for tracking abuse:

Identify whether your users are human — perhaps using, or taking inspiration from, Google’s “no CAPTCHA reCAPTCHA”.
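
For reference, server-side verification against reCAPTCHA’s documented siteverify endpoint looks roughly like this (the secret key is a placeholder):

```python
# Server-side check for Google's "no CAPTCHA reCAPTCHA". The siteverify
# endpoint and its parameters are Google's documented API; the secret
# key below is a placeholder for your site's own key.

import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder

def is_probably_human(captcha_token, remote_ip=None):
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET,
              "response": captcha_token,
              "remoteip": remote_ip},
        timeout=5,
    )
    return resp.json().get("success", False)
```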

If your product involves a messaging system, consider tracking how many times a user sends an initial message to another user that does not receive a reply. How many times has that user been blocked after sending a message, without receiving a reply? How many other users have blocked or reported that user?
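
These signals are straightforward to compute from a message log. An illustrative sketch, with hypothetical field names:

```python
# Illustrative sketch: compute per-sender harassment signals from a log
# of first-contact messages. High rates of unanswered first messages, or
# blocks that follow a first message, are useful early-warning metrics.

from collections import Counter

def sender_signals(first_messages, blocks):
    """
    first_messages: iterable of (sender, recipient, got_reply) tuples
    blocks:         set of (blocker, blocked) pairs
    """
    unanswered = Counter()      # first messages that never got a reply
    blocked_after = Counter()   # first messages followed by a block
    for sender, recipient, got_reply in first_messages:
        if not got_reply:
            unanswered[sender] += 1
        if (recipient, sender) in blocks:
            blocked_after[sender] += 1
    return unanswered, blocked_after
```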

Does your system automate identification of harassment or abuse cases? How does your system handle false positives? How do false negatives impact your users?

Facebook has been repeatedly called out for censoring and banning marginalized users (potentially false positives?). Users are banned for “sexual” photos (e.g., breastfeeding) or slurs (e.g., users describing the harassment they experience), while the users committing the harassment continue their behavior (false negatives and recidivism).

Twitch released “AutoMod” to empower broadcasters to adjust the degree of filtering, ban links, restrict custom words or emotes, or appoint users to help moderate chat. Broadcasters have the option of requiring moderator approval before AutoMod-flagged messages are posted to the chat.
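
A simplified sketch of that pattern — not Twitch’s actual implementation — in which flagged messages are held for moderator approval instead of being posted immediately:

```python
# Simplified AutoMod-style filter (hypothetical, not Twitch's code):
# the broadcaster tunes the strictness, bans links, and blocks custom
# terms; flagged messages wait in a queue for moderator review.

import re

class ChannelFilter:
    def __init__(self, level=1, allow_links=True, banned_terms=()):
        self.level = level                       # 0 = off .. 4 = strictest
        self.allow_links = allow_links
        self.banned_terms = {t.lower() for t in banned_terms}
        self.held = []                           # awaiting moderator review

    def submit(self, user, text):
        lowered = text.lower()
        flagged = (
            (not self.allow_links and re.search(r"https?://", lowered))
            or any(term in lowered for term in self.banned_terms)
        )
        if self.level > 0 and flagged:
            self.held.append((user, text))       # moderator approves or rejects
            return "held"
        return "posted"
```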

Survey your users. Do they think that your system’s tools are adequate for protecting them from harassment and abuse? Do your users feel safe?

Account for survivorship bias in evaluating your survey results: assess whether you value the opinions of users who don’t respond, or users who have stopped using your product.

How are your users using your product? What features are they using, and how are they using them? If your licenses restrict use of your product, are users finding ways to use free licenses to achieve use cases that you intended for paid licenses?

Using multiple accounts to evade quantitative restrictions (and banning!) is common. Tracking email addresses can be ineffective with virtually unlimited free email services. Tracking IP addresses can be ineffective with users who move or use proxies — and can interfere with intended use of the product if a new user logs in from an IP address previously used by someone who abused your product.
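
One weak signal that is cheap to compute: normalizing email addresses before comparing accounts. A sketch, using Gmail’s documented handling of dots and plus-addressing; treat matches as a hint, never as proof.

```python
# Gmail ignores dots in the local part and anything after "+", so
# a.b+1@gmail.com and ab@gmail.com reach the same inbox. Normalizing
# catches only the laziest evasion; it is one weak signal among many.

def normalize_email(address):
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

assert normalize_email("A.B+trial2@gmail.com") == "ab@gmail.com"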

Can metrics address the issues described above? Yes and no. Metrics can’t solve every problem, but metrics can help you discover how your users are interacting with your product and how they are using your product. If you have both free and paid licenses, identifying how users find value in your free licenses can help you convert free licenses into paid licenses. If users are discovering unintended uses of your product, you may find a new revenue source, a new marketing source, or an avenue of abuse to restrict.


User privacy and security:

On kindness: do you need the answer to the question you are asking?

“‘How many children do you have?’ might sound like the simplest question, until it brings a grieving parent to their knees.”
Personal Histories, by Sara Wachter-Boettcher

Can a user gather sufficient information to dox another user? Do users have granular control over what information they share with other users? Do users have granular control over what access other users have to interact with them?

GitHub’s “consent and intent” changes require a user’s approval before another user may tag them. In contrast, Twitter does not require approval to add a user to a public list. A benefit of this design is that users can easily create lists; a drawback is that hate groups can curate lists for targeted harassment. I recently discovered that my Twitter account was on several antisemitic and anti-feminist lists. After blocking those users, the rate of harassment I experienced dropped dramatically.

Facebook similarly offers granular control. In particular, Facebook allows users to control who can tag them in posts, whether tagging is automatic or by approval. As a queer woman, I leverage Facebook permissions to restrict which users may post on my Timeline, and who can see what others post on my Timeline. I have not always felt safe to be openly gay, and these controls have been essential for my safety.

Will anyone know if a user gains privileged access to your system? If Bob gains access to Alex’s account, will Alex know? Will Bob be challenged before he can do significant damage to Alex’s account? Is two-factor authentication available to users, and do you encourage users to use it?

In 2016, users walk away from unlocked devices with active, logged-in accounts. Some products list all active and recent logins and offer the option to end specific sessions or log out of all of them at once. Many products require an additional password or two-factor authentication challenge before changing key account information or deleting an account — just in case someone sat down at your active session.
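
A minimal sketch of those session controls, with hypothetical storage rather than any particular product’s implementation:

```python
# Minimal sketch of session controls: list active sessions, end all but
# the current one, and require a fresh credential check before
# destructive account changes ("sudo mode").

import time

class SessionStore:
    def __init__(self):
        self.sessions = {}  # session_id -> {"user": ..., "ip": ..., "last_seen": ...}

    def list_active(self, user_id):
        return {sid: s for sid, s in self.sessions.items() if s["user"] == user_id}

    def end_all_except(self, user_id, keep_sid):
        # "Log out of all other sessions", in case someone sat down at a
        # forgotten, still-logged-in device.
        for sid in list(self.sessions):
            if self.sessions[sid]["user"] == user_id and sid != keep_sid:
                del self.sessions[sid]

def needs_reauth(last_credential_check, max_age_seconds=300):
    # Before deleting the account or changing the email address, demand a
    # fresh password or two-factor challenge if the last check is stale.
    return time.time() - last_credential_check > max_age_seconds
```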

When a user changes a password on Facebook, an alert recommends that the user click to end all other active sessions. If the user proceeds, the system checks if any changes have been made to the account recently (e.g., profile changes, email address) and offers the user an opportunity to undo recent activity.


Misuse cases

Misuse cases form the foundation upon which you can design your product for abuse.

What else?

