Crowdsourcing isn’t broken

Harper · Published in Backchannel · Feb 9, 2015 · 7 min read


In reading the fascinating article by @meharris about the DARPA challenge for reassembling shredded strips of documents, I was struck by the conclusion that the failure of the team that used crowdsourcing was somehow the fault of crowdsourcing itself. That seems like too easy a conclusion. From the description of their software, it sounds like they took an unsophisticated approach to a sophisticated problem: online community.

Crowdsourcing is not about work. Crowdsourcing is about community. Without a solid community, you get not-solid results from your crowdsourcing endeavor.

The article outlined a bunch of reasons why this could have happened: the teammates got a late start, they were attacked, they were frustrated and burned out. All of these things can create a terrible experience for the community, which in turn will not participate in a positive way.

I obviously have no idea what they went through or what their software actually did, but some of the Twitter conversation that surrounded this article reminded me that we dealt with these issues a BUNCH in the 2000s. We had great crowdsourced communities—in fact, threadless.com, the e-commerce site where I was CTO from 2005 to 2009, apparently invented crowdsourcing (eep!). We dealt with attacks constantly, both on the actual crowdsourcing aspect of Threadless (a tyrannosaurus with a monocle!) and directly on the Threadless community. Trolls, spam, community attacks, we had it all. It did not, however, disrupt the business, and it didn’t destroy our ability to source amazing content from our amazing community.

So. After thinking about this for a second or two and being pushed by @kragen, I decided to write up some of the things that we did back in the day to solve these problems.

YMMV ;)

Ok.

Techniques to defeat annoying people in your community

These tactics were in use circa 2009 at places such as Threadless, Yay Hooray, and Flickr. Many applications are likely still using some form of them today, though I am sure more effective approaches now exist.

The goal of many of these tactics is not to stop assholes from being assholes, just to slow them down and demotivate them from destroying your community.

Most of this is taken from my own experience or from conversations I have had with many people over the years about how to stop bad actors. Amazing people like @kellan, @blaine, @rabble, @skaw, @dylanr, and @hchamp actually invented these techniques. These folks have been leaders in this space for more than 10 years. I would pay attention to them and how they are building communities today.

A Couple of Things:

  • You must be deliberate about solving these problems
  • The best path is always to tell your “bad” actor that being bad is not acceptable behavior. A simple “don’t be a jerk” is surprisingly effective
  • You need to make sure that creating a new account is the most expensive option for the bad actor
  • You should track a lot of data about every user (IP, useragent goodies, etc.)
  • Automated or semi-automated enforcement has a high false-positive rate, and false positives are costly
  • Harsh actions, such as banning or freezing an account, should be your last resort
  • Enabling your community to help you is your best bet
  • And finally, pissed-off people have infinitely more energy

One thing @kellan has always said to me is, “You need to sap the bad actor’s will to live as a bad actor.” Everything should make their life annoying without pissing them off. Here’s how you go about it.

Techniques!

NIPSA

NIPSA stands for “not in public site/search areas.” This means you are purposefully limiting access to specific areas of your app. For instance, with Flickr you’d limit access to the homepage, Explore, search, and so on.

You should tell your users about this measure. Make them earn greater access. For instance, you could say, “all new users are on probation and only have access to X features until they do Y.” They might, for example, need to upload a certain amount of content, get manually approved, have a real telephone number (for SMS verification), connect a Facebook account that is some number of years old, or perform some other specific actions.

By making sure people know this step exists, they are less likely to create 1000 accounts to cause a problem.

The goal here is to stop new users from being found by other users. These steps also prevent other users from being adversely affected by new accounts/bad actors.
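
To make this concrete, here is a rough sketch of what a NIPSA check might look like. This is not how Flickr or Threadless actually implemented it; the surface names, user attributes, and thresholds below are all illustrative assumptions.

```python
# Hypothetical NIPSA gate. The surfaces, user attributes, and thresholds
# are illustrative assumptions, not a real implementation.
PUBLIC_SURFACES = {"homepage", "explore", "search"}

def shows_in(surface: str, user) -> bool:
    """Decide whether this user's content may appear in a public surface."""
    if surface not in PUBLIC_SURFACES:
        return True                      # direct/private views are unaffected
    if getattr(user, "flagged_as_annoying", False):
        return False                     # flagged accounts never surface publicly
    # New accounts stay on probation until they earn trust somehow.
    return (
        getattr(user, "uploads", 0) >= 5
        or getattr(user, "manually_approved", False)
        or getattr(user, "phone_verified", False)
    )
```

Wire a check like this into whatever builds your search, explore, and homepage results, and a probationary or flagged account simply stops showing up there.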

Flagging

Allowing the community to moderate bad actors in some way is really helpful. It lets you jump-start the process of deciding that a bad actor is, in fact, bad.

We see these words all over the internet:

  • Mark as spam
  • Flag this content

All these flags need to do is start the process of marking someone as annoying. Once a user is marked as such, you can do all sorts of other things.
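
A minimal version of that pipeline can be as simple as counting distinct reporters and flipping a flag once a threshold is crossed. The in-memory storage, the threshold of three reporters, and the names here are assumptions for illustration; a real site would persist this.

```python
# Minimal flagging sketch. The threshold and in-memory storage are
# illustrative assumptions, not a real implementation.
from collections import defaultdict

FLAG_THRESHOLD = 3                    # distinct reporters required

flags = defaultdict(set)              # author_id -> set of reporter ids
annoying_users = set()                # everything downstream keys off this

def flag(author_id: int, reporter_id: int) -> None:
    """Record one community flag; mark the author once enough people agree."""
    flags[author_id].add(reporter_id)
    if len(flags[author_id]) >= FLAG_THRESHOLD:
        annoying_users.add(author_id)
```

Requiring several distinct reporters keeps one grudge-holder from getting somebody marked on their own.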

Service Slowdown

I’m not sure how effective this technique was, but I always liked it. Basically you make sure your app performs poorly once a person is marked as annoying. Let’s say you know who a troll is. You flag the account as annoying, and from then on the site performs worse than before.

For a news or discussion site, for example, new content would take longer to show up. You could also slow down the actual HTTP connection, so the entire site is slower. “Saving” things could take a lot longer and sometimes error out. You could do the same with logging in and logging out.
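
One way to sketch this is a WSGI-style middleware that injects a small, randomized delay for flagged accounts. The delay range, the environ key, and the flag lookup are assumptions; plug in your own auth and flag store.

```python
# Slowdown sketch. Delay range, environ key, and flag lookup are assumptions.
import random
import time

annoying_users = set()                 # stand-in for your real flag store

class SlowdownMiddleware:
    """Adds a randomized delay to every request from a flagged account."""

    def __init__(self, app, min_delay=0.5, max_delay=3.0):
        self.app = app
        self.min_delay = min_delay
        self.max_delay = max_delay

    def __call__(self, environ, start_response):
        user_id = environ.get("app.user_id")   # assumed to be set by your auth layer
        if user_id in annoying_users:
            # Vary the delay so the slowness feels like "the site", not a rule.
            time.sleep(random.uniform(self.min_delay, self.max_delay))
        return self.app(environ, start_response)
```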

One warning here is that this technique is “clever,” and being clever never works. Bad actors may notice this is happening only to their one account and get mad. An angry person has way more energy.

Hide the bad actor from the population

I always had a lot of fun with this one. It can, however, just make the bad actors mad and give them something to defeat.

The basic idea is that once a person is marked as annoying, the general community no longer sees their posts or other contributions. The bad actor continues to post and thinks that others are seeing their posts, when in fact they are hidden from everyone else.

Theoretically, the bad actor is trolling away and having a grand old time. The community doesn’t notice because the content that the bad actor is authoring is only visible to its creator.
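
In code, this is little more than a filter applied everywhere content is listed, with an exception when the viewer is the author. The Post shape and flag store below are illustrative, not any site's actual implementation.

```python
# Shadow-hide sketch. The Post dataclass and flag store are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    author_id: int
    body: str

annoying_users = set()                 # stand-in for your real flag store

def visible_posts(posts, viewer_id):
    """A flagged author still sees their own posts; nobody else does."""
    return [
        p for p in posts
        if p.author_id not in annoying_users or p.author_id == viewer_id
    ]
```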

WARNING: A bad actor can get around it pretty easily, simply by creating a new account.

Rate limiting

Simply rate-limit what a person is able to do. Slowing down communication between a bad actor and a site can solve some basic problems with automated and repeated attacks. You could have a standard rate limit, and then once a person is marked as annoying, drop their limit even further.
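
A sliding-window counter per user is enough to sketch the idea; the window length and the two tiers of limits are assumptions you would tune for your own traffic.

```python
# Rate-limit sketch with a tighter tier for flagged accounts.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
NORMAL_LIMIT = 60        # requests per window for everyone else
ANNOYING_LIMIT = 5       # much tighter once a user is flagged

annoying_users = set()
recent_requests = defaultdict(deque)   # user_id -> timestamps in the window

def allow_request(user_id: int) -> bool:
    now = time.monotonic()
    window = recent_requests[user_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()               # drop timestamps that have aged out
    limit = ANNOYING_LIMIT if user_id in annoying_users else NORMAL_LIMIT
    if len(window) >= limit:
        return False                   # reject (or just delay) the request
    window.append(now)
    return True
```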

Contagion

You want to be able to easily and quickly understand the footprint of a bad actor. Specifically, you want to know how many other accounts they have and which accounts they are using to attack your application.

This could be accomplished with a tweak as simple as dropping a cookie once a user is marked as annoying; if any other account logs in from a browser with that cookie, it also gets marked as annoying.

This allows the contagion of annoyance to be spread among all the accounts owned by the original annoyer. Sure, people could clear their cookies, but people are careless.
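
Here is roughly what that cookie trick could look like with framework-style request/response objects (Flask-ish). The cookie name, lifetime, and flag store are assumptions, not the original implementation.

```python
# Contagion sketch: taint the browser, then infect any account that logs
# in from it. Cookie name, lifetime, and flag store are assumptions.
annoying_users = set()
TAINT_COOKIE = "prefs_v2"              # deliberately boring-looking name

def mark_annoying(response, user_id: int) -> None:
    """When a user gets flagged, also taint the browser they are using."""
    annoying_users.add(user_id)
    response.set_cookie(TAINT_COOKIE, "1", max_age=60 * 60 * 24 * 365)

def on_login(request, user_id: int) -> None:
    """Any account logging in from a tainted browser inherits the flag."""
    if request.cookies.get(TAINT_COOKIE) == "1":
        annoying_users.add(user_id)
```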

A mix of these

Obviously the goal here is to mix and match these techniques with other things you have in your arsenal.

With the proper mix, you should be able to handle most, if not all, simple troll patterns. You may not be able to defeat a sophisticated attack, but at least the unsophisticated attacks will slow or stop.

Finally

Not all of these approaches will work for you. In fact, none of them may work. However, many of these techniques and derivatives of them are helping communities all over the internet be better, safer and more productive places.

I would love to hear how you are solving these problems and if these ideas are helpful.

Follow me on Twitter: @harper or email me: harper@nata2.org
