The Hard Lessons of Blue Checkmarks

Blue checkmarks are a problem.

On social networks, they’re meant to indicate some bare factual statement, like “we [Twitter, Google, YouTube, Facebook] know who this person is and can confirm that they’re famous.” But it was inevitable that they would end up being interpreted as “this is a special person you should listen to;” that they would be treated as a mark of distinction, not just a mark of some fact. This interpretation didn’t just affect the public; it even affected the companies themselves, as different product teams started using checkmarks to enable features like different notification experiences or abuse pipelines. That’s exactly what you would do if a checkmark was, in fact, a sign of a known good actor; it’s exactly what you wouldn’t do if a checkmark could just as easily mean “this person is a famous Nazi.”

When I was technical lead of Google’s Social efforts (a strangely hybrid engineering/product/everything else role which I held from 2011 to 2015), we ran into all of the same problems on our own system — although not nearly at the same scale as some other companies recently have. So today I thought I’d share the fruits of some painful experience: why verification was there in the first place, the ways it broke, and what I would do if I had to do it all over again today.

Why verify?

User verification originally showed up to solve the problem of impersonation. On a system where anyone can create an account, how do I know if I’m following the real Taylor Swift, or Jack Dorsey, or Donald Trump, or if I’m following a parody account? In the early days of these systems, “parody” was often a euphemism: people would routinely create accounts trying to pass themselves off as a real person, to do anything from stealing eyeballs for spam purposes to actively discrediting the person or trying to ruin their life.

The highest-profile celebrities were, in a way, the easiest case: if someone already has millions of followers, that’s relatively hard to fake. But if a celebrity just joined, or if a previously-unknown person becomes Internet Famous, that doesn’t work. And it also doesn’t work if the impersonation isn’t targeting someone because of their fame at all, but because of a personal grudge.

Verification started out as an attempt to solve the first of these two problems: how to let users know that they were following “the” Michael Jordan or Beyoncé. The idea was that, if a user who has no personal relationship with anyone named “Michael Jordan” is searching for Michael Jordan, there is one particular account they almost certainly are looking for, and it should be easy for them to know they’ve found it.

But what, exactly, is the criterion for this verification? What does it mean to be “the” Michael Jordan? If you limit yourself to only the most famous of the famous, this problem is somewhat simplified by looking at Internet search: this is a name that a lot of people search for, and all of them are looking for the same person. But if you go beyond that very small group, it turns out there are a lot of people named Michael Jordan, who have different levels of fame. Did you mean the Michael Jordan who played #23 for the Chicago Bulls, or the editor of Learning in Graphical Models (Adaptive Computation and Machine Learning)? Or perhaps your neighbor down the block?

If the checkmark is meant to indicate “we have verified that this person is who they say they are,” then all three of them should be equally verifiable; each of them has a driver’s license to prove it. Since so many social networks had policies around “real names” and verifying people’s identities, the question of “well, if you’ve verified that I am who I say I am, why don’t I get this checkmark too?” became perfectly reasonable.

Because the checkmark was highly visible in all these UIs, and deliberately hard to forge, it became a clear mark of someone’s celebrity and importance. The consequence of this conflation became clear on every single platform almost immediately: a checkmark was read as the system’s official endorsement of a person.

Lessons Learned

So what are some lessons learned from this? What would I do differently if I were designing this system from scratch?

The clearest red flag of all should have been how difficult it was, even internally, to ever articulate exactly what the criteria for verification should be. If you’re giving a highly visible indicator of something you can’t clearly explain even to yourself, you’ve just made something wholly arbitrary have real meaning — and confusion around the boundaries is sure to erupt.

Imagine, instead, splitting up the purposes for which such checkmarks have been used.

Verify facts, not people

The Michael Jordan problem illustrates why trying to verify that someone is “the” something-or-other doesn’t work at scale. The question you’re really trying to answer is: if a person with no personal connection to the person you’re trying to verify were looking for them, how would they know they’d found the right person? But note that the fact that strangers are searching for this person is almost exactly the definition of newsworthiness. (And the more strangers are searching for you, the truer this is.) That people are searching for you means there’s some fact about you which is driving them to search, and what they want to know is that you’re the person about whom this fact is true.

That’s what you want to verify.

So to solve the impersonation problem, I would introduce an idea of “verified facts” about a person.

  • Verified facts need to be very short statements, perhaps 50 characters maximum. They need to be statements of fact, not opinion, and so need to be written by people at your company, not the individuals themselves. (Although individuals may propose statements to verify.)
  • For internationally relevant people, it will be important to localize these statements into many languages. A single fact then becomes a group of localized statements, which again highlights that these are strings managed by the company, not the individuals.
  • A fact should only be verified about a user with that user’s consent. People may have plenty of reasons not to want their account to be public and highly visible. While it’s true that an adverse fact about a person may be newsworthy, since the purpose of verifying these facts is to lead to engagement with that user (via people finding their account), there’s no particular upside to officially stating those. This is not meant to be a substitute for, or a kind of, journalism. Users should opt in to this type of verification.
  • The scope of how visible a user needs to be before facts are verified about them, or of which facts are worth verifying, can be adjusted over time. It’s basically a tradeoff between the work required to verify a thing (which differs for different kinds of fact) and the value of doing so. The fact that one of my friends makes amazing BBQ would be a totally legit thing to verify, and could easily spark great conversation and interaction, but for a social network to try to verify that sort of thing is probably not the best use of effort. (Although it would be a very tasty effort.)
  • The best way to present such facts is a good UX question, and IANAUXD. (I could design user interfaces, in much the same way that I could perform thoracic surgery. It would not be a good idea.) These could have other uses as well, especially in powering user search.
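
To make the shape of this concrete, here’s a minimal sketch in Python of what a verified-fact record might look like under the constraints above: a short, length-capped statement owned by the company, with localized variants and an explicit opt-in from the user. All of the names here (VerifiedFact, add_localization, the 50-character cap) are hypothetical illustrations, not any real system’s API.

```python
from dataclasses import dataclass, field


@dataclass
class VerifiedFact:
    """One short, company-authored statement of fact about a user.

    Hypothetical sketch: the field names and the 50-character cap come
    from the bullets above, not from any real system.
    """
    user_id: str
    fact_id: str
    # Localized variants of the same fact, keyed by language tag (e.g. "en").
    # These strings are written and owned by the company, not the user.
    statements: dict[str, str] = field(default_factory=dict)
    # The user must opt in before this fact is shown anywhere.
    user_consented: bool = False

    MAX_LENGTH = 50  # class attribute, not a dataclass field

    def add_localization(self, lang: str, text: str) -> None:
        """Add a localized variant, enforcing the short-statement rule."""
        if len(text) > self.MAX_LENGTH:
            raise ValueError(f"Verified facts must be at most {self.MAX_LENGTH} characters")
        self.statements[lang] = text


# Example: the "the" Michael Jordan problem, stated as a verifiable fact.
fact = VerifiedFact(user_id="user-123", fact_id="bulls-23")
fact.add_localization("en", "Played #23 for the Chicago Bulls")
fact.add_localization("de", "Spielte mit der Nr. 23 für die Chicago Bulls")
```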

If you’re in the mindset of talking about whether you know a fact to be true, you won’t confuse it with an assertion that a person is something or other.

Measure Good and Bad Behavior Separately

Checkmarks have gotten (incorrectly) used to indicate “known-good” or “known-important” users across the system. The edge cases where “verified” and “good actor” don’t coincide end up being very important.

Instead, it’s worth measuring behaviors of users which are positive or negative from the perspective of the community as a whole. There are many different behaviors in this space, and it’s good to avoid the temptation to try to compute a single overall “karma score.” (Stop me if you’ve heard this one: someone is tremendously knowledgeable about their field, a trove of experience which they’re quite willing to share, and also a completely toxic human being that nobody wants to be around.)

These scores are worth measuring for all sorts of reasons. Strong positive behavior is something you want to reward, and identifying kinds of that behavior opens up opportunities to do so. For example:

  • Someone who really actively engages with people while not pissing them off is a tremendous boon to the community. You can measure the positive side of their engagement by looking at how often people respond to them, like what they’re saying, or start having conversations amongst themselves in response to things they’ve said. (Those “indirect engagements” are good to measure separately.)
  • Celebrities who engage with ordinary people are pure gold from a community perspective. When highly visible users are engaging positively not just with other high-visibility users, but having direct engagements with low-visibility users as well, that’s an extremely pro-social thing and should be marked!

On the negative side:

  • Look at how often people take negative actions (blocking, reporting, etc.) in response to communications directed at them by a particular individual. (You don’t want to count how often a person is blocked or reported overall, since those numbers tell you about other people, not them. But if every third time a person @-mentions someone, that person reports them, you’ve got a pretty good sign that something is wrong in this picture.)

You can start to combine these in all sorts of ways. Direct positive interactions mean that this person is engaging well with others. Indirect positive interactions mean that they’re starting conversations. Direct negative interactions with strangers, especially ones that the person initiates, are a red flag. Indirect negative and positive interactions combined suggest that someone is controversial but engaging (and the right thing to do will require some thinking).
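
Here’s a rough sketch, again in Python with made-up names, of what “measure these behaviors separately” might look like in practice: distinct counters for direct and indirect, positive and negative interactions, plus a per-directed-message report rate so that raw visibility doesn’t masquerade as bad behavior. The thresholds are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class BehaviorSignals:
    """Separate counters for community-relevant behaviors of one user.

    Hypothetical sketch: keep positive and negative, direct and indirect
    signals apart rather than folding them into one "karma score".
    """
    direct_positive: int = 0    # people replying to or liking this user's posts
    indirect_positive: int = 0  # conversations sparked among third parties
    direct_negative: int = 0    # blocks/reports in response to this user's directed messages
    indirect_negative: int = 0  # negative reactions from onlookers, not the recipient
    directed_messages: int = 0  # @-mentions and replies this user sent to others


def negative_response_rate(s: BehaviorSignals) -> float:
    """Fraction of this user's directed messages that drew a block or report.

    Counting the rate per directed message (not raw block counts) keeps the
    measure about this user's behavior rather than their visibility.
    """
    if s.directed_messages == 0:
        return 0.0
    return s.direct_negative / s.directed_messages


def describe(s: BehaviorSignals) -> str:
    """Rough, human-readable read on the combined signals (illustrative only)."""
    if negative_response_rate(s) > 0.3:
        return "red flag: recipients frequently report their direct outreach"
    if s.indirect_negative > 0 and s.indirect_positive > 0:
        return "controversial but engaging: needs human judgment"
    if s.indirect_positive > s.direct_positive:
        return "conversation starter: others talk amongst themselves in response"
    if s.direct_positive > 0 and s.direct_negative == 0:
        return "engages well with others"
    return "not enough signal yet"
```

The point isn’t the particular thresholds; it’s that nothing here collapses into a single karma number, so each signal can be rewarded or acted on differently.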

This approach of directly measuring behaviors you want to encourage or discourage gives a lot of leverage in shaping one’s community, and can adapt to all sorts of things. It’s also independent of the question of how you reward things, which may vary a lot.

For example, if your social network has some kinds of “spaces” in it where people can set their own moderation policy and curate their community, that moderation is likely to be hard work. Identifying, and rewarding, the people in a community who are putting lots of effort into making the community a better place is going to be important. But different rewards may have different effects. For example, marking someone a “top moderator” may have some blue checkmark-like problems, because it establishes them as a quasi-authority in that community, and may lead to fights and other tension as they try to assert social dominance. However, this may work well for some kinds of larger communities like fora, where only a few people are likely to do the moderation work. In smaller communities, it may be more helpful for everyone to see who’s doing what share of the work.

In other communities, what you may even need to do is pay people for it. Consider that the people with the most time to moderate are often people who don’t have the same kinds of jobs as other people in their community, which means they’re probably in need of money and would do more of this if they could afford to. This is especially true in marginalized communities. Healthy in-person communities often have people who contribute tremendously to the group’s social cohesion, and who people in the community also take care of in various ways, from regularly inviting them to dinner to helping out with the rent; how can one encourage that same thing in online communities? What would the different effects be on the community if the person were being paid externally (e.g. by the company) for their moderation work?

The point of this isn’t any particular solution: it’s that measuring behavior you care about directly gives you much better tools to decide the same things which are currently often gated on verification, such as who gets early access to various product features.

Apply that to abuse

The application to abuse queues is similar. Some networks have experimented with treating abuse reports by and/or about verified users differently from the bulk. This turns out to work terribly, because there’s no good correlation between the things that get you verified and abuse behavior.

Instead:

  • Look for users whose abuse reports are frequent and systematically confirmed as “yeah, there was something wrong here” by your own abuse teams. These people are literally volunteering to go out and help fix problems! Give them access to special abuse tools, give them a direct line to your abuse team, prioritize their reports.
  • Recognize that the rate at which users are flagged for abuse is proportional to their visibility. Don’t let a high flag rate turn into automatic punishment, even implicitly by letting all the reports get checked separately so that one of them eventually triggers something. Instead, identify reports about high-visibility posts, users, and events, group them, and automatically escalate them to more experienced reviewers.
  • Rather than using verification flags to “protect” accounts, check how visible an action (e.g. taking down content) would be; the more visible it would be, the more experienced eyes you want on the problem. Sufficiently visible things turn into policy decisions in their own right; e.g., if Twitter were considering taking down Donald Trump’s account, that would require serious discussion by senior leadership up to and including the Board of Directors.
  • A separate “escalate all abuse issues concerning this account” bit is actually super-useful, especially when you know an account has systematically become the center of some major Internet hate wave. You don’t need to display this bit in the UI anywhere; just note that any actions you take on this account will have complex repercussions, and at least one person at the company needs to understand the situation well enough to make calls about it.
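
As a sketch of how those ideas might fit together in a single report-routing step (hypothetical names and thresholds, in Python): the “always escalate” bit overrides everything else, reports about highly visible targets get grouped and sent to experienced reviewers rather than counted individually, and reporters with a strong track record get priority.

```python
from dataclasses import dataclass


@dataclass
class AbuseReport:
    reporter_id: str
    target_id: str
    reason: str


# Hypothetical stand-ins for data an abuse system would already track.
CONFIRMED_REPORT_RATE: dict[str, float] = {}  # reporter_id -> fraction of past reports confirmed
TARGET_VISIBILITY: dict[str, int] = {}        # target_id -> rough audience size
ALWAYS_ESCALATE: set[str] = set()             # accounts with the "escalate all issues" bit set


def route_report(report: AbuseReport) -> str:
    """Decide which queue a report lands in (sketch of the bullets above)."""
    # The explicit escalation bit wins: a human who understands the situation decides.
    if report.target_id in ALWAYS_ESCALATE:
        return "senior-review"

    # Highly visible targets attract reports in proportion to their visibility,
    # so group those reports and hand them to experienced reviewers instead of
    # letting each separate report trip an automatic action.
    if TARGET_VISIBILITY.get(report.target_id, 0) > 100_000:
        return "grouped-high-visibility"

    # Reporters whose past reports are systematically confirmed are, in effect,
    # volunteering to help fix problems; prioritize them.
    if CONFIRMED_REPORT_RATE.get(report.reporter_id, 0.0) > 0.8:
        return "trusted-reporter-priority"

    return "standard"
```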

And as always, make sure you can articulate your principles clearly.