Birdwatch and Facebook’s Oversight Board: trying to paper over a system that’s toxic by design

They are doomed conceptual band-aids. Let’s unpack why. Plus: the flaw they share with Higher Education

Mario Vasilescu
The Content Zeitgeist
8 min read · Feb 7, 2021


First, a 30-second primer.

Twitter’s Birdwatch and Facebook’s Oversight Board

Twitter and Facebook (and every other online platform) are grappling with how to handle moderation as the activity on their platforms grows increasingly out of control, with real-world ramifications. Birdwatch is Twitter’s answer; Facebook’s is its Oversight Board.

People have been likening Twitter’s Birdwatch to Wikipedia’s approach: it relies on community moderation. However, it’s actually quite different: all Twitter is enabling is for members of the Birdwatch community to add notes/warnings to other people’s tweets. The visibility of those notes would be based on how helpful they are, as voted by others.

Facebook’s Oversight Board is much more straightforward: a group of experts, hand-picked by Facebook, act as a sort of Supreme Court that makes judgements on Facebook’s moderation actions, as needed.

Okay, you’re caught up. Here’s why neither will solve anything.

The problems

1- This is only necessary because the system is designed to be such a freewheeling, effortlessly gamed cesspool of noise to begin with.

Let’s not kid ourselves. Both of these solutions are damage control, trying to cover for the core design that an ad-driven business model demands.

These solutions are actually a tacit admission that they won’t fix the root problems their user experience facilitates, because doing so would threaten their businesses. Think about it: isn’t it odd that neither solution says anything about changing the core user experience to solve the problem of toxic behaviour?

Facebook and Twitter are trying to paper over the issue at the edges. Like somebody desperately trying to scoop water out of a flooding boat, instead of plugging the holes. (I expand on the inherently toxic user experience a little further below.)

Applauding these efforts is within the realm of Stockholm Syndrome. Thinking these are the solutions is just allowing the core business model to continue unchanged. That shouldn’t be acceptable, or so quickly overlooked. It’s nothing more than macro gaslighting, shifting the blame from the interfaces they designed to be this way, to the users.

2- This system is doomed in a world of “alternative facts” and us vs. them ideology. The police will need their own police, ad infinitum.

Let’s try a little thought experiment.

Let’s say I tweet something that’s highly contentious in the public sphere. Maybe I post something for or against abortion, and I’m in America or Haiti (highly contentious in the former, and illegal in the latter).

In Twitter’s case, what do you think will happen if people are allowed to label other people’s posts, even if those people come from a verified group of “Birdwatchers”? Obviously, it will become a game of factions, voting up the commentary that suits your ideological position. Twitter’s own VP of product has acknowledged as much:

“We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors”

Now take into account the pervasiveness of “alternative facts” from influential figures on either platform. What do you think will happen when Twitter doesn’t grant “Birdwatch” membership to some of these highly influential accounts? It will only serve to amplify the claims of censorship that prompted these solutions in the first place. If we play this out to its eventual conclusion, it’s easy to see that it’s just kicking the can of responsibility and accountability down the road, resulting in a variation of the same debates and complaints.

Facebook’s case is even more nakedly ineffective. Just like Twitter’s Birdwatch, it leads back to the same question: “well, who gets to decide who can moderate?” On one hand, at least it’s up to experts, and not a wider community prone to us-versus-them battling. On the other, the fact that Facebook hand-picked these people itself is barely any different from Facebook making closed-door decisions itself. It really does feel like a PR charade meant to reduce scrutiny without changing anything meaningful at all.

…after four years of unending criticism for being too slow to act on the rise of right-wing populism on the platform, and parallel complaints from the right over alleged censorship, you can see why Mark Zuckerberg, Facebook’s chief executive, was drawn to the idea of handing the thorniest calls off to experts, and washing his hands of the decisions. — Ben Smith for the New York Times

The purpose of things like the Oversight Board and Birdwatch is to push difficult moderation decisions onto someone else, and create a cloud of vapid legal-technical chaff to cover up the abdication. — Rusty Foster for Today in Tabs

Alternatives (real solutions)

What do I mean when I say the “user experience” is inherently toxic?

Simply put, when a business relies on digital advertising, there is a direct connection between the amount of activity and its revenue. Therefore the businesses (in this case Facebook and Twitter) must do everything in their power to maximize volume: volume of consuming, sharing, and reacting, and ideally time spent, both in a single session and over time as habits form (though if you can squeeze more reactions into less time, that’s good, too).

So these platforms are designed to facilitate this proverbial printing of attention economy money:

  • It is extremely easy to join and participate, with zero requirement of verification, or earning your way into the community.
  • It is too effortless to share and react. Absolute lack of friction is a guarantee for noise and carelessness. But, again, that’s the recipe in a business built on volume. There is zero proof of work required.
  • There is nearly zero context to your participation: there are next to no labels to make you think twice about reacting to somebody else’s content, whether about the sources of that content or the track record of the sharer. Today, this is only practiced for election coverage or COVID information. That is all, and it took a long time for even that to be implemented.
  • Being influential is based on volume (so making you produce and participate as much as possible), and this volume is completely unconditional and limitless. It doesn’t really matter how you accomplish it. In other words, neither platform makes “success” conditional on quality rather than quantity. Just look at the follower counts, the likes, the shares, and the sheer proliferation of bots that help this all along. (If the platforms really wanted to solve the bot problem, they could have done so a long time ago.)

It doesn’t have to be this way. It’s not rocket science. Let’s consider what amending the design would look like for just the four examples above:

  • Joining Twitter or Facebook, to have a full account with full access, would require you to verify who you are, even if others can’t see it. E.g. submitting government ID, connecting another account, or uploading a selfie with a piece of mail addressed to you. You could still join without this, but your account would be labeled accordingly and limited. If you needed to remain anonymous because of your work in human rights, for example, you could request it; this would be a more appropriate place for an Oversight Board. While it’s true that verifying identity doesn’t necessarily fix bad behaviour, this approach helps ensure one account per person, so that bots and trolls don’t have such an easy way in (and back in once they’re banned).
  • Sharing and reacting could require some proof of work. There would be simple prompts, and you would need to answer at least one of them: “What is this about?” “Why do you think it matters?” “Did you find any other supporting evidence?” (A minimal sketch of such a gate follows this list.) As Ranjan at The Margins wrote,

My own personal dream is that, to share a piece of content, the user has to write a Margins newsletter’s worth of analysis on why they are sharing it. Instead of a maximum 280 character count on Twitter, maybe we impose a minimum character count. You all can share whatever you want, but it’s gonna cost you in time and effort.

All public auto-sharing, auto-following, auto-commenting, etc. would also be banned as a result, with the exception of private, productivity-oriented use cases for verified users.

(On Readocracy, we leaned into this: people can’t share, recommend, or reply unless they’ve actually consumed the content first, and there are no automations allowed.)

  • Regarding context: every post could carry a context label showing the person’s history of sources shared and the tone of their language (sketched after this list). Is the person who shared it only consuming and sharing from one news source? You’ll know. Do they usually use aggressive, misspelled, or loaded language? You’ll know. Is this an account that only seems to re-share and promote a specific business? You’ll know. Your attention and emotions are precious. You should have some context before you invest either.
  • Being influential should be conditional. To gain maximum visibility, you should have earned it by being well-informed and balanced (based on your track record and the diversity of sources you use), and helpful. The context label described above should be flattering when you’ve been good for the community and responsible with your slice of everyone else’s attention. In other words, a system that by design is optimized for quality, not quantity. (If you’re interested, I dug into this at length on our blog: A New Way of Governing The Internet and Rethinking Influence.)
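To make the proof-of-work idea concrete, here is a minimal sketch in Python. Everything in it (the ShareAttempt structure, the MIN_CHARS threshold, the prompt wording) is a hypothetical illustration of a minimum-effort gate, not anything Twitter, Facebook, or Readocracy actually runs:

```python
# Hypothetical sketch of a "proof of work" share gate. A share only goes
# through if the user answers at least one prompt with a minimum amount of
# substance. All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field

PROMPTS = (
    "What is this about?",
    "Why do you think it matters?",
    "Did you find any other supporting evidence?",
)

# Invert Twitter's 280-character maximum into a minimum, per the Margins quote.
MIN_CHARS = 280


@dataclass
class ShareAttempt:
    url: str
    answers: dict = field(default_factory=dict)  # prompt -> user's written note


def can_share(attempt: ShareAttempt) -> bool:
    """Allow the share only if at least one prompt got a substantive answer."""
    return any(
        prompt in PROMPTS and len(note.strip()) >= MIN_CHARS
        for prompt, note in attempt.answers.items()
    )


# A drive-by share is blocked; a considered one would pass.
lazy = ShareAttempt(url="https://example.com/story", answers={"What is this about?": "lol"})
print(can_share(lazy))  # False
```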
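And here is an equally rough sketch, under the same caveats, of what a context label and earned visibility could look like. The diversity and tone heuristics are deliberately toy; the point is only that the signals come from observable behaviour:

```python
# Hypothetical sketch of a per-account context label and earned visibility.
# The diversity and tone heuristics are deliberately toy; a real system would
# need far more care. Nothing here reflects an actual platform's scoring.

from collections import Counter

HOSTILE_WORDS = {"idiot", "traitor", "disgusting", "shill"}  # toy stand-in for tone analysis


def source_diversity(shared_domains: list) -> float:
    """0.0 = everything from one source; values near 1.0 = evenly spread sources."""
    if not shared_domains:
        return 0.0
    counts = Counter(shared_domains)
    top_share = counts.most_common(1)[0][1] / len(shared_domains)
    return 1.0 - top_share


def hostility_rate(posts: list) -> float:
    """Fraction of posts containing at least one hostile word (toy heuristic)."""
    if not posts:
        return 0.0
    return sum(any(w in p.lower() for w in HOSTILE_WORDS) for p in posts) / len(posts)


def context_label(shared_domains: list, posts: list) -> str:
    """A label shown next to the account's posts, computed from its own history."""
    sources = "narrow set of sources" if source_diversity(shared_domains) < 0.3 else "varied sources"
    tone = "frequently hostile tone" if hostility_rate(posts) > 0.2 else "mostly civil tone"
    return f"{sources} / {tone}"


def visibility_multiplier(shared_domains: list, posts: list) -> float:
    """Reach is earned: diverse, civil accounts are amplified; others stay at or below baseline."""
    return max(0.0, 0.5 + source_diversity(shared_domains) - hostility_rate(posts))
```

The specific heuristics matter far less than the principle: the label and the reach it drives are computed from behaviour anyone can inspect, so influence becomes self-evident instead of something a board hands down.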

Bonus: how this is similar to the core issue with higher education

What does this all have to do with Higher Education?

It has to do with how credibility is defined. Today, we overwhelmingly rely on proxies that we can’t verify ourselves: an institution gives you a piece of paper and a grade that you get to show off, as an abstraction for your years of work.

In the systems Twitter and Facebook propose above, we are once again relying on others to give a stamp of approval or disapproval. But how do we check for ourselves? What systems of reference are we given? Are the decisions made self-evident?

More and more people are convinced Higher Education is in a bubble. A big reason is that it has become so detached from real-world success. A degree no longer guarantees a job, or being good at that job; in many cases the two feel largely unrelated.

As we look to properly solve the issues of toxicity online, and reliability in proxies of credibility, we need systems that are more self-evident, helping us easily make our own informed decisions, rather than relying solely on the interpretations of others.

Thanks for reading. We’ll occasionally cross-post pieces here from our blog. If you appreciated this or found it helpful, let us know by…

  • Applauding
  • Sharing
  • Subscribing on our blog
