The Facebook Conundrum

The not-so-supreme court is unlikely to solve Facebook’s or the world’s real problems.

Kyle Dent
May 4 · 7 min read
“The Oversight Board was created to help Facebook answer some of the most difficult questions around freedom of expression online” (Photo by Alexandra Popova / Shutterstock.com)

It has become obvious to everybody except, apparently, Facebook that not all engagement is good engagement. The company knows that its automated recommendations contribute significantly to the growing divide and extremism in the US, but its singular focus on maximizing user engagement means it's not likely to do anything about the problem except around the edges. Enter the new Facebook Oversight Board. Last month Facebook expanded the board's very limited scope, allowing it to weigh in on decisions to leave up content that potentially violates community guidelines.

I'll have more to say about the board shortly. But first, it's well worth reading both Karen Hao's recent article in MIT Technology Review and Julia Carrie Wong's piece at The Guardian to understand the internal tensions within Facebook. Keep in mind that Facebook is a for-profit advertising company. Its main product is the human attention it sells to other companies. Harvesting that attention means keeping users engaged, and to a large and growing extent, its tool of choice for capturing and directing that attention is AI recommendation driven by sophisticated, multi-dimensional models.

The Toxicity of AI

In fact, Facebook researchers develop many different kinds of AI models for various purposes. For example, some models might adjust how posts are prioritized in users' feeds while others might look for violations of Facebook's community standards. Regardless of a model's goals, in all cases it is measured for its impact on user engagement. Models that decrease people's engagement are almost always killed, and those that increase it are highly favored. But as Hao explains in her article, "The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions." Wong's article shows the scope and scale of the damage from those tensions, which bad actors are weaponizing to exactly that end.
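To make the incentive concrete, here is a toy sketch of what a single-metric launch gate like the one Hao describes might look like. This is entirely hypothetical, not Facebook's actual code; the model names and numbers are invented for illustration:

```python
# Hypothetical illustration of an engagement-only launch gate.
# Nothing here is Facebook's real code; names and numbers are invented.

def should_launch(engagement_delta_pct: float) -> bool:
    """Under a single-metric gate, a model survives only if it does
    not reduce engagement, regardless of its actual purpose."""
    return engagement_delta_pct >= 0.0

# Measured change in engagement from an A/B test (hypothetical values).
candidate_models = {
    "feed_ranker_v7": +0.8,      # promotes outrageous posts -> engagement up
    "misinfo_filter_v3": -1.2,   # removes engaging but toxic posts -> engagement down
}

for name, delta in candidate_models.items():
    verdict = "launch" if should_launch(delta) else "kill"
    print(f"{name}: engagement {delta:+.1f}% -> {verdict}")
```

The point of the sketch is the perverse selection pressure: a filter that happens to suppress engaging-but-toxic content fails the gate no matter how well it does its nominal job, while a ranker that amplifies outrage passes.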

So, tickling our amygdalas is really good for business but seriously bad for the world. Hao's article covers the details of technology development at Facebook, a process that seems, unwittingly, to produce rival algorithms. The posting of harmful content kicks off a kind of demented race between filtering models that consider blocking it and user-engagement models that want to promote it. If the filtering model takes a pass, the next model is very likely to recommend the content to somebody. Of course, if there were no toxic content on the platform, Facebook's recommendation models wouldn't be so damaging to the social fabric. I say this not to excuse the craven disregard for ethics that allows this to go on, but to point out that Facebook's biggest problem is the prodigious amount of hate speech, abuse, and disinformation posted on its platform, along with the resulting harm to Facebook users and to the world generally.

Oversight Board Does Not Reign Supreme

With only a cursory awareness of the new Oversight Board, you might think it was designed to solve exactly this problem. You would be wrong. While it might, in fact, provide some real but very narrow benefit to Facebook users, the board is not a Facebook "Supreme Court," even though it's frequently analogized that way. That comparison suggests a high court with enormous discretionary power that can make final judgments on policy or overrule decisions made by the "lower courts," meaning Facebook's fifteen thousand or so content moderators. This board specifically does not set policy, and its decisions do not establish precedent. Up until last month's change, its remit was exactly one thing: considering appeals from people who object to their posts being removed. As of last month, the board will also hear appeals from people who believe content should come down after Facebook moderators have decided to leave it up.

Facebook has reported that each day approximately two hundred thousand posts are eligible for appeal. When fully staffed, the Oversight Board will have forty members. Cases are considered by five-person panels whose decisions are then ratified by the whole board. Between its launch last year and its first announcements in January 2021, the board issued decisions on five cases. We can assume that number will increase somewhat as the members find their groove, but based on my own rough back-of-an-imaginary-envelope calculation, the number of appeals handled by the board is an infinitesimal fraction of all the suppressed speech that should have been left up or toxic content that should have been taken down.
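For the curious, here is that envelope math made explicit. The roughly 90-day window between the board's launch and its first decisions is my own assumption; the 200,000 figure is Facebook's:

```python
# Back-of-the-envelope estimate. The ~90-day window is my assumption;
# 200,000/day is Facebook's reported count of appeal-eligible posts.
eligible_per_day = 200_000
cases_decided = 5
days_elapsed = 90  # board launch to first announced decisions, roughly

eligible_posts = eligible_per_day * days_elapsed          # 18,000,000
fraction_heard = cases_decided / eligible_posts           # ~2.8e-07

print(f"fraction heard: {fraction_heard:.1e}")            # 2.8e-07
print(f"about 1 in {eligible_posts // cases_decided:,}")  # 1 in 3,600,000
```

Even if the board's pace increased a hundredfold, it would still be hearing well under one in ten thousand eligible posts.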

Arguably, the board can make decisions on high-profile issues from a more neutral, considered, and transparent point of view. The board is composed of serious people who, I believe, want to do good things. But a body that considers one case at a time is clearly not meant to solve the massive content challenges across the whole social network. It's great that the board can fix the handful of cases it considers, but the value to the community is low. The value to Facebook the company is much clearer.

Passing the Buck

If you already know about the board, you're probably aware that it is currently considering what may turn out to be one of its most consequential decisions: the ban on Donald Trump (the "verdict" is due to be announced today). Facebook made the decision to suspend his account in the aftermath of the attack on the U.S. Capitol. With the threat of more violence, it made sense to silence an outsized voice prone to inciting people's worst instincts. Now that things have calmed down, however, Facebook is left to figure out what to do about Donald Trump. This dilemma provides the perfect example of the benefit Facebook the company derives from the board. Reinstating banned users is by design outside the board's normal purview. Among other things, the board does not consider questions about political advertising, the company's algorithms, or the banning of users unless Facebook refers those questions for its consideration. It's a convenient mechanism for Facebook to pass off extremely charged questions, and as a nice little side bonus, it might even help keep the regulators at bay.

In an almost evenly divided country, dealing with Donald Trump is a no-win situation for Facebook. The Oversight Board makes that problem go away. It's not necessarily a bad thing that Facebook hands off certain decisions, but it highlights how the upside accrues to the company and not necessarily to its users. The clearest value of the board is the PR benefit it provides to Facebook.

So what does the board offer you as a Facebook user? You can appeal if moderators take down your post, and if you're among the handful of cases deemed "emblematic," or otherwise lucky enough to be considered by the board, you should get a fair hearing. This is minimal, but it is important. Following the attack on the U.S. Capitol, social media companies tipped their hands and showed just how much power they wield over free expression online. Any attempt to mitigate the potential for ad hoc and arbitrary decision making is a good thing. And, in fact, of the five cases the board has heard so far, it reversed the moderators' actions in four.

Hoping for a Better Board

It would seem Facebook is taking this board seriously. The company has provided $130 million to an independent trust that manages the board, and it has recruited serious, credible members from all over the world with the experience and gravitas to make weighty decisions about people's rights; they include a former prime minister and a Nobel laureate. These are intelligent and thoughtful people. The board's decisions are supposed to be binding, and the board can even overrule Mark Zuckerberg on questions within its limited scope.

As I mentioned, the first batch of decisions was released last January. The four-out-of-five reversal rate would seem to signal an intention to be independent of the company. While the board is getting some flak for its decisions, I'm encouraged by its reasoning. In every case, the context of the message was a major factor in the reversal decision. The explanations indicate that this board prioritizes freedom of expression.

In one example, the board reviewed a case from France dealing with misinformation about hydroxychloroquine as a cure for COVID-19. Facebook moderators removed the post for violating the misinformation and imminent harm rule. The board reasoned that while the post did contain misinformation, imminent harm was unlikely since hydroxychloroquine is available only by prescription. The panel's view was that the poster, in criticizing the French agency that regulates health products, was opposing a government policy with the aim of changing it. In that context, the panel determined, the content was protected speech. I happen to disagree with this decision. Since there have already been deaths due to misunderstandings about hydroxychloroquine, in my mind the post meets Justice Brandeis's test for prohibited speech that is "inimical to the public welfare," but I appreciate the panel's carefully reasoned explanation and its inclination toward latitude for political speech.

It's still early days, and as with most noble ideas, the devil is in the details, many of which are still to be seen. The board's first batch of decisions included several recommendations to Facebook. While the rulings on appeals are supposed to be binding, these recommendations are not. The board's message was clear, however: more transparency and due process for users are necessary. A lot will depend on how Facebook engages with the board's recommendations over time. If the board is to have any significant impact and benefit for Facebook users and the world generally, it will have to expand its influence and aggressively protect its independence from Facebook.

CheckPoint
Sharing ideas for quality information and building healthy communities online. A publication of CheckStep providing content moderation at scale to online communities. We publish articles related to disinformation, fact-checking, online moderation, and free expression.

Written by Kyle Dent
Kyle Dent is the head of AI and Ethics at CheckStep. He writes about technology and society.
