Facebook’s oversight board: The case for cautious optimism

Jesse Blumenthal
Published in TheUpload
May 6, 2020

In 2018, Mark Zuckerberg posted a nearly 5,000-word update on the core challenge for Facebook: content moderation. Zuckerberg announced that Facebook would convene an independent oversight board to arbitrate appeals from users who felt their content had been erroneously censored.

“As I’ve thought about these content issues, I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own. In the next year, we’re planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding. The purpose of this body would be to uphold the principle of giving people a voice while also recognizing the reality of keeping people safe.”

Since the announcement, Facebook has provided periodic updates on the progress of the project. Today, the company announced the board’s launch and initial membership.

But is the Facebook oversight board a good idea? Having spent years following this process, I have reached a preliminary conclusion: it could be. So here is the case for cautious optimism, as I see it.

Let us start with what should guide the decisions of private companies like Facebook. Recently, my colleagues at Stand Together created a series of principles that American technology companies ought to apply to their work. The first set of principles we outline pertains to free speech. Free expression and association are essential to our society. Companies are and should be free to set appropriate rules that serve their consumer base. That said, they should prefer more speech wherever possible. They should be clear about those rules and consistent in how they are applied. And, crucially, companies have a responsibility to resist lobbying efforts by government and other special interests that seek to limit speech by laundering their power through these private firms.

Facebook, like virtually every other social media platform, is vulnerable to this kind of lobbying. Undoubtedly, the oversight board will face a similar pressure campaign, and the board’s existence may even exacerbate that challenge for the company.

Simply because we at Stand Together believe companies ought to value free expression does not mean that they do. Many companies pay lip service to broader values. How do these values play out in Facebook’s choices and actions?

The values that drive Facebook

It is clear that these questions are top of mind at Facebook. In his 2018 post, Zuckerberg begins by noting:

“Many of us got into technology because we believe it can be a democratizing force for putting power in people’s hands. … At the same time, we have a responsibility to keep people safe on our services — whether from terrorism, bullying, or other threats. We also have a broader social responsibility to help bring people closer together — against polarization and extremism. … An important question we face is how to balance the ideal of giving everyone a voice with the realities of keeping people safe and bringing people together. … Who should decide these policies and make enforcement decisions? Who should hold those people accountable?”

Over the last few years, Facebook has increasingly emphasized the free expression side of this balancing test. Indeed, it’s fair to argue that free expression is now the paramount value for Facebook.

In a widely covered speech he delivered at Georgetown University last fall, Zuckerberg outlined why it was crucial his platform uphold the values of free expression:

“In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another crossroads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.”

He expressed a similar sentiment at a conference in Utah a few months later, saying “at some point, we’ve got to stand up and say, ‘No, we’re going to stand for free expression.’ Yeah, we’re going to take down the content that’s really harmful, but the line needs to be held at some point.”

This shift in language represents a real shift in thinking. In the 2018 formulation, free expression, or what some inside the company refer to as “voice,” was one of many values to be balanced against competing interests such as safety or the universality of Facebook’s rules. Over time, free expression seems to have been elevated above those other values, in ways that could run contrary to Facebook’s own short-term business interests.

For example, Facebook has faced significant external pressure to ban political advertising. To its credit, the company has resisted these calls. Zuckerberg noted on an earnings call that “from a business perspective, the controversy this [ad policy] creates far outweighs the very small percentage of our business that these political ads make up.” To put that in perspective, he noted that the $5 billion fine levied on the company by the FTC was more than ten times what the company earns from political advertising, which accounts for roughly 0.5 percent of its revenue.
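
As a rough sanity check on that comparison, here is the arithmetic. The roughly $70 billion revenue figure for 2019 is my own assumption for illustration; it does not appear in the post.

```python
# Back-of-the-envelope check on Zuckerberg's comparison.
# The ~$70B 2019 revenue figure is an outside assumption.
annual_revenue = 70e9        # assumed 2019 revenue, ~$70 billion
political_ad_share = 0.005   # "roughly 0.5%" of revenue
ftc_fine = 5e9               # the 2019 FTC settlement

political_ad_revenue = annual_revenue * political_ad_share  # ~$350 million
print(ftc_fine / political_ad_revenue)  # ~14.3, i.e. more than tenfold
```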

Political speech is just one small area where this trend is playing out. But while Facebook is elevating speech, a growing chorus from abroad is calling on Facebook and other intermediaries to regulate lawful speech. And increasingly, those calls are becoming common in the United States.

On the left, many believe that Facebook is undermining truth, democracy, the news media, and more. On the right, many argue that conservatives are unfairly censored and do not enjoy a level playing field with their liberal counterparts. Consider recent comments from Senator Josh Hawley or statements from House Speaker Nancy Pelosi. The two sides may differ on the problem, but they seem to agree on one thing: government should be setting online speech standards.

These arguments raise an important question about how the rules set by private companies ought to be enforced. And that brings us back to the Stand Together tech industry principles that I mentioned earlier and Facebook’s new oversight board. We encourage companies to create standards on free expression that are clear, consistent, and accessible. Key to this is enforcing these rules in an equitable and transparent manner. And enforcing consistent rules in an equitable way means you are going to upset people from across the ideological spectrum.

Here is the important thing to remember when it comes to enforcing these rules. All content moderation is flawed and imperfect. It is an inherently impossible task. And I’d encourage you to read more about how massive the scope of this challenge is and how the company responded to major events like the Christchurch shooting.

Facebook’s billions of users make billions of posts every day in more than a hundred languages, and the company’s content moderators are asked to review more than 10 million of those posts per week. Think about that for a moment: how much time can a moderator possibly spend on each decision amid the sea of content they are asked to review?
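
To make the time pressure concrete, here is a back-of-the-envelope sketch. The 10-million-posts-per-week figure comes from above; the moderator headcount and work week are illustrative assumptions, not Facebook’s actual numbers.

```python
# Rough time budget per moderation decision.
posts_per_week = 10_000_000    # figure cited above
moderators = 15_000            # assumed headcount, order of magnitude only
hours_per_week = 40            # assumed full-time schedule

posts_per_moderator = posts_per_week / moderators              # ~667 posts
seconds_per_post = hours_per_week * 3600 / posts_per_moderator

print(f"{posts_per_moderator:.0f} posts per moderator per week")
print(f"~{seconds_per_post:.0f} seconds per decision")         # ~216 seconds
```

However you tune these assumptions, the budget per decision is a few minutes at best.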

As Kate Klonick, a law professor and expert on content moderation, noted to Motherboard, “This is the difference between having 100 million people and a few billion people on your platform. If you moderate posts 40 million times a day, the chance of one of those wrong decisions blowing up in your face is so much higher.”

There are two interrelated issues: what rules you set and how you enforce them. The former is a deliberate choice; the latter is an impossible task that can only be refined at the margins. Still, it is clear where Facebook stands on those questions. What is less clear is whether the new oversight board will embrace or resist this shift in values.

Indeed, only five of the initial board appointments come from the United States. The other 15 come from abroad. That is some cause for concern because even the strongest free expression advocates abroad tend to place a lower emphasis on free expression than their American counterparts.

In many discussions about the board, both Facebook staff and external activists have pressed the company on geographic representation. This priority is reflected in the bylaws and regularly taken as a given. But by emphasizing geographic diversity, the board risks shifting away from a uniquely American approach to free expression.

Take, for example, the 2005 episode in which Flemming Rose, an editor at the Danish newspaper Jyllands-Posten, set off a global controversy by commissioning cartoons of the prophet Muhammad. Around the world this sparked an intense debate, but less so in the United States. In the American context, publishing an offensive cartoon is not a particularly close call. After all, we are home to South Park, The Simpsons, and Family Guy.

That is not to say that international law and norms are silent on questions of free expression; rather, they are often overlooked. Professor Evelyn Aswad of the University of Oklahoma College of Law, a former director of the human rights office at the U.S. Department of State and a member of the Facebook oversight board, has argued persuasively that social media companies and governments (especially liberal democracies) need to live up to their promises and obligations under international law regarding freedom of expression. Prof. Aswad and her colleagues on the oversight board are in a position to hold Facebook to the standards laid out in international free expression law. In doing so, they can provide a model for countries around the world to follow.

Different companies, different models of moderation

It is easy — too easy — to spend all our time thinking about how a company like Facebook wrestles with these questions at scale. Doing so risks forgetting that once you get beyond Facebook and Google (including YouTube), the process of content moderation looks decidedly different.

Roughly speaking, there are three approaches: the big guys, everyone else, and the road not (yet) taken.

The big guys, Google and Facebook, spend tens of millions of dollars and hire tens of thousands of people to do this work. But that reflects their scale and is not a realistic option for platforms that are newer to the market.

Everyone else is left with a mix of automation, user flagging, and far smaller content moderation teams. Consider smaller platforms like Reddit and Medium, which do not have the financial resources to set up comparable systems. Instead, they have developed relatively light meta-rules to moderate content: Reddit defers to local moderation by users, while Medium has a robust internal system of review and appeal but not the staff to moderate at the scale of Facebook or Google.

Then there is the road not yet taken in content moderation, which would rely on protocols rather than platforms. Techdirt’s Mike Masnick laid out this proposal in a piece last summer for Columbia’s Knight Institute. The proposal calls for users, rather than companies, to set their own preferred content moderation policies, effectively customizing their experiences.

The idea has attracted some notice. Twitter has created a group called Bluesky, tasked with developing an open, decentralized standard for social media. In Jack Dorsey’s own words, “centralized enforcement of global policy to address abuse and misleading information is unlikely to scale over the long-term without placing far too much burden on people.”

As Masnick argues, pushing decision-making to the edge of the network helps decentralize risk as well as control:

“Moving to protocols, not platforms, is an approach for free speech in the twenty-first century. Rather than relying on a ‘marketplace of ideas’ within an individual platform — which can be hijacked by those with malicious intent — protocols could lead to a marketplace of ideals, where competition occurs to provide better services that minimize the impact of those with malicious intent, without cutting off their ability to speak entirely.”
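
To make the idea concrete, here is a minimal sketch of what a user-set moderation policy might look like under a protocol model. The labels, scores, and thresholds are entirely hypothetical; no real standard is being described.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationPolicy:
    """A filter the user chooses and the client applies; the platform
    (or protocol) just delivers content plus metadata."""
    blocked_labels: set = field(default_factory=set)  # e.g. {"harassment"}
    max_toxicity: float = 1.0                         # 0.0 = strictest

    def allows(self, post: dict) -> bool:
        if self.blocked_labels & set(post.get("labels", [])):
            return False
        return post.get("toxicity_score", 0.0) <= self.max_toxicity

# Two users see the same feed through different policies.
feed = [
    {"text": "cat photos", "labels": [], "toxicity_score": 0.05},
    {"text": "flame war", "labels": ["harassment"], "toxicity_score": 0.9},
]
strict = ModerationPolicy(blocked_labels={"harassment"}, max_toxicity=0.5)
permissive = ModerationPolicy()

print([p["text"] for p in feed if strict.allows(p)])      # ['cat photos']
print([p["text"] for p in feed if permissive.allows(p)])  # both posts
```

The point of the protocol approach is that the filtering decision moves from a central trust-and-safety team to code like this running on the user’s side.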

This brings us back to the question I raised earlier: How should we think about Facebook’s new oversight board?

In my view, the board is best understood as an important add-on to the systems outlined above. The charter and bylaws reflect deep thought about how to create structural independence while still relying on the company to handle aspects of moderation. Content moderation is not a perfect science; perfection is not possible, but systems can be improved. From that standpoint, Facebook should be commended for trying this experiment.

But will it work? Maybe

Ideally, the oversight board will create a meaningfully independent form of private governance: independent not just of Facebook but also of the host of outside lobbies that will inevitably begin to pressure it.

The board will hear cases in randomly drawn, anonymous panels of five, with at least one member coming from the geographic region of the case. Anonymity and privacy can enable freer speech and association, because governments often attempt to pressure those with whom they disagree. But how will these judges use their newfound freedom? It is worth noting that they have the power to change this system of anonymity in the future if they choose.
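
For illustration, here is a minimal sketch of how such a panel draw might work. The roster, field names, and selection details are hypothetical; the board’s actual procedure is set out in its bylaws.

```python
import random

def draw_panel(members, case_region, size=5):
    """Randomly draw a panel, guaranteeing at least one member
    from the region the case arises from."""
    regional = [m for m in members if m["region"] == case_region]
    others = [m for m in members if m["region"] != case_region]
    anchor = random.choice(regional)  # the guaranteed regional member
    pool = others + [m for m in regional if m is not anchor]
    return [anchor] + random.sample(pool, size - 1)

# A hypothetical 20-member roster spread across five regions.
members = [{"id": i, "region": r}
           for i, r in enumerate(["US", "EU", "Africa", "Asia", "LatAm"] * 4)]
panel = draw_panel(members, case_region="Asia")
print([(m["id"], m["region"]) for m in panel])
```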

The board cannot be immune to lobbying efforts by government and other special interests seeking to limit certain forms of speech. Now that their names have been announced, I am sure that researchers and activists will begin cheering and jeering the choices. Will the board members stand up to the inevitable pressure?

The board also now becomes self-perpetuating: a membership committee will pick the colleagues who round out the full slate of 40 judges and, over time, choose their successors. I will be interested to see whom the board picks as its colleagues. Those choices, too, will be telling.

And how will Facebook respond? It is encouraging to see the company’s leadership increasingly embrace the preeminence of free expression as a value. This board represents a novel experiment in private governance, and its development has clearly been a thoughtful process, with many opportunities for public feedback. But the real test is not crafting optimal bylaws; it is the public reception and legitimacy of the board’s decisions, within Facebook and outside of it. What happens when controversial cases come up? And, more importantly, to whom will public (and media) attention turn when a politician or constituency is upset by a decision? Facebook? The board?

At this point, it seems like there are simply more questions than answers. I do not know if this experiment will work but I am glad to see it tried. As the board is constituted, begins to hear cases, and picks its colleagues, I hope its members will take to heart the lessons that Facebook has clearly learned over the past several years. Free expression is valuable, especially when it is unpopular. A company in the business of connecting the world is well-served by having voice be its preeminent value.

Defending free expression is rarely popular, but it is the right thing to do.
