Mark Zuckerberg Still Decides What Users Can Say on Facebook

Facebook’s new oversight board can only go so far to save Facebook from itself and save our society from Facebook.

Joseph Thai
The Startup
4 min read · May 13, 2020


Facebook’s anemic responses to the pandemics of hate and misinformation on its platform have drawn criticism for being too little, too late, and too driven by its bottom line. To address such criticism, Facebook will soon launch an oversight board, which its co-chairs claimed in a recent New York Times op-ed will be “completely independent” of the company, with ultimate authority to decide “what content to take down or leave up.”

Don’t be fooled. Given Facebook’s speech policies and governance structure, Mark Zuckerberg unfortunately will retain the final word on what Facebook’s 2.45 billion users can or cannot say on its platform.

Credit: Wizard of Oz via Giphy

The key to demystifying the oversight board is a critical fact the co-chairs fail to mention in their op-ed. As Facebook states in its charter for the board, the sole “basis of decision-making” for the board is “to review content enforcement decisions” of Facebook moderators and “determine whether they were consistent with Facebook’s content policies and values,” not the board’s (italics added). Lest there be any doubt, the charter repeats: “The board will review and decide on content in accordance with Facebook’s content policies and values.”

In other words, the oversight board cannot overrule, revise, or disregard Facebook’s self-written speech policies, even if its members find them extremely ill-advised or harmful.

For example, Facebook’s speech code — what the company calls its Community Standards — expressly bans “hate organizations and their leaders and prominent members,” but not rank-and-file members. So while the board may affirm the banishment of a Grand Wizard of the KKK from Facebook, it cannot boot grassroots KKK members who use the platform to connect with like-minded white supremacists.

Furthermore, while Facebook’s speech code forbids threats of “high or mid-severity violence due to voting,” it does not ban voter intimidation via threats of “lower-severity violence.” So the board can uphold the removal of death threats over voting, but cannot order the removal of posts threatening voters with less severe violence. The one saving grace is that the company never defines what counts as “high or mid-severity violence,” so the board might save Facebook from its own folly by stretching to interpret any physical threat as qualifying.

Still, the oversight board can only go so far to save Facebook from itself or — more to the point — save our society from Facebook.

Most notably, Facebook categorically exempts politicians from its speech code and fact-checking policies. Hence, on the one hand, the board can rule that posts by ordinary users calling Mexicans “rapists” or African nations “shithole countries” warrant removal for violating Facebook’s hate speech policies. Similarly, the board can rule that ordinary users’ posts describing the COVID-19 pandemic as a hoax or promoting the injection of Clorox as a cure deserve fact-checking. However, the board lacks power to revoke the license Facebook has given politicians to freely spread virulent hate and hazardous falsehoods on its platform.

To be sure, there is room for debate over Facebook’s speech policies, including its special treatment of politicians. For instance, Facebook has defended its special treatment of politicians on the ground that it should not “prevent a politician’s speech from reaching its audience and being subject to public debate.” On the other hand, as I have argued, Facebook deepens partisan divides rather than broadens public debate. The company leverages its deep user knowledge and machine learning to migrate users into skewed content bubbles where they get fed personalized news and “alternative facts” that pleasingly reinforce their own views. The board certainly has no say over the algorithms that Facebook employs to keep users in what a former vice president for growth described as “dopamine-driven feedback loops.”

Moreover, whether or not the board agrees with Facebook’s speech policies, it is obliged to follow them. And while the board may recommend that the company make changes to its policies, Facebook makes clear in the board’s charter that such recommendations are only “advisory.”

Even worse, while Facebook has committed to implementing the board’s decisions regarding particular content disputes, nothing stops the company from revising its policies for the future if it does not like how the board has interpreted or applied them in the present.

The oversight board’s independence is compromised in other serious respects. First, to populate the initial board, Facebook itself hand-picked the board’s co-chairs, and those co-chairs and Facebook will jointly fill the remaining board seats. Second, the board may not change its charter — say, to gain greater independence — without Facebook’s agreement.

Finally, as the company’s CEO, chair of the board of directors, and controlling shareholder, Mr. Zuckerberg still has the last word on the company’s policies. He thus retains ultimate censorial power over what nearly a third of humanity can or cannot say on Facebook.

So, notwithstanding the hype over Facebook’s new oversight board, Mr. Zuckerberg remains the wizard behind the curtain.


Watson Centennial Chair and Presidential Professor, University of Oklahoma College of Law. https://www.law.ou.edu/directory/joseph-thai