Children’s Health Defense v. Facebook Seeks to Outlaw Public-Private Online Safety Efforts

Elizabeth Banker
Chamber of Progress
6 min read · Dec 23, 2021

Government officials pressure online services over their content policies every day. That pressure reflects public opinion; polling shows two-thirds of adults want to see stronger content enforcement on social networks.

But right now, the relationship between government and social media is at the center of a case pending before the Ninth Circuit — one that could stop platforms from making their sites safer, even when there’s a public appetite and consumer demand to do so.

The appeal pending before the Ninth Circuit Court of Appeals comes from a lawsuit brought by the anti-vaccine organization Children’s Health Defense (“CHD”) against Facebook. If CHD has its way, any time governmental pressure and a provider’s action align, that action will become “state action” subject to First Amendment restrictions.

In other words, private companies would be bound by the same rules as the U.S. government when moderating online content, substantially limiting the steps providers could take to make their platforms safer. The result would be a dramatic increase in the amount of harmful and abusive content online.

Case Background

After Facebook took steps like labeling and demonetizing vaccine misinformation on CHD’s page, CHD sued, claiming that Facebook’s actions violated its First Amendment rights. The Ninth Circuit previously rejected this type of argument in Prager University v. Google.

CHD’s complaint attempts to avoid the same fate by adding allegations that Facebook’s content moderation actions qualify as state action rather than private action. CHD argues that because government officials encouraged Facebook to take more aggressive action to enforce its terms of service provisions on health misinformation, those actions should be treated as though they were performed by the government itself.

The district court rejected this argument and granted Facebook’s motion to dismiss. CHD appealed to the Ninth Circuit.

CHD Attempts to Make Content Moderation State Action

CHD is asking the Ninth Circuit to reverse the district court’s decision and allow its lawsuit to move forward. But CHD faces an uphill battle to establish that Facebook’s actions should be treated as state action.

CHD relies, in part, on the argument that Facebook’s actions are “joint action” with Representative Adam Schiff, the Centers for Disease Control, and the World Health Organization. To convert a private actor like Facebook into a state actor based on joint action, CHD must show a “substantial degree of [government] cooperation,” such that the federal actors are “responsible for the specific conduct of which the plaintiff complains.”[1]

According to CHD, the following are evidence of joint action:

  • Statements by the CDC that it “partners” with social media companies to “contain the spread of misinformation.”
  • Statements by Facebook that it works with the CDC and health experts to remove health misinformation.
  • Allegations that White House personnel flagged specific posts or accounts.
  • The fact that some of Facebook’s actions against CHD came after Facebook received a letter from Congressman Schiff.
  • The fact that Facebook began directing users to CDC information after forming a “partnership” with Dr. Fauci.

Facebook argues that there are no allegations that these activities resulted in the specific actions against CHD and that regulatory interests are not a sufficient basis for converting private decisions into government decisions.

Online Services Make Independent Decisions

Online services are under significant pressure from all directions, including from governments, to improve or change their content moderation practices. There is no evidence that the services have succumbed to that pressure and given up making independent decisions about what is best for their services, their users, and their businesses. In fact, the evidence suggests the opposite. For example, former President Trump, while in office, met with Twitter CEO Jack Dorsey to complain that he had lost followers after Twitter purged fake accounts across the platform. Twitter defended its enforcement actions.[2]

Provider transparency reports confirm this independence. Social media companies publish data showing the percentage of content removed in response to reports from U.S. government entities or officials.

Percentage of government notices resulting in action (U.S.), by company:

  • Meta (f.k.a. Facebook): 10% [3]
  • Twitter: 44% [4]
  • YouTube: 5% [5]
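As a rough illustration of how a rate like Meta’s 10% figure is derived, here is a minimal Python sketch using the counts described in endnote [3]; the variable names and the treatment of reinstated items as non-actioned are illustrative assumptions, not fields from any company’s actual transparency report.

    # Illustrative calculation of an "action rate" for government notices,
    # using the counts described in endnote [3]. Variable names are assumed,
    # not taken from any company's report schema.
    removals_that_stood = 3        # notices whose removals remained in place
    actioned_then_reinstated = 27  # notices actioned at first, later reinstated

    total_notices = removals_that_stood + actioned_then_reinstated
    action_rate = removals_that_stood / total_notices

    print(f"Action rate: {action_rate:.0%}")  # prints "Action rate: 10%"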

The best evidence of the independence of Facebook’s content moderation decisions is found in CHD’s complaint. It is riddled with statements about Facebook exercising “viewpoint discrimination” and acting in its own self-interest. These claims cannot be reconciled with an argument that Facebook was somehow coerced into taking an action it otherwise would not have. The more accurate characterization is that Facebook took actions based on business considerations, which sometimes aligned with government officials’ desires and other times did not.

For example, CHD’s original complaint alleges that CHD’s views were censored because they compete with Facebook’s business plans for pharmaceutical ad revenue, vaccine development, and 5G and wireless networks[6]; that Rep. Schiff provided Facebook with “cover” for its own ulterior business motives[7]; and that Facebook and Zuckerberg “purport to ‘arbitrate the truth’ of open scientific controversies when doing so advances their business interests.”[8]

Public-Private Partnerships are Critical to Online Safety

The essence of CHD’s claims is that a private company met with government officials and non-governmental organizations to discuss a problem, exchange information, and pledge “cooperation.” If this is enough to convert content moderation into state action, it would impair the ability of online services to continue to:

  • Remove accounts and posts suspected of selling or promoting illegal drugs, and direct users to information from the FDA, DEA, and other reliable sources on the legality of buying drugs online, availability of local drug take back days, and addiction recovery resources.
  • Meet with election officials, promote reliable information on how to vote, and remove “text to vote” and other voting misinformation.
  • Meet with consumer protection regulators to learn about how to spot the newest online scams and to update algorithms to detect scams based on that information, such as the newest scams targeting student loan payments.
  • Discuss foreign disinformation campaigns with intelligence officials and obtain information that can assist in detecting state-sponsored activity.

Under CHD’s theory, any time a private service moderates content in these areas and that content is entitled to First Amendment protection, the provider could be sued.

There are numerous examples of public-private partnerships to address conduct where governments and private actors align on the need for further action — consider the FDA’s Online Opioid Summits, the Five Countries Voluntary Principles to Counter Child Sexual Exploitation and Abuse, and the Christchurch Call.

These types of partnerships allow the sharing of important intelligence and threat information and foster a better understanding of government and company roles. They may result in company pledges of cooperation, promises to do better, and assistance outside of terms of service enforcement (such as donations of advertising space to deliver public service messages).

What these cooperative efforts don’t do is co-opt private companies’ specific content moderation decisions. This is why the Ninth Circuit is likely to reject CHD’s argument and reaffirm the important First Amendment rights of platforms to make their services safer.

Endnotes:

[1] Facebook Motion to Dismiss, p. 7 (citations omitted).

[2] Gov. DeSantis specifically pointed to this action as proof of why Florida’s social media regulation legislation, SB 7072, was needed. The Florida Legislature passed SB 7072 in spring 2021. Industry challenged the constitutionality of the law, and a preliminary injunction was issued to prevent it from taking effect. The state is now appealing the decision granting the injunction.

[3] In the first half of 2020, Meta reported three notices resulting in three removals for Instagram and Facebook. Meta later updated the report to note that 27 additional notices of illegal activity were received; that content was initially actioned but reinstated after further review against the applicable terms of service.

[4] Twitter noted that this figure represents a 75% drop in its rate of removals under the Twitter TOS for government reported content.

[5] This figure is based on the number of items for which YouTube received government notices and the number of items YouTube took action on for policy reasons (as opposed to legal reasons). It does not include the reported category “third party court orders.”

[6] CHD Complaint, p. 4.

[7] Id. p. 1.

[8] Id., pp. 79–80.

The Chamber of Progress (progresschamber.org) is a new center-left tech industry policy coalition promoting technology’s progressive future. We work to ensure that all Americans benefit from technological leaps, and that the tech industry operates responsibly and fairly.

Our work is supported by our corporate partners, but our partners do not sit on our board of directors and do not have a vote on or veto over our positions. We do not speak for individual partner companies and remain true to our stated principles even when our partners disagree.


Elizabeth Banker is VP, Legal Advocacy at Chamber of Progress and an adjunct law professor at UC Hastings (formerly Georgetown). Her work focuses on the intersection of law and policy. She previously worked at Twitter and Yahoo!.