Facebook CEO Mark Zuckerberg Testifies At House Hearing. Photo by Chip Somodevilla/Getty Images

Facebook’s History of Prioritizing Profits over Privacy & Safety

Gretchen Peters
Alliance to Counter Crime Online
Sep 14, 2020


At the Alliance to Counter Crime Online, our 40 members track all sorts of toxic and illicit content found on social media, from Mexican drug cartel activity and child sex trafficking to the illicit trade in endangered cheetahs. The majority of it can be found on Facebook platforms.

Our global team of researchers meets virtually to compare findings, and over time we began noticing certain patterns in how Facebook responds to inquiries about toxic content on its platforms.

We began tracing how Mark Zuckerberg has responded to the dangers his products cause, dating back to the firm’s founding in 2004. What we found is that his playbook hasn’t budged. The Facebook founder built a corporate culture bent on growth from the start, with a perilous disregard for user safety and privacy.

Today, ACCO is releasing an interactive timeline that brings together the relevant history. It’s a critical read for regulators, lawmakers, advertisers and users of Facebook products. We identified three core patterns:

1. Zuckerberg and other Facebook executives have claimed they were too “idealistic” to imagine their platform would produce negative outcomes. That is inaccurate. There’s documented evidence that the Facebook founder was aware of key privacy and safety problems from the very launch of his platform, and just didn’t care. Our timeline features leaked chats in which he even offered to share student passwords and other personal data with friends.

Furthermore, Zuckerberg responded to security problems only when they were exposed in the media, not when he was first made aware of them. In 2005, for example, he received an email from another student warning him about a security flaw that put personal data at risk at 435 colleges. Zuckerberg responded immediately to the email but didn’t bother to fix the code until a month later, after the student issued a press release about the flaw and college newspapers reported on it.

This pattern repeated itself during the Cambridge Analytica scandal. The Securities and Exchange Commission case revealed that Facebook first became aware in 2015 that Cambridge Analytica had harvested the data of 50 million Facebook users without their permission. Yet for years Facebook did virtually nothing to remediate the situation, until The Guardian and The New York Times jointly exposed the story in March 2018.

2. Facebook regularly puts out misleading data to boost profits. In the most egregious example, a lawsuit in a California district court has exposed how Facebook inflated its “potential reach” figures in key markets to numbers higher than the actual population of those markets. Facebook’s attorneys have made the comical defense that the term “potential” should absolve the firm of any liability. Discovery revealed that senior executives, including COO Sheryl Sandberg, worked to cover up this problem, with one Facebook team member asking in a 2018 message, “how long can we get away with the reach overestimation?”

3. Facebook executives deploy the same three responses whenever they are questioned about toxic content. The first is what we call the 99% Myth. When confronted with questions about terror, drug or even child sex abuse content, Facebook executives rely on a carefully worded formula: “Our AI finds 99% of the [insert-toxic-content-type] we remove from our platform.” The firm has parroted this line about terrorism, drugs and child sex abuse content in Congress, in SEC filings, on investor calls and to the media. The 99% figure is often read as the overall rate of removal, but it is a fundamentally meaningless statement. I could say that 99% of the teeth I floss don’t have cavities and still have a mouth full of rotten teeth. Moreover, our research, along with testimony we have received from former Facebook employees, indicates the firm’s overall removal rate is no more than 30% at the best of times (a short numeric sketch below shows how the two statistics diverge). We’ve also tracked how Facebook has lobbied behind the scenes to limit industry-led efforts to reduce the spread of drug and child sex abuse content.

A second response is the promise to hire more moderators. Human moderators have waded through the worst of humanity since Facebook’s earliest days, all while being treated like second-class citizens, with pay that is a fraction of what engineers earn and far-from-equal benefits. Yet even though Facebook has greatly expanded its moderation workforce since 2018, there are still precious few moderators relative to the number of users logging onto its platforms (even if the real number of users is lower than the firm claims). To put the ratio in perspective: most developed countries employ about three security providers per thousand people, while Facebook employs about one moderator for every 90,000 people on its platforms, roughly 270 times fewer per capita. Most of those moderators have scant training or experience in countering organized crime and terrorism.

Lastly, there’s what we call the “Facebook pivot.” Any time the firm’s executives are grilled about toxic content, they promise to do better, then quickly shift the conversation to wax poetic about the many benefits social media brings to society, usually inserting a colorful anecdote about a single mom who started a business or a social justice movement that went viral. Those positive outcomes are real, but they don’t absolve the firm of responsibility for the harm its products also cause.
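To make the 99% Myth concrete, here is a minimal sketch with purely hypothetical numbers (they are illustrative, not Facebook’s own figures). It shows how a platform can truthfully say its AI found 99% of what it removed while still leaving most toxic content online.

```python
# Hypothetical numbers, purely for illustration; not Facebook data.
total_toxic_posts = 1_000_000  # toxic posts actually on the platform
posts_removed = 300_000        # posts the firm takes down (a 30% overall removal rate)
removed_by_ai = 297_000        # removals flagged by AI rather than by user reports

ai_share_of_removals = removed_by_ai / posts_removed      # 0.99 -> "AI finds 99% of what we remove"
overall_removal_rate = posts_removed / total_toxic_posts  # 0.30 -> 70% of toxic posts stay up

print(f"AI share of removals: {ai_share_of_removals:.0%}")  # 99%
print(f"Overall removal rate: {overall_removal_rate:.0%}")  # 30%
```

In other words, the statistic Facebook cites describes how its removals were detected, not how much toxic content was actually removed.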

The cost of Facebook’s irresponsible corporate practices is always borne by its users. For us at ACCO, the most frustrating part of all this is how easy it would be to fix many of these problems. The fixes would no doubt reduce Facebook’s growth and profitability, but the tools exist to greatly reduce toxic content, if the firm’s leadership ever resolves to value public safety over profits.

Gretchen Peters is executive director of the Alliance to Counter Crime Online, which brings together more than 40 academics, NGOs and citizen investigators determined to eliminate crime on social media.
