The Real Takeaway from Zuckerberg’s “Security Manifesto”

Justin Sherman · Published in The Startup · Sep 16, 2018

On September 12, Facebook CEO Mark Zuckerberg published this article on his own platform, declaring that he’s “writing a series of notes” outlining how he thinks about “the most important issues facing Facebook — including defending against election interference, better protecting our community from abuse, and making sure people have more control of their information.”

It was titled “Preparing for Elections,” and shortly after its publication, TechCrunch ran what I can only describe as a puff piece on the “epic security manifesto,” taking Zuckerberg’s intuitions, explanations, and projections at face value — for instance, boldly asserting that Zuckerberg is “ready for war” against election interference. But that’s not what you should take away from reading Zuckerberg’s article.

Instead, the real takeaway is that the platform is, in effect, fighting itself. Social media companies like Facebook make their money through technology that predicts and manipulates human behavior, which means that weakening this technology runs counter, in many ways, to their own incentive structures: the company must diminish this capability for malicious third-party actors while still enhancing it for itself.

As The Guardian’s Olivia Solon wrote back in 2016, “Facebook’s business model relies on people clicking content regardless of veracity, and preventing any of that sharing interferes with core user behavior.” Phrased another way: Confirmation bias is not a Facebook problem; it’s a human problem. The populations most susceptible to fake news are likely susceptible precisely because it confirms what they already believe. But when your money comes from keeping people staring at your platform, as Facebook’s does, there aren’t exactly glaring incentives to fight that confirmation bias, either. (Zuckerberg concedes as much in the article when he discusses adapting advertising policies, but not fundamentally changing how those ads are displayed.)

Josh Constine wrote something similar (for TechCrunch, actually) a year later, when commenting on Facebook’s then-announced plans to curb election interference: “Scale can’t be an excuse. Programmatic ad buying that doesn’t go through human sales people is what’s allowed Facebook to grow so large and profitable. Those profits must be reinvested into both human and algorithmic safeguards against abuse.”

When your entire organization is built on collecting and analyzing individuals’ data, and then using those analyses to maximize each user’s engagement, there is certainly a cost-benefit calculation at play when fake news, trolling, and hate speech permeate your platform’s discourse: Is the harm so bad that it’s worth reducing the efficacy of your money-making algorithms? As Zuckerberg himself put it,

These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make.

Mass social engineering enabled through online platforms is a near-existential, if not existential, threat to democracy and free, independent discourse. This issue certainly isn’t new to many of the world’s nations, whose attempts at legitimate elections have been manipulated by the world’s powers for decades, even centuries. However, this issue is receiving more attention than ever on the corporate side — as multinational conglomerates like Facebook and Google are forced to confront the questions of truth that arise on and through their platforms.

To be clear, I’m not saying Facebook isn’t trying, and I’m not saying Zuckerberg is full of it, either. Within the article, he articulates clear steps the organization has taken, such as removing fake accounts, using nonpartisan fact-checkers, and hiring more security professionals. (Although, I would note, claims of “making sure people have more control of their information” are likely bogus without strong regulation like GDPR; companies almost never prioritize data privacy absent government pressure.)

My point is simple: Systems that predict and manipulate human behavior are essential to social media platforms and modern advertising practices. If those very systems amplify harm even as they generate unfathomable profit, claims that a company will self-regulate should, at the very least, be questioned, if not dismissed as patently false. When democracy and free, legitimate discourse hang in the balance, it’s not worth taking the chance.
