Facebook’s real political nightmare: Deepfakes

Video propaganda is a threat they can’t dismiss as “free speech”

Vanessa Camones
The Startup
3 min read · Oct 21, 2019

Deepfake videos aren’t only in the movies anymore

Minority Report saw it coming: a future where you can’t believe what you see.

Facebook is in the hot seat again, this time over political campaign ads filled with outright lies. But whether the company bans all campaign ads (as Josh at TechCrunch argues) or keeps them in the name of democracy (as Zuckerberg counter-argues), even the most brazen lie from an official campaign will soon seem old-fashioned. The real opportunity to manipulate people with misinformation is in fake videos, the deepfakes that keep getting harder to tell from an undoctored shot.

Marketers know that video grabs viewers’ attention far more effectively than text or a still image; every real-world test confirms our experience that videos are several times more effective than still images at attracting new customers, bringing in buyers, or simply getting people to click. (Pro tip: square videos do better than widescreen clips on social.)

Political ads aren’t a moneymaker for social networks. In 2016, dark-money PACs were outspent on advertising by Apple. It’s possible that Facebook and the other social giants would spend more than the ads are worth trying to police them, while every mistake in labeling a claim true or false brings another black eye. It’s not crazy to suggest banning them all instead.

But campaign ads aren’t the real threat to truth. Viral videos have proven that a plain rumor or lie has nothing on a cellphone clip for spreading a story and igniting emotions. If honesty isn’t your policy, that’s great news for you. Even 25 years ago, TV news producers were willing to rig videos to get their point across. Today we’re crossing the threshold where anyone can fake a news event without leaving their keyboard.

For all the potential abuses of deepfake video — reputation-wrecking, false evidence in court, or just selling more energy drinks — political propaganda is by far the most dangerous threat. You don’t need to be a psychologist to know it, but here’s the American Psychological Association anyway: “Attack ads work.” The best way to motivate people on an issue, say the APA’s experts, is to stir them to anger or fear. We now have broadband for that in everyone’s pocket. Why spend money touting your candidate when you can terrify and infuriate millions into leaning your way with a viral video? Unlike with a campaign ad, you don’t have to take the credit.

We can’t leave this to Facebook and Google to fix. We need to pay attention. Deepfake technology is accelerating, the cost is falling, and the most advanced AI, guided by deepfake experts at Facebook and Google, still does a terrible job of identifying fakes. Human eyes attached to sharp minds are still the best detectors, but human minds thrive on confirmation bias.

A deepfake doesn’t need to be utterly foolproof to work, as proven by the number of people who explode over blog posts marked “satire” and topped with a deliberately fake-looking Photoshop job. We are all hardwired, you included, so that when you see something that affirms what you believe or fear, your brain doesn’t pause to think, “I should check whether this is real.” It screams, “I knew it!”

Politicians lie, campaigns lie, and that will never change. What’s changing is propagandists’ growing power to push our buttons. Even more than campaign claims, viral videos hold both a promise and a threat that we don’t yet know how to handle. The research at Facebook and elsewhere needs to stay a priority. Everyone who makes money routing video fake-facts into a “News Feed” is surely aware that their ethical obligations can’t be dodged by tagging deepfake videos as free speech.

Vanessa Camones

founder & ceo of marketing consulting firm @anycontext and @theMIXagency. Board Member of @BoardSeatMeet @InPlay. #latinatechrealness #LA #SF #PDX