Facebook’s Privacy Shift Will Make It Harder To Stop Fake News

Pressland Editors · Published in News-to-Table · 6 min read · Mar 13, 2019

Mark Zuckerberg’s shift looks good for advertisers, but may simply put misinformation behind closed, locked doors.

“Closer networks condense more falsehoods as they help to hide their authors.”

By Tyler Kingkade

Mark Zuckerberg set the technology world spinning last week when he announced that Facebook — the social networking giant that groomed us to share our lives online — would shift toward private, encrypted communication. People tried to figure out whether Facebook was simply responding to growth by competitors, trying to stay relevant to young people, merging its data with the company’s other apps or fueling an e-commerce play.

To experts who study the spread of misinformation, however, there was a more cynical reason: to short-circuit the fury over Facebook’s fake news problem.

“End-to-end encryption is good for privacy, but it means even Facebook can’t see what the messages are,” Melissa Tully, a University of Iowa professor who researches and teaches about misinformation, told News-to-Table.

If Facebook users shift to private communication — whether it be group chats, direct messaging or closed groups — it’ll be harder to know when misinformation is circulating, how far it has spread, or where it originated. And if journalists and researchers cannot gather data on the spread of fake news across the platform, there will be less leverage to hold Facebook accountable.

“With the resources we have right now, it would be almost impossible to tackle misinformation inside this kind of platforms or even to track impact of our fact-checking activities,” Tai Nalon, the executive director of Aos Fatos, a Brazilian fact-checking organization, told me in an email.

“Closer networks condense more falsehoods as they help to hide their authors,” Nalon added. “Smaller and more closed groups turn the debate less plural.”

Tully thinks that, with the fake news out of sight, Facebook can claim it is seeing less of it circulate on its platform, which would allow the company to say, “Guess what, advertisers: you can be happy and give us your money!”

Even more troubling: If Americans follow Facebook’s shift toward peer-to-peer communication, we may find ourselves facing disastrous outcomes such as those seen in India and Myanmar, where private messaging apps helped the spread of hate speech and false news — contributing to violent and deadly conflicts.

“We only learn about problems when they have some consequences, like people killing people,” said Harsh Taneja, a University of Illinois professor who studies audience behavior.

Facebook has been pounded over the dissemination of false news articles, conspiracy theories and hoaxes over the past few years. The scrutiny largely took off in 2016, when journalists flagged how far fake political news had spread on the platform, and documented what appeared to be fraudulent accounts seeking to rile up partisans. Revelations throughout 2018 that Facebook exposed personal data of millions of users added to widespread alarm about the company’s practices. Lawmakers and government officials around the world have since suggested regulating the social media giant due to several concerns, including both the dissemination of misinformation and how it handles personal data.

Zuckerberg didn’t unveil a specific product last week. Instead, he simply stated in his blog post that it would be something like WhatsApp, an end-to-end encrypted messaging app owned by Facebook. That’s precisely why several experts I spoke with were concerned about the Facebook CEO’s announcement. As Taneja put it: “They will then be able to begin to use the line of defense they have been using for [fake news on] WhatsApp; this is peer-to-peer, this is private, so we cannot do anything about it.”

WhatsApp has reportedly been deleting two million accounts each month to stop misinformation on its platform in India, after the government there repeatedly condemned the app for being an “abettor” in malicious hoaxes that fueled a series of mob lynchings. In an interview last week with Wired, Zuckerberg said Facebook would consult with experts over the next year to get the “detail and nuances of these safety systems right” before introducing a new product focused on encrypted communication. He also acknowledged there could still be problems in the end, though he never mentioned fake news or disinformation.

“Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things,” Zuckerberg wrote. “When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. … We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work. But we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.”

Some believe it’s possible for Facebook to spot and stop the spread of fake news without breaking encryption.

According to Kimon Drakopoulos, a data sciences professor at the University of Southern California, Facebook could trace the origin of fake news without reading users’ private messages. If a fake news article starts to take off on Facebook publicly, the company could identify the first users to post it, and then look at which users they interacted with leading up to that post. “Once they see a pattern of connections, they can easily identify the instigators,” Drakopoulos said.
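Drakopoulos described the approach only in broad strokes, but a toy sketch helps make it concrete. Everything below — the data model, the two-hour window, the scoring rule — is an assumption made for illustration, not anything Facebook has described: find the earliest accounts to post a given article publicly, look at who interacted with those accounts just before they posted, and flag accounts that show up repeatedly. The signal lives entirely in metadata about who contacted whom and when, never in message content.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical data model: public shares of an article and a log of
# user-to-user interactions, each with a timestamp. A real system would
# draw on Facebook's internal graph data, which is not public.
public_shares = [
    # (user, article_url, time_posted)
    ("alice", "http://example.com/fake-story", datetime(2019, 3, 1, 9, 0)),
    ("bob",   "http://example.com/fake-story", datetime(2019, 3, 1, 9, 5)),
    ("carol", "http://example.com/fake-story", datetime(2019, 3, 2, 14, 0)),
]
interactions = [
    # (source_user, target_user, time) -- e.g. a message, comment, or tag
    ("mallory", "alice", datetime(2019, 3, 1, 8, 30)),
    ("mallory", "bob",   datetime(2019, 3, 1, 8, 45)),
    ("dave",    "carol", datetime(2019, 3, 2, 13, 0)),
]

def likely_instigators(url, earliest_n=2, window=timedelta(hours=2)):
    """Rank accounts that repeatedly interacted with the earliest public
    posters of `url` shortly before those posts went up."""
    shares = sorted((s for s in public_shares if s[1] == url), key=lambda s: s[2])
    early_posters = shares[:earliest_n]
    counts = Counter()
    for poster, _, post_time in early_posters:
        for src, dst, t in interactions:
            if dst == poster and post_time - window <= t <= post_time:
                counts[src] += 1
    return counts.most_common()

print(likely_instigators("http://example.com/fake-story"))
# [('mallory', 2)] -- an account that contacted several early posters
# just before they shared the article stands out as a candidate origin.
```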

Writing for Columbia Journalism Review, Taneja and his colleague Himanshu Gupta argued that WhatsApp could create a business account of its own to serve as a “fake news moderator.” Users could report suspect forwards, images or videos simply by forwarding them to the moderator account. Under this plan, WhatsApp could then filter for those files or text strings and block them from being forwarded any further.
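Their proposal is a product design rather than code, but the blocking step could work roughly like the sketch below. The fingerprinting scheme, function names and on-device check are my assumptions, not details from the CJR piece: content forwarded to the hypothetical moderator account is fingerprinted, fact-checkers mark confirmed hoaxes, and the client refuses to forward anything whose fingerprint is on the blocklist. A real deployment would need perceptual hashing so re-encoded images still match, and the check would have to run on the user’s device to preserve end-to-end encryption.

```python
import hashlib

# Hypothetical blocklist of fingerprints of confirmed hoax content.
confirmed_hoaxes = set()

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; a production system would likely use a
    perceptual hash so edited copies of an image still match."""
    return hashlib.sha256(content).hexdigest()

def report_to_moderator(content: bytes, confirmed_fake: bool) -> None:
    """A user forwards suspect content to the moderator account; if human
    fact-checkers confirm it is false, its fingerprint is blocklisted."""
    if confirmed_fake:
        confirmed_hoaxes.add(fingerprint(content))

def can_forward(content: bytes) -> bool:
    """Client-side check before forwarding: refuse known hoaxes. Because
    only a fingerprint is compared, the server never needs to read the
    message body itself."""
    return fingerprint(content) not in confirmed_hoaxes

hoax = b"BREAKING: miracle cure, forward to everyone you know!"
report_to_moderator(hoax, confirmed_fake=True)
print(can_forward(hoax))             # False -- blocked from forwarding
print(can_forward(b"Dinner at 8?"))  # True  -- ordinary messages pass
```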

But Shyam Sundar, co-director of the Media Effects Research Laboratory at Penn State University, argued that some onus needs to be put on news consumers as well.

“They will need to realize that while their friend or relative may mean well when they forward news and public-affairs information, but they are not trained in journalistic craft to be able to do so competently,” Sundar told me in an email. “When people encounter news on their social networks, they need to pause and wonder if the person who disseminated the information knows enough to vet facts and verify them by double-checking with independent sources.”

There are far fewer studies looking at how information spreads on messaging apps, noted Emily Vraga, a political communications professor at George Mason University, so it’s unclear how Americans would behave on a more private version of Facebook.

It’s just as plausible that people would feel more comfortable pointing out incorrect information in a private conversation as it is that they would be more inclined to believe fake news shared by someone they know personally and trust. An increase in communication through private messaging could also amplify the echo chamber effect by making it less likely that people are exposed to viewpoints different from their own.

“Now there’s a chance we’re just going to be circulating this content within a closed group of people,” Tully said. “That is a breeding ground of disinformation spreading and taking hold.”

Tyler Kingkade, formerly of BuzzFeed News, is a freelance journalist living in Brooklyn. Follow him on Twitter @tylerkingkade

Production Details
V. 1.1.0
Last edited: March 13, 2019
Author: Tyler Kingkade
Editors: Alexander Zaitchik, Jeff Koyen
Artwork: Photo by Glen Carrie on Unsplash


Pressland Editors
News-to-Table

Mapping the global media supply chain in the public interest.