Can We Trust Facebook to Keep Our “Digital Living Rooms” Safe From Liars, Racists, and Haters?
By Anamitra Deb, Beneficial Tech Lead, Omidyar Network
Yet again, Facebook wants to have its cake and eat it too. Last week, Mark Zuckerberg announced plans to build a new offering focused on private conversations. The still-to-be-created product, which would combine the billions of users across WhatsApp, Messenger, and Instagram, is billed as the future of communication: private, encrypted, secure.
He laid out several principles to guide this pivot, including private interactions, ensuring that communications sent and received are directly under the user's control, and encryption, which helps fulfill users' desire that private communications be seen only by the people to whom they're sent: not hackers, criminals, governments, or even the platforms themselves.
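To make that encryption principle concrete: in an end-to-end scheme, a message is encrypted on the sender's device so that only the intended recipient's key can unlock it, and the relaying platform carries only ciphertext. Here is a minimal sketch of that idea using the open-source PyNaCl library; this is our illustrative stand-in, not Facebook's actual design (WhatsApp uses the more elaborate Signal protocol):

```python
# Minimal end-to-end encryption sketch (pip install pynacl).
# Illustrative only; WhatsApp's real implementation is the Signal protocol.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device. Private keys never
# leave the device; the platform only ever sees the public halves.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sender_box = Box(alice, bob.public_key)
ciphertext = sender_box.encrypt(b"Are we still on for noon?")

# The platform relays `ciphertext` but cannot read it: without Bob's
# (or Alice's) private key, decryption is infeasible.
receiver_box = Box(bob, alice.public_key)
assert receiver_box.decrypt(ciphertext) == b"Are we still on for noon?"
```

This is precisely the property at the heart of the argument that follows: once messages look like this in transit, the platform's own moderation systems are as blind to their content as any outside observer.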
On the face of it, there seems to be much to like, not least because many other offerings, from Signal to Telegram and, of course, WhatsApp, already incorporate many of these features. And, following a failed acquisition attempt, the plan also borrows Snapchat's ephemerality: users' digital footprints disappear over time.
But we’re skeptical. For us, this announcement raises far more questions than it answers. Among them:
- How does Facebook’s access to and use of data and metadata on this platform work?
- Can Facebook, with its disastrous track record of lying about privacy and user data, really be trusted on a set of privacy and security concerns it promises to address in the future?
- How will the business model of this new offering, which stacks services from phone calls and video chats to payments and e-services on top of the messaging platforms, actually work? Is it basically an attempt to become a global WeChat-Snap combine, scaling the payments model Facebook is seeking to pilot in India?
- By combining the three messaging platforms, which have a total user base of nearly four billion, is Facebook using privacy to legitimize dominant, monopolistic behavior?
- And finally — in the wake of a convenient realization about what users want — why were major reforms to Zuckerberg’s flagship product, itself the epitome of harmful surveillance capitalism and disrespect for users’ data, so notably missing from his announcement?
For what it’s worth, this isn’t even a comprehensive list of concerns. Zuckerberg himself admits that there are tradeoffs to the encrypted, private communications game, noting that there is the very real potential for “truly terrible things” such as child exploitation, terrorism, or extortion. He says, “we will never find all of the potential harm we do today when our security systems can see the messages themselves.” Given Facebook’s grievous historical track record on this point, even on the open platforms where they can see all the content, this should give us all serious pause. What will happen when no one, including the platform itself, can see the private messages that are sent and received?
The sad truth is that we don't need to wait for the future to see how this might play out: this shift to private, encrypted communication has already taken place, at scale. The announcement glosses over a fundamental feature of the reality already pervasive in Indonesia and Brazil, in Nigeria and Myanmar, and in many other countries: rampant misinformation (intended to manipulate the public, for example, on vaccines), organized disinformation (intended to sway critical elections in Nigeria, India, and Brazil), and the perpetuation of dangerous speech (and the incitement of violence) that has led to severe offline consequences. A "digital living room," to borrow Zuckerberg's own phrase, that is filled with haters, liars, racists, and organized commissioners of violence should hardly be the social media vision we all accept. It's far from the promised land; it's a quasi-public space where covert propaganda and dangerous speech abound, with no way to monitor them or do anything about them.
That this issue isn't yet salient in the minds of the tech sector and domestic policymakers is only because the implications have mainly been limited to the global south. That will almost certainly change if more global communication becomes private and encrypted. A cynic might even say that Facebook, knowing what it knows about the problems WhatsApp has created in countries like Brazil, is engaging in masterly misdirection. It is washing its hands of the disinformation and dangerous speech problem on open platforms by shifting large amounts of user communication to private, encrypted messaging services, where these problems are much harder to tackle and where social or technological consensus about which solutions to pursue is missing.
At Omidyar Network, we believe that in order to keep our “digital living rooms” safe, we must get smart on these issues — and we must do so right now. We have a chance to make WhatsApp — and any future versions of Instagram and Messenger that enshrine end-to-end encrypted communication — safer for the world.
To do that, we need answers to at least three questions:
1. How do users actually use WhatsApp?
We have precious little data beyond the total number of WhatsApp users per country. How many groups does the average user belong to? What content do they see and engage with most frequently, and what do they do with it? How much of it is ill-intentioned, whether intentionally deceptive or hate-filled? If Facebook is serious about safe and valuable "digital living rooms," it needs to make (meta)data sets transparent. It needs to welcome what civil society and researchers in countries such as Brazil, India, and Myanmar are learning about users' behaviors and complaints, and get serious about the right mix of technological and governance solutions.
2. Who are the organized perpetrators of disinformation and dangerous speech?
What are their incentives, methods of content creation, and dissemination patterns, and how effective are their campaigns? What have technologists, researchers, and civil society groups learned about promising or effective ways of dealing with this — if any — given that the “open social” norms of content guidelines and company policing cannot work on the “closed social” apps?
3. What’s the framework for the balance between our need for encrypted, private, non-surveilled speech, and safety and protection from disinformation and dangerous speech?
Who should be at the table when working out these critical “rules of the road” and who will police the ensuing balance to ensure that we don’t come down too hard on one side?
Put simply: we need more information about these issues, and an acknowledgement of their scale and gravity. We need transparency and cooperation from the platforms about how private and encrypted spaces are already contributing to dangerous societal trends, and commitments of resources to smarter solutions. We need sustained public pressure from users and their communities.
At Omidyar Network, we are funding work to help answer these questions, and we invite other funders, technologists, and researchers to collaborate with us. If Facebook doesn't follow through on its claim to make our "digital living rooms" safe, its platforms will become more covert, unintelligible, and dangerous versions of the chaos that its flagship platform is today. And that is not a future social internet that the world can afford.