Disinformation Kills. Here’s What A National Strategy to Fight Back Might Look Like
(This first appeared in my free newsletter, https://MarcAmbinder.substack.com/ about digital security and countering disinformation.)
A few days ago, thousands of Americans received text messages from “a very reliable intel” source warning of a nationwide quarantine: a mandatory two-week, military-enforced lockdown of the entire nation. The “intel” was false, but the details, and the atmosphere, convinced a lot of people that there was an underlying truth to the message, even though it was anonymous, came from an anonymous number, and was shared without warning. Indeed, we have seen several states and localities enforce quarantines and, tracking with the message, many of us have been asked to stay in our homes for two weeks.

What’s dangerous about a fake message like that is not so much that it gives people false information. What’s dangerous is that spoof messages feel true to many people because they match their level of fear and confusion.

Effective disinformation performs two functions well. It paralyzes civic action. And it traps people in a kind of cognitive box: they start relying on sources who seem to most closely mimic their own fears, rather than sources who have access to the best obtainable truth.

Disinformation can be deadly. In Iran, more than two dozen people died because they were told that downing alcohol could kill the virus. Fewer than half of Republicans see a serious threat from the pandemic; Fox News’s willful gaslighting might come at the expense of compliance with social distancing, which, I don’t say lightly, could kill people.
In the case of the misleading text messages, the National Security Council’s Twitter feed offered a denial. And that was it. By then, the horse had galloped away from the barn.
We have no national emergency strategy to combat misinformation and disinformation. We have outsourced this vital task to our communications platforms, which are now working together to triage dangerous and harmful memes, facts, quotes, threads, and video. Facebook, just today, suffered a partial outage because some of its automated (read: AI) filtering mechanisms went awry and began to censor or delete posts that didn’t violate its policies. The telecommunications companies, even though they’re an essential link in the chain, aren’t part of that effort. Until 2016, it wouldn’t have occurred to anyone that we might need a whole-of-America strategy. Since then, the National Security Agency and the Cybersecurity and Infrastructure Security Agency have been focused on defending the country against external attack.
But the FCC, FTC, industry groups — really, anyone who regulates or observes as part of their mission how information spreads within the United States — have been left to their own devices. A lot of misinformation and disinformation starts here and spreads here. It starts on TV, or maybe in a tweet, spreads organically through texts between friends or in WhatsApp groups, and emerges on Reddit, where it metastasizes; suddenly it’s being shared in closed and open Facebook groups, and by that point — to borrow the language of today — there is no containment. Where did this message come from? It’s easy to acquire a list of cell phone numbers, launder your real number through a fake one, and bombard the recipients with SMS messages. (Campaigns do this all the time, albeit legally.)
What would a whole of America national counter-disinformation strategy look like?
First, here’s what it must avoid. It should not focus on specific content, or prescribe punitive measures, or threaten regulation, or be put in a position where bureaucrats would have to make judgment calls about the relative harm of a particular speech act. It should not get stuck in what Stanley Fish has called “the tug-of-war between balance and principle.” A counter-disinformation strategy would NOT regulate lies and disinformation. It would follow the example of Taiwan: complete transparency, “tolerant of differences and dissent, democratic and free.”
The mission statement should instead focus on building a nationwide capacity to counter disinformation by targeting its spread, by providing the mechanisms to interrupt the network effects that allow it to zap from platform to platform so rapidly, and by rebuilding a shared sense of truth around a select set of issues that are deemed critical to democracy and to the smooth functioning of government. I would choose three subjects: the integrity of elections, public health, and national security emergencies. This is the only way to reach the most efficient vectors for the spread of misinformation.
For pandemics, Ron Klain, Joe Biden’s former chief of staff and Ebola czar, proposed creating a Public Health Emergency Management agency, which would marry logistics (which FEMA does well) with health expertise (which the CDC does well). But what about communication? During the Ebola crisis, the National Security Council coordinated “messaging” among government agencies. But “messaging” is a small part of a counter-disinformation strategy. When Ebola hit in 2013 and 2014, the disinformation architecture that Russia built (and which sophisticated companies, brands, and politicians now emulate) existed only in clapboard form.
Right now, the State Department’s Global Engagement Center tries to counter foreign influence campaigns. The U.S. government decided to force Chinese journalists to register as agents of the Chinese government. The U.S. Cyber Command has adopted a “name and shame” policy towards countries (or their proxies) who try to break into critical U.S. systems. All of these efforts face outwards.
But the threat comes from within. And it rests on several deficiencies in our democratic life. One is truth decay, the sense that we cannot agree on even basic facts. Another is polarization. A third is the lack of media literacy. A fourth is journalism itself; journalists amplify misinformation even with the pure motive of trying to beat it back. A fifth is the low barrier to entry: anyone can create something that looks real and push it out, where network effects take over and human cognitive biases ensure that it spreads.
A national counter-disinformation strategy would start with the premise that no one wants to be fooled, and no one wants to be responsible for spreading harmful misinformation. That would be, I suggest, the message from the top: the message from a figurehead (or the President) who is on board with the strategy.
It would treat the platforms as public utilities only insofar as they convey information about the three subjects that are critical: public health emergencies, national security emergencies, and voting integrity.
It would gather the best political and social science about countering misinformation into one place and offer grants for further research.
It would allow the private sector to train thousands of ordinary individuals on open source digital forensics tools, and deploy them, like a special forces detachment, to train election officials, campaign officials, companies, small businesses, and others on what they can do.
It would encourage the platforms to create an open-access repository of accounts and claims that fall into the category of harmful misinformation; it would encourage platforms to share, as quickly as possible, evidence of coordinated inauthentic activity.
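To make that repository idea concrete, here is a minimal, purely hypothetical sketch of what one shared record might look like. The field names, the category labels, and the hashing scheme are my own inventions for illustration, not any platform’s actual format; the point is that a stable fingerprint would let platforms match the same claim without re-sharing raw user data.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ClaimRecord:
    """Hypothetical entry in a cross-platform repository of flagged claims."""
    claim_text: str           # the claim as it circulated
    first_seen_platform: str  # where it was first observed (e.g., "sms", "twitter")
    category: str             # one of the three critical subjects
    coordinated: bool         # flagged as coordinated inauthentic activity?

    def fingerprint(self) -> str:
        """Stable hash of the normalized claim text, usable as a shared ID."""
        normalized = " ".join(self.claim_text.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

record = ClaimRecord(
    claim_text="A very reliable intel source warns of a two-week military lockdown.",
    first_seen_platform="sms",
    category="national-security",
    coordinated=False,
)
print(record.fingerprint()[:12])            # short ID for cross-platform matching
print(json.dumps(asdict(record), indent=2)) # what platforms would exchange
```

Because the fingerprint is computed after lowercasing and collapsing whitespace, lightly reworded copies of the same text hash identically, which is what would let two platforms recognize they are seeing the same claim.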
It would fund and encourage start-ups that work on flagging and tracking disinformation campaigns. It would provide significant tax incentives for those who start local news companies in news deserts.
It would provide money to state and local officials to boost their communications budget in order to develop in-house resources to fight against malicious information locally.
It would develop and implement national crisis communication plans and serve as the focal point and coordination center for informatics during future disasters.
And that’s just a start. The strategy would also grapple with hard problems. For example: how do you — can you — stop harmful disinformation in private WhatsApp groups before it proliferates?
Disinformation kills human beings. We can flatten this curve by interrupting the spread, but we can’t do that without cooperation — the type of cooperation that is being put together, ad hoc, to mitigate the effects of the coronavirus pandemic.