How One Doctor’s False Claim Was Used To Erase Atrocities In Syria

On social media, lies spread faster—and further—than the truth

Caroline Orr, Ph.D
Arc Digital
20 min read · May 28, 2018

--

“A lie can travel halfway around the world while the truth is putting on its shoes.”

In the aftermath of last month’s chemical attack in Syria, a Twitter user claiming to be a cardiologist, @Thomas_Binder, posted a tweet accusing medical workers of faking a photo that showed victims of the attack receiving life-saving care. Binder later admitted that the information in his tweet was wrong, but by the time he did so, the false claim had already been retweeted over ten thousand times and used to propagate a smear campaign against the volunteer rescue group known as the White Helmets.

The tweet was never taken down and has since made its way onto other websites and social media platforms, where it is being used as “proof” that the chemical attack was a hoax or a “false flag.” Meanwhile, the correction, which was posted two days later, has barely been noticed, garnering just over 40 retweets since it went up on April 15.

The virality of Binder’s tweet provides important insight into the human factors involved in the diffusion of misinformation (incorrect information spread without any implied intent to deceive, unlike “disinformation,” which does imply intentional deception). It shows how cognitive biases, ideological motives, social and cultural norms, and characteristics of the misinformation itself interact to fuel a vicious feedback loop. With so many headlines focused on automated accounts (“bots”), online advertisements, and algorithm manipulation, it’s easy to overlook the fact that the problem we are dealing with is, at its core, a human problem.

While digital fixes and platform-specific solutions are important, if we really want to understand our so-called “fake news” problem, it’s necessary to also understand how human tendencies contribute to and perpetuate the cycle.

The Cycle of Misinformation

To fully grasp the reach of misinformation, multiple channels and mechanisms of communication need to be considered and analyzed. While retweets provide an initial indicator, there are other channels through which misinformation travels on Twitter and spills over onto other platforms.

In this specific example, Binder’s initial tweet was spread through retweets and amplified by quote-tweets, screengrabs of the original tweet, and tweets that translated Binder’s claim into other languages. Many of the tweets that spread through these secondary channels garnered hundreds or thousands of unique retweets.

The tweet was further amplified when it spread to other platforms, including Facebook, Reddit, several “news” websites, and numerous personal forums, message boards, and blogs.

The misinformation in Binder’s tweet spread through multiple channels (R) and across multiple platforms (L), ultimately reaching far beyond the initial 12,569 retweets.

To get a better idea of why this particular false claim went viral, how and where it spread, and what role it played in the ongoing disinformation campaign surrounding the chemical attack in Syria, I explored the lifecycle of Binder’s tweet, starting with the origin and moving on to the channels and platforms through which it spread. I also tracked the spread of the “correction” and other attempts to fact-check the original tweet. Finally, I looked at the Twitter accounts that actively spread and engaged with Binder’s tweet, focusing on how users evaluated the misinformation, assigned legitimacy to it, and used it to bolster existing narratives.

Part 1: Origins

In a tweet posted on Friday, April 13, @Thomas_Binder commented on a photo of children being treated in the aftermath of the April 7 chemical attack in Douma, writing: “As a cardiologist I can say that these ECG electrodes are completely wrong positioned. They would not get any signal. This picture is faked!”

Binder’s tweet is still being shared on Twitter, days after he admitted that the information he posted was incorrect. (Archived tweet).

Without missing a beat, Binder jumped to the conclusion that the incident was staged by the White Helmets — a narrative that had already gained traction thanks to an ongoing and aggressive disinformation campaign targeting the humanitarian group (also known as the Syria Civil Defense).

Binder continued to amplify his tweet, using his false claim to perpetuate misinformation about the White Helmets. (Archived link).

Two days later, Binder returned to the thread to admit that he was wrong, saying he “saw only the electrodes in the center” and acknowledging that “they are placed quite correctly.” Still, he continued pushing his initial false narrative smearing the White Helmets, even as he conceded that his claims were false and didn’t support his conclusion.

Two days after his initial post, Binder returned to the thread and admitted that he was wrong—but by that time, his false accusations had already gone viral. (Archived tweet).

By the time Binder posted his “apology,” the original tweet had already gone viral. By April 15 — the day Binder admitted the information was false — his initial post had been retweeted 7,898 times (archived link). Over the next four days, it was retweeted nearly 5,000 more times, reaching 12,569 retweets as of the morning of April 19 (archived link).

Meanwhile, Binder’s follow-up tweet acknowledging that he was wrong went largely unnoticed, garnering only 38 retweets in the first three days after it was posted, and 43 retweets by the fourth day.

Part 2: Channels of Misinformation

Binder’s original tweet spread quickly across Twitter, not only through retweets, but also through quote-tweets, screenshots, and translations into several different languages. I did not attempt to do a full inventory of every method through which the tweet was amplified, but in the sample I viewed, the original tweet was shared thousands of additional times (beyond the 12,569 retweets).

Many users added their own commentary when sharing the tweet. Sometimes, the commentary was meant to draw attention to and/or corroborate the (false) claims. In other instances, the commentary expanded upon Binder’s false conclusion, with some users accusing the children in the photo of being “crisis actors” and other users taking the opportunity to falsely smear the White Helmets.

Importantly, research shows that seeing the same or similar message from different sources boosts its credibility and makes people more likely to believe the message, especially if the sources are familiar to the recipient. People tend to assume that information from multiple sources is grounded in different perspectives and is thus worth reading and taking seriously. Credibility is also influenced by norms and other social processes, such that people are more likely to perceive a source as credible if they see that others (especially others within the same social network) perceive the source as credible.

In this case, other users boosted the credibility of the falsehood by picking up Binder’s claim and spreading it independently, thus making it more likely that others would believe the claim and pass it along themselves.

Many users quoted Binder’s original tweet and added commentary of their own, such as this user, who accused the children in the photo of being “crisis actors depict[ing] victims of chemical attack.” (Archived tweet).
This user posted a screengrab from another website and built upon Binder’s false claims, alleging (with no evidence) that the photo was staged by Al Qaeda and accusing the White Helmets of being “criminals.” He also added a “2nd opinion” from an anonymous “Dr. Daniel,” apparently in an attempt to lend credibility to Binder’s false claim. (Archived tweet).
This user posted an image from a textbook in an apparent attempt to boost the credibility of Binder’s false claim. Importantly, the textbook image appears to depict the chest of an adult, not a child. (Archived tweet).
Most users who shared Binder’s tweet emphasized the fact that the (mis)information was coming from a cardiologist, in what appears to be a classic example of an appeal to authority.

The tweet was further amplified when other users translated it into different languages, garnering thousands of additional retweets and likes.

A translation of the original tweet garnered over 1,200 retweets and more than 1,100 likes as of April 17. (Archived tweet.)

Another user partially translated the tweet and appeared to add his own “corroboration” of the false claim, in a post that was retweeted almost 2,300 times.

The original tweet was also translated into Japanese, garnering thousands of additional retweets and likes. (Archived tweet).

The tweet was also translated into Turkish and French, further expanding its reach into entirely new audiences.

A Turkish translation of the original tweet. (Archived tweet).
And a French translation. (Archived tweet).

As seen in the tweets below, the original (false) claim was not only used to deny that a chemical attack had taken place, but also to denigrate the White Helmets and even to suggest that the children in the picture were being abused. Many users also boosted the credibility of the tweet—and its erroneous conclusions—by emphasizing that the claim came from a cardiologist (it’s not clear if Binder is really a cardiologist, but people readily took his word for it). Research shows that source characteristics influence information processing, such that the appearance of expertise makes people more likely to accept information without much thought or critical appraisal.

In this quote-tweet, the user draws attention to Binder’s false claims, urging others not to miss this “extremely important revelation from a cardiologist.” He also seems to suggest that something nefarious was taking place, though he posed the accusation in the form of a question. (Archived tweet).
Binder’s initial tweet gained additional traction as others picked up his false claim and used it to craft a narrative assigning nefarious intent to the White Helmets. Common themes included accusations that members of the White Helmets are terrorists and/or that the children in the photo were being abused by the White Helmets.

Twitter wasn’t the only channel through which Binder’s false claims spread. Within just a few days, the tweet had already made it onto several different personal blogs, forums, and message boards, as well as numerous websites and social media platforms including Reddit and Facebook.

Cross-posting and reposting misinformation in this way allows false claims to persist, even if the original post is taken down. While much of the ongoing discussion surrounding “fake news” is centered on actions taken (or not taken) by social media companies like Twitter and Facebook, the spread of misinformation to secondary platforms—and the public’s indirect exposure to such content—shows that this problem cannot be attributed to a specific platform, nor can it be adequately addressed with platform-specific solutions.

The spread of misinformation from Twitter to secondary platforms including Reddit (middle row, left column), Facebook (bottom row, middle column), and a variety of websites, message boards, and personal blogs highlights the challenge of combatting so-called “fake news.”

Part 3: The Virality of Misinformation

The veracity of Binder’s claim was an afterthought for those looking to support the narrative that the chemical attack was an elaborate hoax, and/or to undermine the rescue workers who witnessed the attack and provided care to the victims. His tweet told those people what they wanted to hear, so it was accepted as true almost immediately.

As noted above, Binder’s “correction” got very little attention, with only 43 retweets after four days. The correction’s retweets amounted to just 0.3 percent of the retweets garnered by the initial false claim—and that’s not accounting for the thousands of times the misinformation was spread via screenshots, quote-tweets, and translated tweets.
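For readers who want to check that figure, the arithmetic follows directly from the retweet counts reported above; the short Python snippet below is purely illustrative.

```python
# Back-of-the-envelope check of the ~0.3 percent figure, using the counts cited above.
false_claim_retweets = 12_569  # retweets of Binder's original tweet as of April 19
correction_retweets = 43       # retweets of his "correction" after four days

print(f"{correction_retweets / false_claim_retweets:.1%}")  # prints 0.3%
```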

When Binder did issue a “correction,” he used the opportunity to take another swing at the White Helmets instead of stating clearly that the information he posted was wrong. His muddled “correction” was so difficult to interpret that some people couldn’t even tell whether or not he was admitting an error.

Clearly, Binder’s muddled non-apology didn’t register with some Twitter users.

Importantly, some users did try to correct the record on their own. These fact-checks successfully pressured Binder to issue his eventual “correction.” However, the impact of the correction was limited. Even the most widely-shared fact-checking tweets only reached a tiny fraction of the audience that the falsehood reached, and the correction did not appear to slow the spread of the initial false tweet.

Furthermore, while the falsehood was amplified by users who translated it into multiple languages, there were no similar attempts to translate the correction or fact-checks into different languages. Therefore, most users who were exposed to a translated version of the initial (false) tweet likely never saw a correction.

Fact-checking from other users was effective in pressuring Binder to issue a correction, but it had minimal impact on other users and did not stop the spread of misinformation.

Even when Binder was confronted about his egregious error, he refused to take responsibility for spreading such inflammatory misinformation. Instead, he doubled down on his false conclusion and presented other (dubious) “evidence” to support the narrative he wanted to tell. He also refused to delete his initial tweet — a decision he justified by claiming that even if his evidence was wrong, the conclusion was still correct.

Even after acknowledging that he was wrong, Binder continued to defend his original, false conclusion when challenged by other users.
When pressed, Binder still refused to delete his initial incorrect tweet.

These examples reflect a well-documented phenomenon whereby misinformation spreads faster and farther than the truth, in large part because people choose to share information that confirms pre-existing beliefs and tune out information that challenges those beliefs. This ends up having a domino effect, as people within a social network are more likely to be exposed to the same false information and—just as importantly—not exposed to contradictory information. This is because the same motivational factors that drive people to believe and share misinformation also influence their response to follow-up information correcting the initial falsehood.

The virality of false news was highlighted in a recent study by researchers at MIT, who found that falsehoods “diffused significantly farther, faster, deeper, and more broadly than the truth” across all categories of news stories.

The study, which examined 126,000 stories shared some 4.5 million times by 3 million Twitter users from 2006 to 2017, found that false news was 70 percent more likely to be retweeted than the truth. Whereas the truth rarely reached more than 1,000 Twitter users, the most pernicious false news stories routinely reached well over 10,000 people. Furthermore, tweets containing falsehoods reached 1,500 people on Twitter six times faster than truthful tweets.

While the problem was ubiquitous, it was found to be most severe for political news. False political stories were “more viral than any other category of false information,” reaching 20,000 Twitter users three times faster than other types of false news reached 10,000 people.

These findings are in line with an expansive body of research showing that misinformation has a lingering effect even after it has been refuted, a concept described as the continued influence effect. People believe misinformation for a variety of reasons — and it’s not usually because they are just lacking knowledge or correct information. As such, removing the influence of misinformation in people’s minds is not as simple as just offering additional information, and stopping the spread of falsehoods on social media is not as simple as posting a correction.

In some cases, attempts to correct misinformation can actually solidify the falsehood in people’s minds even further, especially when the new (correct) information runs counter to a person’s worldview. Whereas confirmation bias leads people to seek out and more readily believe information that is consistent with pre-existing beliefs, worldviews, and narratives, disconfirmation bias motivates people to spend more time and thought arguing against opposing information. That process of bringing supposedly supporting evidence to mind while ignoring any contrary evidence often ends up strengthening a person’s belief in the original, false information—a phenomenon that can be seen in Binder’s own follow-up tweets referencing other (mis)information to support his conclusions about the chemical attack being a false flag and the White Helmets being a group of nefarious actors.

Studies indicate that publishing a retraction can reduce the number of references to a piece of misinformation, but typically does not come close to stopping the spread of misinformation or eliminating its influence. These findings may be even more applicable to cases like this, given that the falsehood was posted on numerous platforms but the correction was only posted on Twitter. As such, people who were exposed to the falsehood on a website or forum likely remained in the dark about the fact that the information was false.

There is also an important social element involved in the uptake and spread of misinformation. Most of those who retweeted the initial post left it on their Twitter feeds long after Binder issued his “correction,” thereby spreading it to more people and continuing the cycle of misinformation. Several prominent supporters of Syrian President Bashar al-Assad left the falsehood on their Twitter feeds, which not only exposed additional people to the misinformation in Binder’s tweet, but also gave it credibility due to their public profiles.

For example, Vanessa Beeley, one of the most vocal pro-Assad voices on social media, still had the retweet on her Twitter feed at least a week after Binder admitted the information in the tweet was false. By that time, Beeley had also retweeted Binder’s “correction,” but didn’t undo the retweet of a post that she knew was incorrect.

Binder’s tweet was still on the Twitter feed of Vanessa Beeley, even after she retweeted his sort-of correction, which greatly increased exposure to the falsehood due to Beeley’s central role in pro-Assad Twitter networks. (Archived link)

Beeley, along with a few other prominent pro-Assad Twitter users, appear to have had an outsized influence on the reach of Binder’s tweet. Looking at Binder’s other posts, most were shared by just a handful of people and failed to gain any traction. But by tagging Beeley and fellow pro-Assad propagandist Eva Bartlett in his initial tweet, Binder managed to catch their attention with a message that propagated a narrative they already wanted to tell, prompting them to pass along the misinformation to their followers. Though Beeley describes herself as a journalist, she apparently did not attempt to verify Binder’s tweet before sharing it with her network of nearly 33,000 people. (The Guardian describes Beeley as a “blogger … who visited Syria for the first time in July 2016” and Bartlett as “a Canadian writer and activist who said the White Helmets staged rescues using recycled victims — a claim that’s been debunked by Snopes and Channel 4 News.”)

The network visualization below shows the diffusion of misinformation (“fake news” articles) about the White Helmets on Twitter. The graph is based on 10 articles about the White Helmets, published in April/May 2018, that were flagged as false and whose diffusion was mapped using Hoaxy, a tool that tracks the spread of claims on Twitter. As you can see, Beeley is by far the largest node on the graph, indicating her status as a hub for misinformation about the rescue group—even exceeding the influence of propaganda outlets masquerading as news websites, such as 21st Century Wire.

Hoaxy network graph showing the diffusion of 10 false news articles about the White Helmets in April and May 2018.

An enlarged image of the same graph shows Thomas Binder’s position in the network. The size of his node indicates that he is much less influential as a source of misinformation about the White Helmets, likely because he has a much smaller Twitter audience and has not established himself as a source for pro-regime propaganda. However, because his tweet was amplified by Beeley, it reached far beyond his immediate (small) network and made its way to an international audience.

An enlarged subsection of the Hoaxy network graph showing the diffusion of 10 false news articles about the White Helmets in April and May 2018.

Notably, the website Veterans Today appears in a relatively prominent position on the network graph. As I have written about previously, Veterans Today is one of many proxy websites used by the Kremlin to deliver propaganda to American audiences under the guise of a “news website.”

An enlarged subsection of the Hoaxy network graph showing the diffusion of 10 false news articles about the White Helmets in April and May 2018.

Importantly, the network graph shows that most of the accounts spreading misinformation about the White Helmets do not resemble bots. While bot-like behavior is certainly apparent (as depicted by the pink and red nodes in the network graph), it’s clear that humans are driving the spread of false news articles, with several key accounts functioning as misinformation hubs. This is consistent with the results of the study referenced above, which found that bots spread false news at the same rate as true news, suggesting that “false news spreads more than the truth because humans, not robots, are more likely to spread it.”
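To make the idea of node size and hub accounts more concrete, here is a minimal sketch in Python using the networkx library. The edge data is entirely hypothetical (it is not the Hoaxy dataset, nor part of the analysis above); it simply shows how retweet relationships can be assembled into a directed diffusion graph and how each account’s direct reach can be summarized.

```python
import networkx as nx

# Each edge (A, B) means account B retweeted or quoted account A.
# These edges are hypothetical, illustrative data only.
edges = [
    ("thomas_binder", "vanessa_beeley"),
    ("thomas_binder", "user_1"),
    ("vanessa_beeley", "user_2"),
    ("vanessa_beeley", "user_3"),
    ("vanessa_beeley", "user_4"),
    ("21stcenturywire", "user_5"),
]

G = nx.DiGraph(edges)

# Out-degree approximates how many accounts each account directly reached;
# on a graph sized by this measure, hub accounts show up as the largest nodes.
for account, reach in sorted(dict(G.out_degree()).items(), key=lambda kv: -kv[1]):
    print(f"{account}: retweeted or quoted by {reach} accounts")
```

Drawn this way, an account with many outgoing edges (a hub like Beeley) dominates the graph, while an account like Binder’s stays small unless a hub amplifies it.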

Part 4: White Helmets and Grey Propaganda

Binder’s tweet came just after a series of photos emerged, purporting to show the White Helmets in the act of staging a chemical attack in Syria. The photos were actually taken from a film set, as investigators at Bellingcat clearly identified.

Even after the origin of the photos was revealed by Bellingcat, pro-Assad social media users and Russian media sites (including Russian state media channel Russia 1) continued to use them as “proof” that the April 7 chemical attack in the opposition-held town of Douma was fake.

The White Helmets are regularly targeted by these types of disinformation campaigns, most of which originate from Russian sources and/or regime-backed operations in Syria. Because they work on the ground in besieged areas of Syria, the White Helmets are in a position to expose the atrocities that the Assad regime tries so hard to conceal. Footage captured by the group’s helmet cameras has been used by organizations like Amnesty International to corroborate firsthand accounts of war crimes, including the targeting of civilians and medical facilities by the Syrian regime. To undermine this evidence, regime backers rely on smear campaigns aimed at discrediting the people delivering it. As the Guardian describes:

The campaign to discredit the White Helmets started at the same time as Russia staged a military intervention in Syria in September 2015, supporting President Bashar al-Assad’s army with airstrikes bombarding opposition-held areas. Almost immediately, Russian state media such as RT and Sputnik started falsely claiming that ISIS was the only target and throwing doubt on the bombings of infrastructure and civilian sites.

The same propaganda machine scooped up fringe anti-American activists, bloggers and researchers who believe the White Helmets are terrorists, giving them a platform on state TV and amplifying their articles through social media.

Conspiracy theories and false claims about the White Helmets have become so common and spread so widely that they are often uncritically accepted as fact by American and European audiences. As I’ve written about previously, pro-Trump social media users and media outlets like Infowars and Fox News have played a major role in bringing Russian- and Syrian-backed conspiracy theories to the U.S., whether wittingly or unwittingly. Since the information is being delivered by familiar sources, American audiences are more receptive to the claims and more likely to believe them without critically evaluating the information.

The disinformation targeting the White Helmets is particularly insidious because it often falls into the category of grey propaganda — a type of propaganda that conceals its origins and fails to disclose the identity of the person or entity that sponsored it. Grey propaganda is often state-funded and then disseminated by a network of collaborators presented as independent journalists, academics, news outlets, activists, or just concerned citizens. By obscuring any connection to Russian or Syrian government operations, these sources are framed as objective arbiters of truth, which in turn boosts their credibility in the eyes of the public.

Certainly, not all of this disinformation and misinformation comes from government-funded sources. Many journalists on the ground in Syria are accompanied by regime handlers, who may limit their access to certain people and locations, thereby restricting their ability to report on anything the government doesn’t want them to see. Access journalism plays a role, as well. There is an incentive not to publish reports that are too critical of the Syrian regime, as most journalists who have done so find it hard to gain access for future reporting. Likewise, there is also an incentive to publish reports that portray the opposition in a negative light — for example, by accusing the White Helmets of being terrorists or kidnappers.

As Scott Lucas, professor of international politics at the University of Birmingham, told the Guardian, some of those involved in this disinformation campaign don’t know they’re being used as pawns.

“The most effective propaganda is when you find someone who believes it then give them support — you don’t create them from scratch,” Lucas explained.

And as this analysis shows, even a single tweet from a random source with a small Twitter audience can be put to use to further the narrative of this aggressive, international disinformation operation.

Conclusions

This incident provides important insight into the spread and uptake of misinformation, with practical implications for understanding the human factors involved in this cycle.

There are several characteristics of the initial false claim, as well as of those who spread it and the channels through which it was disseminated, that contributed to its virality. These characteristics, which are outlined below, mirror the hallmark features of effective Russian propaganda, as described by RAND’s “Firehose of Falsehood” propaganda model.

Key factors driving the spread of misinformation:

Variety of sources: The falsehood was spread independently by a variety of social media users as well as by propaganda outlets masquerading as news sites. Research shows that multiple sources are more persuasive than a single source, and receiving the same or similar message from multiple sources increases the likelihood that it will be accepted as true.

Number and volume of sources: As a function of the variety of sources, the misinformation in Binder’s tweet was also spread by a large number of sources. As RAND explains:

The experimental psychology literature suggests that, all other things being equal, messages received in greater volume and from more sources will be more persuasive. Quantity does indeed have a quality all its own. High volume can deliver other benefits that are relevant in the Russian propaganda context. First, high volume can consume the attention and other available bandwidth of potential audiences, drowning out competing messages. Second, high volume can overwhelm competing messages in a flood of disagreement. Third, multiple channels increase the chances that target audiences are exposed to the message. Fourth, receiving a message via multiple modes and from multiple sources increases the message’s perceived credibility, especially if a disseminating source is one with which an audience member identifies.

Repetition and Familiarity: The claim mirrored other propaganda targeting the White Helmets, including a recent smear campaign alleging photographic evidence that the White Helmets staged the chemical attack. Messages that align with an existing narrative are more appealing and readily accepted, even when they are false. According to RAND:

  • Repeated exposure to a statement has been shown to increase its acceptance as true.
  • The “illusory truth effect” is well documented, whereby people rate statements as more truthful, valid, and believable when they have encountered those statements previously than when they are new statements.
  • If an individual is already familiar with an argument or claim (has seen it before, for example), they process it less carefully, often failing to discriminate weak arguments from strong arguments.

Perceived source credibility: By emphasizing that the claim came from a cardiologist, the source of the misinformation was assigned undue credibility. When secondary sources like Vanessa Beeley retweeted it, the claim was assigned additional credibility due to the social aspects of information processing. People tend to accept information as true (without critically evaluating it) when it comes from sources who are perceived as experts or authoritative voices, or who are perceived as similar (socially, politically, and culturally).

Peripheral cues: People also assign credibility to messages based on peripheral cues, such as the appearance of expertise or the format in which the information is presented. Importantly, Binder’s tweet was cross-posted on websites that had the appearance of news websites, which likely boosted perceptions of credibility. This is a core feature of the Russian propaganda machine, which is notorious for using proxy websites and outlets like RT and Sputnik that look like newscasts but function as state-sponsored propaganda. As RAND describes, “A broadcast that looks like a news broadcast, even if it is actually a propaganda broadcast, may be accorded the same degree of credibility as an actual news broadcast.”

Emotional appeal: As evidenced by the responses to his tweet, Binder’s claim evoked an emotional response from those who viewed it. According to RAND, “Stories or accounts that create emotional arousal in the recipient (e.g., disgust, fear, happiness) are much more likely to be passed on, whether they are true or not.”

The mob effect: The cycle of misinformation is also perpetuated by aggressive efforts to delegitimize sources of contradictory factual evidence. For those looking to push a false narrative, bearers of truth are seen as opposition voices that must be discredited. In some cases, this involves swarming reporters, researchers, and activists on social media, spreading libelous claims to undermine their credibility, and even threatening them with violence.

Because of the high volume of misinformation involved in disinformation campaigns like the one targeting the White Helmets, it’s too time-consuming to chase down every false claim (and doing so may not be effective even if it could be done). Instead of trying to refute individual claims, one of the best ways to counter disinformation is to get ahead of it by exposing the propaganda for what it is.

As RAND recommends:

Don’t direct your flow of information directly back at the firehose of falsehood; instead, point your stream at whatever the firehose is aimed at, and try to push that audience in more productive directions.

Being aware of the human tendencies that motivate people to share misinformation can also help those seeking to spread accurate messages, as the same basic techniques that fuel the cycle of misinformation — a high volume and variety of sources, as well as use of multiple channels and platforms — can also be employed to increase the flow of accurate information, without even engaging with the falsehoods.

In other words, while we may not be able to stop the firehose, we can hand out raincoats and life preservers — and most importantly, we can teach people how to swim.
