‘False Flag’ Conspiracy Theories in the Case of Mailed Explosives and the Legitimization of the ‘QAnon’ Network
When more than a dozen explosive devices were mailed to influential political figures in late October, social media platforms like Twitter lit up with the fabricated “false flag” story that the bombs were sent by liberals to frame conservatives.
In general terms, a “false flag” tactic describes a deceptive strategy in which a group disguises itself with the intent of pinning its actions on an opposing group. In this situation, many conservative-minded users asserted that left-leaning actors had sent the mailed explosives “in order to paint conservatives as violent radicals ahead of the [midterm] elections.”
Even with the arrest of a suspect in Florida five days after the first bomb was found—providing what seemed to be a neat and decisive conclusion to the crimes—Twitter experienced no such reprieve. The tweets and retweets kept coming as more misleading speculation or wholly inaccurate information continued to be posted.
So what was the origin of this particular piece of disinformation? To trace it back, we started with its promulgation by prominent users, examining how influential political figures—from political pundits to the president himself—bolstered and legitimized the disinformation pushed out by conspiracy theorists such as the Twitter-dwelling “QAnon.”
President Trump himself contributed to the false flag discussion via Twitter on the morning of Friday Oct. 26, suggesting a link between the mailed explosives and the midterm elections. The president even went so far as to frame the word “bomb” with quotation marks to convey doubt about the explosives’ legitimacy:
In a telling link between Twitter-verified political leaders and the more anonymous, dubious corners of social media platforms, Donald Trump, Jr. liked the following Oct. 25 tweet by USA NEWS (@USANEWS007—an account whose biography reads, “Donald Trump has been the greatest President we’ve had in 100 years! Media Lies! Twitter censors us. #DrainTheSwamp #BuildTheWall #ConfirmKavanaughNow”):
Some “verified” users on Twitter were willing to put the false flag theory into their own words. Bill Mitchell (@mitchellvii), who has nearly 400,000 followers, tweeted on the morning of the first mailed explosive (to George Soros):
Ann Coulter contributed a similar message on Oct. 24:
Many such prominent users support the speculation by retweeting, sharing links or images, or using deliberately vague language, inviting readers to draw the implied accusation on their own. Accounts not tied to a particular individual or name, however, seem freer to post explicit accusations or conclusions expressed with certainty.
The “evidence” employed by these less reputable users is wide-ranging. It includes screenshots of sites linking the arrested suspect to the Democratic Party:
The “evidence” further includes images of the packages that purport to bear evidence that they were never mailed to begin with. (Note the following four tweets were posted by user “Q” (@WeAreOne_Q), one of many Twitter accounts referencing the “QAnon” network. We’ll explore the significance of “QAnon” below.)
And images of the bumper stickers on the suspect’s van were used to contend that it too was faked:
Although we cannot determine which tweet(s) in particular originated the false flag theory, we can identify the first collection of tweets containing the term itself, posted on Oct. 23:
An account called “Q” (@WeAreOne_Q) is especially noteworthy among the posts promulgating the false flag theory, as it provides a window into the vast network of “QAnon” on Twitter. This particular user posted threads referring to other recent events that have generated similarly vast amounts of social media disinformation.
Such tweets point to the role that the “QAnon” network plays as a generator of disinformation. Its followers contend that an anonymous informant—“Q”—exists and has access to “deep state” secrets.
Many tweets by users like “Q” assert connections between various hot-button issues in US politics:
The New York Times published an informative outline of this “QAnon” conspiracy movement, which seems closely tied to the anti-elite brand of pro-Trump conspiracy theorizing that is disseminated by such figures as Alex Jones (InfoWars). To get a sense of other users within the “QAnon” network, see this particularly lengthy thread hosted by WWG1WGA (@findtruthQ). For users generating content within this influential network (or following and retweeting such content), the mainstream media’s coverage of mailed explosives was just the latest body of evidence reinforcing their extreme views. Such theories, the authors of that New York Times column write, can be “wildly at odds with reality.”
A simple Twitter search for users related to “qanon” produces a massive number of accounts, including both Trump’s personal and official accounts.
The appearance of the president’s accounts among those results points to the link between reality (insofar as such a term continues to be well-defined or even useful, which is up for debate given today’s political discourse) and its counterpart on social media. As the authors of the New York Times article about QAnon write:
“The paranoid worldview has crossed over from the internet into the real world several times in recent months. On more than one occasion, people believed to be followers of QAnon have shown up — sometimes with weapons — in places that the character told them were somehow connected to anti-Trump conspiracies.”
The language employed by users within this cross-platform network is exemplified in the YouTube search above. Such speech resembles rhetoric used in populist, “anti-globalist” or “anti-elite” movements bolstered by figures like Steve Bannon. Use of terminology related to “red pills” (e.g. user “James Red Pills America” above, a reference to The Matrix) and similar popular culture terms seems to align with the theme of “awakening” from a false reality.
As the false flag story shows us, tweets aren’t posted in an online vacuum. People act and absorb certain narratives based on what they read and watch. So when prominent, “verified” users take advantage of their virtual platforms to prop up and legitimize such extreme conspiracy-theorizing, there are consequences.
It is therefore crucial that members of the public critically evaluate cases like this false flag narrative, especially given its timing relative to the midterm elections. By assessing a sample of the disinformation generated in the wake of these mailed explosives last month—and tracking the pathway of such stories to the mainstream—our Disinformation Team hopes to contribute to a more informed and media-skeptical electorate.
Be on the lookout for more investigations by the Human Rights Center’s Disinformation Team to help combat the disinformation surrounding political discourse and human rights.
The Human Rights Investigations Lab is a part of the Human Rights Center, UC Berkeley School of Law. To learn more about HRC or the Lab, click here.