The Disinformation War: Tactics, Targets, and Countermeasures

Dag · Published in tomipioneers · 12 min read · Oct 18, 2023

We’ve been having some interesting discussions in our Discord recently about the complexities of internet censorship and moderation — a topic that is central to our mission as we attempt to build an alternative internet. In shaping the tomi ecosystem, we find ourselves grappling with questions around the boundaries of expression, community safety, and, ultimately, the integrity of information. While our whitepaper has outlined preliminary guidelines on “banning gratuitous violence, illicit pornography, and other extreme behaviors,” these are still evolving concepts that the community will need to further define collaboratively.

However, today I’d like to shift our focus slightly, but significantly, toward a phenomenon that stands in direct contradiction to everything we envision for unbiased freedom on the tomiNET: disinformation, the deliberate spreading of false information with the intent to deceive. As we wrestle with questions about what should and should not be moderated on our platforms, it’s crucial to also consider the pervasive and insidious threat of disinformation. During our very spontaneous discussion, some members argued for personal responsibility in discerning truth from falsehood, while others advocated for more intent-based moderation. Yet we cannot ignore that disinformation campaigns often work by exploiting these very nuances.

At a time when the mainstream internet is filled with falsehoods that have real-world consequences, an alternative internet like the one tomi is building must wrestle with these challenges head-on. To deepen our understanding of the issues we will inevitably face, this article is the first in a series exploring the impact of disinformation and misinformation. I will begin with Russia, which has turned disinformation into a powerful machine that manipulates global politics and even targets specific individuals…

So let’s journey together into the labyrinthine world of disinformation — its history, its explosion in the digital age, its manifestations in real-world events, and what it means for the future of our community.

The Historical Anatomy of Disinformation

Long before hashtags and trending topics dominated public discourse, state actors had already perfected the dark art of disinformation into a finely tuned instrument of statecraft. The use of disinformation by governments is not a modern invention; it has a long and complex history, stretching from the Byzantine Empire to Cold War cat-and-mouse games. It’s a saga of cunning brilliance that offers lessons for any era. However, it was during the Cold War that disinformation campaigns reached unprecedented levels of sophistication. Both the United States and the Soviet Union engaged in covert influence operations; on the Soviet side, these were known as “active measures,” a portfolio of covert activities aimed at influencing events and public opinion internationally. The strange thing is that we still see traces of some of them today.

The KGB and Operation INFEKTION

Among the most infamous disinformation campaigns was Operation INFEKTION, a KGB propaganda initiative launched in the early 1980s. Its mission? To spread the falsehood that the AIDS virus was a biological weapon created by the U.S. government to target African American communities. The KGB planted this “information” through a well-placed article in an Indian newspaper, which was subsequently picked up by news agencies around the world. This was disinformation at its most potent: a lie that played into existing fears and prejudices, while also creating new ones.

Despite being debunked by reputable sources, the Operation INFEKTION narrative has displayed remarkable resilience, embedding itself not only in marginalized communities but also in mainstream culture. This durability can be attributed not only to the effectiveness of the disinformation itself but also to the psychological predispositions that make us prone to believing such claims — our inherent biases, fears, and a tendency to confirm what we already believe.

One stark example is Kanye West, who used his influential status to propagate this false narrative during the 2005 Live 8 concerts, an event intended to raise global awareness about poverty and AIDS. In front of an estimated worldwide audience of 3 billion people, West shocked attendees by proclaiming that AIDS was a “man-made disease” placed in Africa in the same way that crack was allegedly distributed in black communities to dismantle the Black Panthers. He even reiterated this belief in the lyrics of his hit song “Heard ’Em Say,” stating, “And I know the government administered AIDS/ So I guess we just pray like the minister say.”

The insidious power of Operation INFEKTION lies not just in its original design but in its ability to find spokespeople like West who amplify its reach, implanting the lie into the cultural psyche. It serves as a chilling reminder of how a meticulously orchestrated disinformation campaign can not only disrupt geopolitical stability but also embed itself into the cultural fabric of societies for generations.

Active Measures: Disinformation as Statecraft

But why do states indulge in disinformation? The objectives are multifaceted. At the surface level, it creates a climate of mistrust and confusion, eroding public faith in institutions and each other. In the context of the Cold War, this was a zero-sum game; eroding your opponent’s credibility effectively increased your own. It was not merely the dissemination of lies but the orchestration of a parallel reality where truth was relative, and facts were negotiable.

These tactics were not confined to foreign adversaries. Often, disinformation campaigns were directed inward, aiming to stifle dissent within one’s own nation, to control the narrative, and to maintain a facade of unity and strength. It was a totalizing strategy aimed at the hearts and minds of people both globally and domestically, using all available avenues — media, academia, culture — to reach its objectives.

The Tools of the Trade

During the Cold War, the tools available were relatively primitive by today’s standards: newspapers, radio broadcasts, and covert agents. Yet, these were utilized with a level of expertise that made them highly effective. Agents would plant stories in small, non-mainstream publications, knowing that larger media organizations would pick them up in the absence of sensational news. This “laundering” of disinformation gave the planted stories a semblance of credibility by the time they reached mainstream audiences.

Forgery was another tool in the disinformation arsenal. Forged documents were carefully produced to appear genuine and were then “discovered” and leaked to the media. Such fabrications would often involve allegations of corruption, human rights abuses, or secret conspiracies, further provoking mistrust and division.

As we dig deeper down this rabbit hole, it’s crucial to understand that disinformation is not a relic of the past. Its methodologies have evolved, adapting to the digital age in a form far more intricate and pervasive than anything imaginable during the Cold War era. If Operation INFEKTION was a masterstroke of Cold War deception, then the activities of the Russian Troll Factory are its modern-day equivalent — a digital manifestation of disinformation that has been fine-tuned for an era of social media and global interconnectedness.

The Russian Troll Factory: Masters of Digital Deception

Imagine a covert operation scaled to industrial proportions, run with the efficiency of an assembly line but designed for mass manipulation. That’s the Internet Research Agency, more colloquially known as the Russian Troll Factory. Situated in an innocuous building in St. Petersburg, this den of digital deceit is a 21st-century update on state-sponsored disinformation. Unlike their Cold War predecessors, these digital operatives need not infiltrate foreign lands; they can wage psychological warfare from the comfort of an office chair.

Armed with a range of fake personas, from internet trolls to faux social media profiles, these agents are deployed across multiple platforms. Their mission? To amplify societal divisions, propagate falsehoods, and subvert democratic processes. While their disinformation campaigns have a global reach, one target stands out for its high-profile nature — the Finnish investigative journalist Jessikka Aro.

A Modern Saga of Digital Intimidation

Jessikka Aro’s scrutiny of the Internet Research Agency was met with an unyielding and insidious campaign to discredit her. As she started piecing together the scope of Russia’s digital disinformation architecture, she found herself under constant attack. But the harm wasn’t restricted to just spreading fake news; it was far more sinister, involving deep psychological warfare targeting her.

The intimidation campaign against Aro took a particularly malicious turn, with perpetrators going to great lengths to damage her character. One instance involved a disturbingly high-quality music video, shared widely across pro-Russian Facebook groups. The video featured an actress impersonating Aro, mocking her as a “stupid blonde” and framing her as an American spy “hunting for trolls”. While the video may seem absurd and silly to the casual observer, for Aro it was a clear warning sign. It culminated in a particularly unsettling scene, present in the original upload: a doll crafted to resemble her was deliberately hit by a car. This macabre finale not only accentuated the venomous intent behind the campaign but also served as a clear threat to her physical safety.

Moreover, articles published about her painted a distorted picture, labeling her as a NATO spy intent on illegally identifying Finnish Putin supporters. These stories also exploited outdated court records from her youth, framing her as a drug addict based on a past fine for drug use. This misrepresentation sought to destroy her professional credibility and taint her personal reputation. Disturbingly, it worked to some extent. Even some of Aro’s friends, who should have been pillars of support, turned against her after buying into the avalanche of fabrications. She even had to report some of them to the police for making threats against her, a poignant example of how corrosive and influential disinformation can be.

Weaponizing Personal Information

The lengths to which the trolls went to discredit Aro were extensive and deeply personal. One method involved “doxxing” — publishing her personal information online. This led to a barrage of threatening phone calls and hate messages, including explicit threats to her life, with phrases such as “you fucking crack whore, I want to kill you” hurled at her. Aro has described the experience as one of the most torturous things ever done to her.

As Aro has shown, the Russian disinformation campaign is not merely about pushing false narratives; it’s a deeply personal, vengeful, and ongoing war against those who dare to expose them. The distress and isolation that Aro experienced demonstrate the powerful and corrosive impact of this digital warfare.

The 2016 U.S. Presidential Elections: A Crucible for Cyber-Enabled Influence Operations

If the targeted campaign against Jessikka Aro was a masterclass in personalized digital intimidation, the 2016 U.S. Presidential elections represented an epoch-making leap in the realm of political interference. With a stage as vast as the United States and stakes as high as the highest office in the land, the Internet Research Agency pulled out “the big guns”.

The Agency’s modus operandi wasn’t simply a cut-and-paste of previous disinformation efforts; it was a sophisticated, multi-pronged strategy aimed at sowing discord and polarizing the American electorate. Utilizing an arsenal of falsified accounts across social media platforms like Facebook, Twitter, and Instagram, these agents disseminated a deluge of misleading memes, viral videos, and hyper-partisan articles.

Exploiting the Divides

One of the most insidious aspects of the campaign was its knack for exploiting pre-existing social and political fissures in American society. Issues like gun control, racial tension, and immigration were not just talking points; they were kindling for a firestorm of disinformation. The objective was not merely to promote one candidate over another but to provoke outrage, incite conflict, and create an atmosphere rife with mistrust and suspicion.

The scale and precision of the operation became fully apparent after the election, through a series of investigations led by both independent organizations and federal agencies. What they unearthed was nothing short of stunning: calculated attempts to manipulate voter sentiment, the hijacking of local news feeds, and even efforts to mobilize “real-world” protests and counter-protests, deepening divisions within the American populace.

Houston, We Have a Problem

In May 2016, two opposing rallies unfolded outside the Islamic Center in Houston, Texas. One was an anti-Islamization protest, organized via a Facebook group called “Heart of Texas” with a folksy tagline about guns and barbecue. The other was a counter-protest, organized by a group called “United Muslims of America,” celebrating Islamic knowledge. Unbeknownst to the participants, both Facebook groups were operated by Russian trolls from the Internet Research Agency. Their aim? To deepen divisions and stir strife. And the most astonishing part? They pulled it off for a mere $200.

Florida’s Orchestrated Flash Mob

Similarly audacious was an episode in Florida. Russian trolls, posing as Trump-friendly film producers, contacted local Trump supporters with a specific task: dress someone up like Hillary Clinton in a prison uniform, put her in a cage, and incite the crowd to chant “Lock her up.” Ann-Marie Margareth Thomas assumed the role of Hillary, even making her prison outfit from an old nurse’s uniform. This flash mob, like the Texas rally, was orchestrated from a troll factory 8500 kilometers away in St. Petersburg, Russia.

The rally really did take place, attracting hundreds of attendees and garnering media coverage, perpetuating the anti-Clinton sentiment that was boiling over in certain segments of the American populace. It wasn’t until later, through intensive investigations, that the true origins of the rally were revealed, sending shockwaves through the community and adding another layer to the labyrinthine plot spun by the Internet Research Agency.

Implications for Democracy

Both events serve as cautionary tales that outline the sophistication and reach of state-sponsored digital manipulation. The trolls didn’t merely create memes or false narratives; they successfully incited real Americans to take real actions that had tangible, divisive consequences. Moreover, they managed to keep their actions under the radar, fooling not just the American public but also tech giants who pride themselves on advanced, algorithm-driven platforms.

The incidents leave us wondering how free societies can safeguard against such external manipulations without compromising the democratic values they hold dear. These episodes demonstrate the urgent need for increased vigilance, not only among the authorities but also among everyday citizens, who must become more discerning consumers of information in an age rife with disinformation.

By leveraging a relatively small amount of resources, Russian trolls staged events that left a lasting imprint on the American psyche. The question remains: what can be done to prevent such cost-effective yet impactful manipulations in the future?

Facing the Inevitable Scrutiny and Shielding Ourselves From Disinformation

As tomi grows and moves closer to fulfilling its mission, scrutiny from governments, companies, and large institutions is not just a possibility — it’s more likely a certainty. Our alternative infrastructure challenges existing power structures, potentially disrupting how information, finance, and governance will operate in the future. While this aligns perfectly with the ethos of decentralization that we hold dear, there will be those who oppose us because they benefit from maintaining the status quo. Therefore, it’s not far-fetched to expect targeted disinformation campaigns aimed at delegitimizing our efforts.

The good news is that we’re not entirely defenseless against such tactics. The notion of a psychological “vaccine” against misinformation offers a compelling pathway to resilience. Psychologist Sander van der Linden has applied principles from epidemiology to develop a two-step approach: forewarning and “prebunking.”

In essence, “prebunking” involves two critical steps: initially alerting people to the likelihood of encountering manipulative tactics, and then exposing them to a weaker form of the misinformation along with strategies to debunk it. This proactive approach can offer a layer of protection not just for individuals but for our community as a whole. Given the scrutiny and potential disinformation campaigns we’re likely to face, should we perhaps consider creating our own series of educational content that aims to “prebunk” common manipulative tactics?

Fortifying Our Ecosystem

While prebunking serves as a strong first line of defense, it shouldn’t be our only strategy. Alongside it, we need to encourage continued education, fact verification, and community accountability, among many other things. The nature of the disinformation threat is ever-evolving, and we can’t predict its form or timing. This unpredictability necessitates a vigilant stance and a commitment to developing real-time countermeasures against emerging forms of disinformation. Taken together, these collective actions construct a robust, multi-layered shield against these threats.

As we build upon the discussions that introduced this article and this coming series, we also need to expand our focus to include self-preservation strategies against the digital pathogens of misinformation. By adopting a comprehensive approach that includes education, vigilance, technological safeguards, and perhaps even a psychological “tomi vaccine,” we prepare ourselves to meet future challenges head-on.

You’ve got to remember that the decentralized ecosystem tomi is building is not just a technological revolution; it’s a socio-cultural movement. As we inch closer to the realization of our vision, let’s also become more vigilant guardians of that vision. What we are creating has never been done before. We’re trying to create a new digital culture, and you are the main contributor to how that culture will evolve. Let’s not underestimate the lengths to which those who oppose us will go to undermine it, and let’s arm ourselves and our community with the tools needed to combat these threats effectively.

The journey ahead is filled with both opportunity and risk, but we are neither naive idealists nor helpless targets. We are a community of informed, empowered individuals, and together, we’re more than capable of turning the tables on those who seek to mislead us.

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok
