Why we need a joint investigative mindset to combat organized disinformation networks

Organized disinformation is an ongoing investigative story: who is behind the networks, and what are the links and money transfers?

Anna Gielewska
JSK Class of 2020
5 min read · Dec 13, 2019


Credit: stock.adobe.com

If it weren’t for investigative journalists, we would have no idea how much we’ve been targeted and manipulated on social media over the last five years (remember the Cambridge Analytica investigation). We would know little about computational propaganda without the cooperation of cyber experts and academic researchers, who have developed tools to track how Russian disinformation spreads around the globe. The whistleblowers who challenged the “disinformation industry” with their stories and documents have also been key to our understanding.

But we still don’t know enough, especially since the world of organized disinformation is rapidly changing and adopting new tactics. So far, journalistic methods for countering this information disorder have focused mainly on fact-checking and media literacy initiatives. Media literacy initiatives, as long-term educational projects, can immunize an audience against disinformation and misinformation, but that takes years. Fact-checking initiatives, when implemented in cooperation with newsrooms, have improved the quality of journalistic work. As external initiatives, though, they struggle to have impact: debunked information never reaches as big an audience as the fake news itself. And when a disinformation campaign is already underway, fact-checking is simply triage after the battle.

So how might we act sooner? “If you are chasing what is right now, you are late,” says Adam Tobin, a senior lecturer in the Film and Media Studies program at Stanford, about predicting the next “big thing” in the movie industry. When I heard him say this at one of our JSK Fellowship sessions, it struck me as the perfect description of investigating the organized disinformation market.

While journalists have developed new skills to spot trolls and verify information on social media platforms, malicious content is now spreading through new channels, mostly in closed groups on messaging apps. Such groups are more challenging to investigate, especially as disinformation campaigns target forgotten communities, news deserts and the like. The sources of attacks are also becoming harder to locate as state actors beyond Russia enter the field: from China to Brazil, to Iran, to Saudi Arabia, and Turkey, to name just a few. The picture looks even darker when we add the development of deepfakes and AI technology, which is increasingly used to amplify manipulated content.

In Central Europe, domestic populists and growing far-right movements follow tactics from the disinformation and propaganda playbook for their own political purposes. Audiences targeted simultaneously by various external and internal disinformation campaigns become less and less able to differentiate between truth and lies. At the same time, press freedom in countries like Poland and Hungary keeps declining.

Before coming to Stanford to explore how we can better deal with organized disinformation networks, I asked cyber experts in Poland for their insights into the latest Russian attacks. They stressed that focusing on trolls and bots is no longer enough, because Russians are constantly testing new types of operations. For example, instead of creating fake accounts, they take over existing ones and collaborate with the real people who operate them. Hundreds of agents in different countries are recruiting and training real people to spread these narratives locally. Experts at the Stanford Internet Observatory have recently described in detail how such operations work in African countries.

Social media platforms used to be compared to a big city, with its dangerous streets and suburbs; now they look more like a place taken over by organized crime groups fighting each other. So what can we learn about tackling disinformation networks from investigative journalists’ approach to organized crime and corruption?

First of all, they cooperate. Cross-border teams search through thousands of business files or leaked documents to uncover hidden networks and money flows: many people in many countries, working for months on one story. They support this work by building and developing open-source databases and applying machine learning to big data.

Organized disinformation is an ongoing investigative story. The questions to answer are well known: who is behind the networks, and what are the links and money transfers? Yes, once again, follow the money. I believe journalists can take more initiative as they gain new expertise. To investigate organized corruption, we had to develop our understanding of business files and international financial operations; now we need to learn more about the backstage of the disinformation industry and apply the same cross-border, investigative mindset to uncover the links.

In the best-case scenario, this mindset could be built with a cross-sector approach. While I’m at Stanford, I am exploring this question: How might we strengthen networks of investigative journalists, academic researchers, cyber experts and watchdogs to investigate and monitor organized disinformation?

There are some good examples of collaborative networks established in European countries during their elections, yet they have focused mainly on fact-checking and debunking. I am analyzing how, over the long run, we can more effectively investigate key disinformation actors, their distribution channels, methods and money transfers.

So far, it seems clear that there are differences in approaches, agendas and goals to consider. For example, academic researchers who get data from platforms like Facebook are more sensitive about protecting the identities of disinformation actors, so they often decide (or are obliged by an agreement with the platform) not to reveal key data that could be an entry point for investigative journalists. Cyber operations specialists, who have access to the most advanced tools for tracking disinformation networks, focus mainly on running actions behind the public’s back, while OSINT experts and analysts usually take a more business-oriented approach.

Despite these distinctions, there is a potential shared goal: protecting democracy. And there is at least one shared challenge: very limited access to data from social media platforms, especially from Facebook, which still has the most opaque policy. At the same time, the company has been promoting itself as a force against disinformation, announcing partnerships with media outlets and fact-checking organizations across the world. In practice, however, the platform usually provides journalists only the limited data it wants to share, and only for fact-checking. As a result of journalistic work, Facebook may shut down fake pages, but it remains unwilling to share fundamental information. How might journalists navigate the concerns and dilemmas emerging from this situation? To what extent can big tech companies be partners in countering disinformation when they are also the subjects of investigations? These are some of the questions I would like to explore further.

In the coming months at Stanford, I will research these issues further and focus on the best methods and tools for investigating organized disinformation networks. If you are a journalist, academic researcher, cyber expert, watchdog or tech company employee interested in discussing these ideas and sharing insights, please reach out to me!

Email: gieanna@stanford.edu

Twitter: @agielewska


Anna Gielewska is a JSK 2020 Stanford Fellow, a political and investigative reporter from Poland, vice-president of the Reporters Foundation, and coordinator of vsquare.org. She is also an author and media trainer.