Dismantling
spyware disinformation campaigns
In early 2022, just as the pandemic was becoming more manageable and we could all see the light at the end of the tunnel, I spotted a Twitter user sharing misguided information about Pegasus, a mercenary spyware developed by a company called NSO. Since then I have found myself at the centre of a disinformation storm, juggling personal attacks with providing support to anyone confused about what is fact and what is conspiracy theory. Along the way I have discovered many disinformation agents who produce a lot of content that tries to exonerate mercenary spyware abusers. This is a story of what I have learned and what you may find useful.
Stages of spyware disinformation
Each spyware disinformation campaign shares three main stages, which may overlap and repeat continuously. The disinformation agent tries to manufacture authority, create disinformation droplets, and boost their message. Each stage requires different techniques and some clever tactics.
Manufacturing authority
Creating the appearance of authority where there is none is an important step. Since the disinformation an agent shares later will be false, even non-technical users will approach it with a dose of scepticism; creating the illusion of being an expert helps make the claims more credible. There are several ways this can be done.
One way is to carefully frame academic achievements. For example, a disinformation agent may claim to be a PhD student but will not disclose which university they attend, for fear of being discredited. It may well be one of the online-only diploma mills which start their PhD programme every Monday.
Another way to create an illusion of authority is by carefully wording one’s academic title (e.g. “professor”) without specifying the research field. For example, one of the letters pushing back on Citizen Lab’s claims regarding CatalanGate was signed by, to quote a disinformation agent, “over 100 professors”. However, the list of professors, as far as I was able to verify, contains at most two academics who are even tangentially related to computer science (I have to note that I was not able to verify the credentials of some of the academics). Besides these two, there are people who specialise in marine biology, literature or the history of philosophy. Of course, that is not to say that a marine biologist or, say, a pharmacist cannot be an expert in cybersecurity. However, one would expect the signatories of such a letter to sign it with their cybersecurity affiliations. Interestingly, even though the letter is written in English, the academic affiliations are given in Spanish.
Another technique, slowly becoming more popular, is providing innocuous, true, high-level quotes to the media in order to get the disinformation agent’s name known as an expert in the field. The quotes are non-controversial statements everyone can agree with (e.g. “malware is bad”), given so that the media appearance can be referenced later when spreading disinformation. This tactic is particularly effective because the media publication is unlikely to take down the quote: it lacks the wider context around the person who provided it, and after all, it is a correct statement.
Yet another method is to amass a significant number of followers on social media. It does not matter how you got these followers; as long as your account is seasoned enough, people will start believing your claims. This is particularly true in the computer security field, where people come from all walks of life and academic background or formal qualifications matter much less than peer validation of one’s work.
Creating disinformation droplets
I once read an article (which I cannot find anymore) that called disinformation snippets “droplets”. This is because they can pool together to create bigger narratives and, if repeated often enough, are able to wear down even a rock. The main goal of disinformation agents is to have an audience for their claims. Since they are not constrained by the truth, they can try as many different and confusing narratives as possible and see which ones stick together and which they have to drop.
Some droplets focus on their form, not their content. The letter mentioned in the previous section looks very formal, yet once you look closely at the details the message quickly falls apart. One of the most striking examples of form over content are the flashy reports published on social media websites. They look like high-quality content, but they do not deliver any sensible arguments. I have already debunked some of them (for example in “Misinformation in malware analysis”) and they contain laughably bad logic with basic errors, e.g. not understanding how a calendar works.
Other droplets target the people who publish information about the spyware rather than the actual contents of their reports. These attacks are a misguided attempt at creating a conspiracy theory. For example, one droplet tries to convince the audience that the authors are working for Russia, asking “why haven’t these authors talked about Russian spyware as much as they talk about non-Russian spyware?” Obviously this is meant to give rise to new conspiracy theories, while simultaneously hedging behind the “just asking questions” defence.
Other droplets directly target the victims of spyware. By painting them as criminals or even terrorists, the disinformation agents try to dismiss the report of spyware abuse as regular law enforcement activity. The attacks are particularly vicious, designed to make the audience lose any sympathy towards people whose electronic lives have been completely stolen and who are now afraid to use any devices. Sometimes the agents go as far as contacting the victims under false pretences to gather information which can be used against them.
Interestingly, even the disinformation agents themselves cannot agree on the appropriate volume of personal attacks. A disinformation agent, who also peer-reviewed one of the disinformation papers, noted:
[…] although I unsuccessfully him [sic] to avoid including certain personal judgements, which do not benefit his good technical results. It was published in Reseachgate[…]
Finally, some droplets are meant to obscure reality as much as possible. They mix technical terms into technobabble, which to any expert is a string of incoherent sentences, but to a non-expert audience may look credible. This can be, for example, a confusion between a file name and a process name, or a claim that a process is benign because a similar process name can be found on many devices. Of course, to any expert, malware pretending to be a different process by using a similar but not identical name to a benign process is an obvious and popular cloaking technique. However, a non-expert may be persuaded that it was merely an error in judgement.
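The cloaking technique can be sketched in a few lines. This is a hypothetical illustration, not taken from any real report: the process names and the similarity threshold are my own assumptions. An exact-match check against known benign names misses a lookalike, while a simple similarity measure makes the imitation obvious.

```python
# Hypothetical sketch: malware often masquerades under a process name that
# closely resembles a legitimate one, so the reasoning "the name looks
# familiar, therefore the process is benign" gives false reassurance.
from difflib import SequenceMatcher

# Example names for illustration only: "svchost.exe" is a real Windows
# process, "svch0st.exe" is a made-up lookalike.
KNOWN_BENIGN = {"svchost.exe", "explorer.exe", "launchd"}

def naive_check(process_name: str) -> bool:
    """The flawed droplet logic: an exact match against familiar names."""
    return process_name in KNOWN_BENIGN

def lookalike_score(process_name: str) -> float:
    """How closely a name imitates a known benign process without matching it."""
    return max(
        SequenceMatcher(None, process_name, benign).ratio()
        for benign in KNOWN_BENIGN
        if process_name != benign
    )

# The impostor fails the exact match...
assert naive_check("svch0st.exe") is False
# ...yet is suspiciously close to a benign name, which is exactly the
# cloaking technique the droplet glosses over.
assert lookalike_score("svch0st.exe") > 0.9
```

The point of the sketch is not the similarity metric itself, but that "this name appears on many devices" says nothing about a process that deliberately imitates it.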
Boosting the message
Once the disinformation agent has manufactured authority and created disinformation droplets, they have to make sure the message reaches a bigger audience. This is a crucial step in a disinformation campaign: its primary goal is to reach as many people as possible, by any technique available.
For spyware disinformation the boost may come from the government or from internet trolls who align their views with the government. This is an extremely easy way to reach an audience, since these accounts do not care about the actual truth (much like the disinformation agents) but they do care about the message. They may even openly admit it by writing, for example, “I just want to point out that the experts disagree on this and it is for people to choose what is right”, as if the underlying absolute truth as to whether spying took place did not exist. This boost is particularly strong for droplets containing personal attacks, as political discourse has shifted from discussing ideas to personally attacking opponents.
Another way to boost the message is through fake news websites. These websites, pretending to be journalistic outlets, push articles which look professional but contain pure disinformation. The layout and the name of the website create an illusion of being just another big news medium, while in reality it is just another internet blog. However, there is a fine line to tread here: too much disinformation and people will brand the website as untrustworthy. These websites also have to include actual news to make themselves more credible.
Finally, some preprint servers, such as ResearchGate, are effectively social media. They exercise very little editorial control and will host spyware disinformation reports even if the reports are wrong. This is particularly dangerous, because the general audience views preprint servers as a source of scientific papers. In reality almost anyone can upload any PDF file to them and claim that they have “published a paper”. ResearchGate is just another social network, with the difference that all posts have to be submitted as PDF files.
What should we do?
We need to fight spyware disinformation. Depending on your level of technical expertise and your job description you may take different actions. However, I believe that everyone can take part in this effort.
If you are an activist you can focus on fighting the illusion of authority. Humour is the best weapon against any authority. Pointing out the differences between perceived and actual authority in a humorous way can make your message more powerful (and very likely more shareable). This tactic is well known to anyone who has ever experienced an authoritarian regime. For example, in communist Poland artists found humorous ways to point out problems with politicians and the political system.
You can also contact institutions whose names are being abused. For example, if a disinformation agent uses a university’s name to give themselves more credibility and create an illusion of doing research as part of a university course, you can let the university know. At the very least they will lose plausible deniability; at most they will take some action. You can also provide context to a media publication whose pages were used to manufacture a disinformation agent’s credentials by giving space to their quotes.
As an expert you can debunk the disinformation droplets. Remember that you are not speaking to other experts, to disinformation agents, or even to the people who boost the droplets, but to the general public. Provide simple, understandable points which illustrate the issues with the original statement. Try to limit the use of technical language, as that will differentiate your debunking from the technobabble used to create the droplet. Finally, you can also provide support to media and journalists who try to understand the issues. In my experience, a combination of simple debunking directed at the general public and personal outreach works best for combating disinformation as an expert.
If you are a journalist or you work in media, try to understand the bigger context behind these campaigns. Some people have tracked these campaigns for a very long time and may take certain information for granted. Sometimes the context of a quote may be more important than the quote itself. Reporting on disinformation agents is also very effective, although sometimes it comes too late.
In one example, disinformation agents managed to get invited as experts before a committee investigating spyware abuse. The committee lacked the context necessary to understand why this was wrong and why the testimony should not happen. Only after the invitation was partially rescinded did the media start reporting on the disinformation campaign. Earlier reporting could have prevented the invitation altogether.
Finally, you can also highlight the debunking efforts made by the community. Debunking is difficult and requires far more work than creating disinformation droplets. There are high-quality debunking efforts, such as blog posts, mailing lists, or even reports published on ResearchGate, which make it easy to understand the core issues.
Why not simply ignore it?
The question I get asked the most is: why not simply ignore spyware disinformation efforts? We, the experts, know that the disinformation agents are wrong, so ignoring them and not giving them a boost should be enough. If everyone ignored them, there would be no problem.
The problem with that approach is that not everyone will ignore them. Governments have an obvious stake in spreading reports which exonerate them. Trolls who align with government politics will obviously boost the message. Some media, mostly state-aligned, will also report these droplets. Politicians will use them as “proof” in their arguments. We have to drop the notion that it will simply go away.
For those who prefer bullet points
The lessons I have learned while being in the middle of this storm can be summarised in simple points.
- Debunking is meant for the general population and media, not for experts.
- Try not to engage directly with disinformation agents. If their post goes viral, write a reply directed at the people who see the post, not at the disinformation agent.
- Try not to engage directly with troll accounts who boost disinformation.
- DMs are an effective tool when explaining disinformation to media or interested people. Doing it in public creates too much noise.
- Do not assume that the institutions whose names are being used in disinformation campaigns approve of them or even know about them.
- Describe disinformation campaigns at both a high and a low level. Overviews are very useful for giving context when it is needed.
- Block and mute the trolls, but not disinformation agents.
- Trolls and disinformation agents do not argue in good faith. Do not debate them. You have to work with truth, they can work with anything.
- Know when to stop — this is the one that is the hardest. At some point you will exhaust the pool of people who are genuinely interested in getting to the truth. The rest will never be convinced. That is when you stop.
Finally, take breaks. Often. Hydrate.