
Disinformation Is a Cybersecurity Threat

Cognitive hacking is a threat from disinformation and computational propaganda. It is a cyberattack that exploits psychological vulnerabilities, perpetuates biases, and eventually compromises logical and critical thinking, giving rise to cognitive dissonance.

Ashish Jaiman
Jan 30, 2021 · 8 min read

--

The Book is now available at Amazon — https://www.amazon.com/Deepfakes-aka-Synthetic-Media-Humanity-ebook/dp/B0B846YCNJ/

Cybersecurity focuses on protecting and defending computer systems, networks, and digital infrastructure from attacks that would disrupt our digital lives. Nefarious actors compromise the confidentiality, integrity, and availability of IT systems for their benefit. Disinformation is, similarly, an attack on and compromise of our cognitive being. Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises manipulate information to create social discord, increase polarization, and, in some cases, influence election outcomes.

The difference between a disinformation attack and a cyberattack is the target. Cyberattacks are aimed at computer infrastructure, while disinformation exploits our inherent cognitive biases and logical fallacies. In traditional cybersecurity attacks, the tools are malware, viruses, trojans, botnets, and social engineering. Disinformation attacks use manipulated, mis-contextualized, misappropriated information, deepfakes, cheapfakes, and so on.

There is a lot of similarity in the actions, strategies, tactics, and harms of cybersecurity and disinformation attacks. In fact, we see nefarious actors using both types of attack for disruption: pairing information operations with cyberattacks creates even more havoc. They may lead with a disinformation campaign as reconnaissance for a cyberattack, or use data exfiltrated in a cyberattack to launch a targeted disinformation campaign.

Historically, the industry has treated these attacks separately, deploying different countermeasures and even staffing different teams that work in silos to protect and defend against them. The lack of coordination between teams leaves a gap that attackers exploit.

Cognitive Hacking

Cognitive hacking is a threat from disinformation and computational propaganda: a cyberattack that exploits psychological vulnerabilities rather than technical ones. The goal of disinformation is to change individual thought and behavior and to galvanize groups in ways that disrupt social harmony. A cognitive hacking attack attempts to change the target audience's thinking and actions by manipulating the way they perceive reality. The attack on the US Capitol by right-wing groups and individuals on January 6th, 2021, is a prime example of the effects of cognitive hacking.

The implications of cognitive hacking can be even more devastating than cyberattacks on critical infrastructure. The damage wrought by disinformation is challenging to repair because people form opinions based on cognitive biases, and those can be extraordinarily difficult to overcome. Influence operations shape people to believe in their heart and mind that what they “know” is correct. Cognitive hacking is not new: revolutions throughout history have used these techniques to significant effect to overthrow governments and change societies. “Misinformation is by no means new — from the beginning of time, it is a key tactic by people trying to achieve major goals with limited means,” according to Rodney Joffe, the chairman of NISC. [1]

In the 1930s, the misinformation that Jews represented a threat and thus must be eliminated galvanized the German people and led directly to the Holocaust. The Americans and British successfully used misinformation to convince German commanders that the primary invasion of the European continent would happen at Pas-de-Calais instead of Normandy, saving countless Allied lives. Closer to modern times, the anti-vax movement uses misinformation to discourage people from being vaccinated, potentially increasing the death rate from preventable diseases. QAnon spread false claims that the 2020 election was fraudulent, and conspiracy theorists burned down 5G towers because they believed 5G was connected to the pandemic. [2]

Real-World Harms

The effects of disinformation can be more destructive than those of viruses, worms, and other malware. Disinformation campaigns can wreak havoc on individuals, governments, societies, and businesses. The purpose of disinformation is to mislead and cause harm. Advertisement-centric business models and the attention economy incentivize malicious actors to run sophisticated disinformation campaigns that fill information channels with noise, drowning out the truth at unprecedented speed and scale.

Distributed Denial of Service (DDoS) is a well-coordinated cybersecurity attack that floods the target's digital services and networks with superfluous connection requests, overloading the system and preventing legitimate requests from being fulfilled. Similarly, a well-coordinated disinformation campaign fills broadcast and social channels with so much false information and noise that it starves the system of oxygen and drowns out the truth.
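The flooding dynamic can be made concrete with a toy sketch in Python. It is purely illustrative: the capacity and request counts are invented for the example, and real DDoS attacks involve network-level traffic, not Python lists.

```python
import random

def serve(requests, capacity=100):
    """Toy model: a service handles at most `capacity` requests per tick;
    everything beyond capacity is dropped, legitimate or not."""
    handled = requests[:capacity]
    return sum(1 for r in handled if r == "legit")

random.seed(0)
legit = ["legit"] * 50            # normal load: well under capacity
flood = ["bogus"] * 10_000        # an attacker floods the queue
mixed = legit + flood
random.shuffle(mixed)             # requests arrive interleaved

print(serve(legit))               # without the flood, all 50 legitimate requests are served
print(serve(mixed))               # under the flood, almost none get through
```

The same crowding-out applies to the disinformation analogy in the paragraph above: attention, like server capacity, is finite, and noise consumes it before the truth is served.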

Disinformation is used for social engineering threats on a mass scale. Like phishing attacks that compromise IT systems to extract data, disinformation campaigns play on emotions, giving cybercriminals another feasible method for scams. We have seen swindlers raise money for fraudulent causes using disinformation campaigns.

In 2017, in India, rumors spread on WhatsApp that a band of roving kidnappers had infiltrated villages to grab young children. One version claimed that the kidnappers sold organs harvested from the children. It even included photos of a gruesome crime scene with children's bodies laid out in rows. These weren't fake photos but misappropriated and miscontextualized information, aka malinformation: they were pictures taken in Syria of children killed during a chemical attack. Several men were violently attacked in the mistaken belief that they were part of this kidnapping ring, and thirty-three people were killed between January 2017 and July 2018. [3]

The highly organized and targeted social media campaigns run by nation-state actors during the 2016 US elections dramatically affected the electorate. We don't know whether they changed the outcome, but at a minimum they widened the divides between groups in the United States. If the Russian intent was to sow chaos in the election, they succeeded, probably beyond their wildest dreams.

Deepfakes and other synthetic media add a whole new level of danger to disinformation campaigns. Suffice it to say that a few high-quality, highly targeted disinformation campaigns built on deepfakes could widen the divides in democracies even further and cause unimaginable chaos, with increased violence, damage to property, and loss of life.

Lessons from Cybersecurity

If we apply the decades of experience the cybersecurity domain has in defending, protecting, and responding, we will find effective and practical solutions to counter computational propaganda and the infodemic. Cybersecurity experts have successfully understood and managed malicious threats posed by viruses, worms, hackers, and a wide variety of other issues. We must treat disinformation as a cybersecurity issue to find effective countermeasures to cognitive hacking.

A report released by the Neustar International Security Council (NISC) found that 48% of cybersecurity professionals regard disinformation as a threat, and of the remainder, 49% say that threat is very significant. 91% of the cybersecurity professionals surveyed said that stricter measures must be implemented on the Internet. [1]

Disinformation campaigns, computational propaganda, and information warfare are cybersecurity threats.

The builders of IT systems and the Internet didn't think about security until the first malicious actors started exploiting vulnerabilities. The industry soon learned from its mistakes and invested heavily in security best practices, making security a first-class design principle when building IT services and platforms. The technology industry developed rigorous security frameworks, guidelines, standards, and best practices through public-private collaboration. Threat modeling, the secure development lifecycle, and red team vs. blue team exercises (attacking a system to find and fix vulnerabilities) are used before deploying IT systems. ISACs (Information Sharing and Analysis Centers) and global knowledge bases of security bugs, vulnerabilities, threats, and adversarial tactics and techniques are published to improve the security posture of IT systems.

Today, the response to disinformation is developed in silos on each platform with little or no coordination. For example, WhatsApp limits message forwarding, YouTube, Twitter, and Facebook label or remove misleading posts, and some social media platforms delete or suspend accounts that engage in disinformation. There are no consistent taxonomies, definitions, policies, norms, or responses for disinformation campaigns and their actors across platforms. This inconsistency enables perpetrators to push the boundaries and move between platforms to achieve their nefarious goals.

We must develop disinformation defense systems by studying strategies and tactics from the cybersecurity domain, mitigating disinformation threats with lessons learned from cybersecurity, such as building a defense-in-depth strategy and understanding malicious actors' identities, activities, and behaviors. Today, each social media platform works with its own internal and external fact-checking organizations. If there were a mechanism to share the identity, content, context, actions, and behaviors of actors and disinformation across platforms, countermeasures would scale better and responses would be quicker.
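As a sketch of what such a cross-platform sharing mechanism could look like, consider the following Python toy. The schema, class names, and labels are hypothetical, loosely inspired by how ISACs and hash-sharing consortia exchange threat indicators: platforms share a content hash plus context, never the raw content itself.

```python
import hashlib
import json

def make_indicator(content, label, source_platform):
    """Build a shareable indicator record (hypothetical schema)."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "label": label,                 # e.g. "manipulated-media"
        "source": source_platform,      # which platform first flagged it
        "schema_version": 1,
    }

class IndicatorExchange:
    """Toy clearinghouse: platforms publish indicators, others match against them."""
    def __init__(self):
        self._by_hash = {}

    def publish(self, indicator):
        self._by_hash[indicator["sha256"]] = indicator

    def lookup(self, content):
        return self._by_hash.get(hashlib.sha256(content).hexdigest())

exchange = IndicatorExchange()
fake_video = b"...bytes of a known deepfake..."
exchange.publish(make_indicator(fake_video, "manipulated-media", "platform-a"))

# A second platform can now recognize the same content at upload time.
hit = exchange.lookup(fake_video)
print(json.dumps(hit, indent=2) if hit else "unknown content")
```

A real system would use perceptual hashes (so trivially re-encoded media still matches) and a standard interchange format, but the core idea is the same: shared indicators let every platform benefit from any platform's detection.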

Defense-in-depth is an information assurance strategy that provides multiple, redundant defensive measures in case a security control fails or a vulnerability is exploited. For example, firewalls are the first line of defense against threats from external systems. Antivirus systems defend against attacks that get through the firewalls. Regular patching helps eliminate vulnerabilities from the systems. Smart identity protections and user education are essential so that users don't fall victim to social engineering attempts.

We need a similar defense-in-depth strategy for disinformation. Such a model identifies disinformation actors and removes them from the system. Before disinformation is posted, authenticity and provenance solutions can intervene. If disinformation still gets past detection solutions that use humans and AI, internal and external fact-checking can label or remove the content. When disinformation of any type is detected, organizations must defend, protect, and, if appropriate, respond.
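A minimal Python sketch of what those layers might look like in code follows. The layer logic, actor list, and claim list are invented stand-ins for real actor-intelligence, provenance, and fact-checking systems; the point is the structure, where each layer can catch what the previous one missed.

```python
KNOWN_BAD_ACTORS = {"troll-farm-01"}          # stand-in for actor intelligence
DISPUTED_CLAIMS = {"5g-causes-pandemic"}      # stand-in for fact-check results

def actor_check(post):
    # Layer 1: known bad actors are removed from the system.
    return "block" if post["author"] in KNOWN_BAD_ACTORS else "allow"

def provenance_check(post):
    # Layer 2: content without verifiable authenticity metadata gets flagged.
    return "allow" if post.get("provenance_signed") else "label"

def fact_check(post):
    # Layer 3: human/AI fact-checking labels disputed claims.
    return "label" if post["claim"] in DISPUTED_CLAIMS else "allow"

def moderate(post):
    """Apply the layers in order; the strictest verdict so far wins."""
    verdict = "allow"
    for layer in (actor_check, provenance_check, fact_check):
        result = layer(post)
        if result == "block":
            return "block"      # no later layer can override a block
        if result == "label":
            verdict = "label"   # keep checking in case a later layer blocks
    return verdict

post = {"author": "user-42", "claim": "5g-causes-pandemic", "provenance_signed": True}
print(moderate(post))   # prints "label": the fact-check layer catches it
```

As with firewalls, antivirus, and patching, no single layer needs to be perfect; the redundancy is the defense.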

There are laws and regulations for cybercriminals. A global initiative, the Paris Call [4], brings the international community together to ensure peace and security in the digital space. Its third principle focuses on election interference, and the community dedicates significant work to disinformation. More than 1,000 entities, including governments, NGOs, and private-sector organizations, have signed on to promote stability and security in the information space. Similarly, 52 countries and international bodies have signed the Christchurch Call [5] to eliminate terrorist and violent extremist content online.

A critical component of cybersecurity is education. The technology industry, civil society, and governments have done a lot of work to make users aware of threat vectors like phishing, viruses, and malware. People are the weak link in disinformation because of their logical fallacies and cognitive biases: they tend to share false information that sounds good or “feels real” without spending any time validating it. The industry, through public-private partnerships, must invest similarly against the threat posed by disinformation, training people to become better at discerning false information from real information.

Freedom of speech and freedom of expression are protected rights in most democracies, including the United States. Balancing the rights of speech with the dangers of disinformation is challenging for policymakers and regulators.

The disinformation infodemic requires a concerted and coordinated effort by governments, businesses, NGOs, and other entities to create standards and implement defenses. Taking advantage of the frameworks, norms, and tactics that we’ve already created for cybersecurity is the optimum way to meet this threat. We must protect our society against the threats or face the real possibility of societal breakdown, business interruption, and violence in the streets.


Ashish Jaiman

Product Leader @Microsoft Bing #Growth #Monetization #Community #Generative #Responsible AI #Startup Published Author https://www.linkedin.com/in/ashishjaiman