Artificial Intelligence for Cyber-Security: A Double-Edged Sword

Sciforce · Jan 16, 2020

Artificial intelligence (AI) and machine learning (ML) have made significant progress in recent years, and their development has enabled a wide range of beneficial applications. As they penetrate more sensitive areas, such as healthcare, more concerns arise about their resilience to cyber-attacks. Like any other technology, AI and ML can be used either to threaten security or to strengthen it with new means. In this post, we'll discuss both sides of ML: as a tool for malicious use and as a means to fight cyber-attacks.

From a security perspective, the rise of AI and ML is altering the landscape of risks for citizens, organizations, and states. Take the ability to recognize a face and to navigate through space with computer vision techniques, and you can build an autonomous weapon system. Natural language generation (NLG), the machine's ability to produce text and speech, can be used to impersonate others online or to sway public opinion.

AI Security Threats

First of all, let's discuss what attackers can do to AI-based systems. Cyberattacks are commonly framed around the classic triad of confidentiality, availability, and integrity, which translates into three main directions of attack:

Espionage, which in terms of cybersecurity means gleaning insights about the system and using the obtained information for profit or for plotting more advanced attacks. In other words, a hacker can probe an ML-based engine to learn more about its internals, such as the training dataset.

Sabotage, which aims to disable the functionality of an AI system, for example by flooding it with requests or by modifying the model.

Fraud, which in AI terms means causing the system to misclassify, for example by introducing incorrect data into the training dataset (data poisoning) or by manipulating the system at the learning or production stage (a minimal demonstration of data poisoning follows below).
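
To make data poisoning concrete, here is a minimal demonstration on synthetic data: flipping a fraction of one class's training labels biases a simple classifier and degrades its accuracy. The dataset, model, and flip rate are illustrative assumptions, not a recipe from any real incident.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: the attacker flips 40% of class-1 training labels to 0,
# biasing the learned decision boundary toward class 0.
rng = np.random.default_rng(0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.40)
y_poisoned = np.where(flip, 0, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy with clean labels:   ", clean.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned.score(X_test, y_test))
```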

How can ML be misused to carry out attacks?

This question worries everyone: from an old lady told that all her banking data will now be processed digitally (even if she would never use the word "AI") to UN officials.

The truth is, AI systems have inherent characteristics that foster attacks. As part of the digital world, AI systems increase anonymity and psychological distance: automation lets actors stand further apart, psychologically, from the people they impact. For example, someone who uses an autonomous weapons system to carry out an assassination avoids the need to be present at the scene and the need to look at their victim.

AI algorithms are open and can be reproduced with some skill. While hardware such as powerful computers or drones is difficult and costly to obtain or reproduce, anyone can gain access to the software and the relevant scientific findings.

On top of that, AI systems themselves suffer from a number of novel, unresolved vulnerabilities, such as data poisoning attacks (introducing training data that causes a learning system to make mistakes), adversarial examples (inputs designed to be misclassified by machine learning systems), and the exploitation of flaws in the design of autonomous systems' goals. These vulnerabilities differ from traditional software vulnerabilities (e.g. buffer overflows) and require immediate action to protect AI software.
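
To make the notion of an adversarial example concrete, here is a minimal sketch in plain NumPy against a toy linear classifier. The weights and the perturbation budget epsilon are invented for illustration; real attacks such as FGSM apply the same sign-of-gradient idea to deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear model: score = w.x + b, positive score => class 1.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# Find a legitimate input the model assigns to class 1.
x = rng.normal(size=20)
while predict(x) != 1:
    x = rng.normal(size=20)

# Sign-of-gradient perturbation: for a linear model the gradient of the
# score w.r.t. x is simply w, so stepping against sign(w) lowers the score.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print("original prediction:   ", predict(x))       # 1
print("adversarial prediction:", predict(x_adv))   # flips to 0 here
print("max per-feature change:", np.abs(x_adv - x).max())  # == epsilon
```

Each feature moves by at most epsilon, yet the score drops by epsilon times the sum of |w|, which is enough to flip the decision while the input still looks almost unchanged.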

Malicious use of AI can threaten security in several ways:

  • digital security by hacking or socially engineering victims at human or superhuman levels of performance;
  • physical security by affecting our personal safety with, for example, weaponized drones; and
  • political security by affecting the society through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns.

Digital security

  • Automation of social engineering attacks: NLP tools can mimic the writing style of a victim's contacts, so AI systems can gather online information and automatically generate personalized malicious websites, emails, or links that are more likely to be clicked on.
  • Automation of vulnerability discovery: Historical patterns of code vulnerabilities can help speed up the discovery of new vulnerabilities.
  • Sophisticated hacking: AI can be used in hacking in many ways. It can offer automatic means to improve target selection and prioritization, evade detection, and creatively respond to changes in the target's behavior, and it can imitate human-like behavior to drive the target system into a less secure state.
  • Automation of service tasks in criminal cyber-offense: AI techniques can automate various tasks that form the attack pipeline, such as payment processing or dialogue with ransomware victims.
  • Exploiting AI used in applications, especially in information security: Data poisoning attacks are used to surreptitiously maim or create backdoors in consumer machine learning models.

Physical security

  • Terrorist repurposing: Commercial AI systems can be reused in harmful ways, such as using drones or self-driving cars to deliver explosives and cause crashes.
  • Attacks removed in time and space: As a result of automated operation, physical attacks are further removed from the attacker, including in environments where traditional remote communication with the system is not possible.
  • Swarming attacks: Distributed networks of autonomous robotic systems allow monitoring large areas and executing rapid, coordinated attacks.
  • Endowing low-skill individuals with high-skill capabilities: While in the past executing attacks required skills, such as those of a sniper, AI-enabled automation of such capabilities — such as using self-aiming, long-range sniper rifles — reduces the expertise required from the attacker.

Political security

  • State use of automated surveillance platforms: State surveillance powers are extended by AI-driven image and audio processing that permits the collection, processing, and exploitation of intelligence information at massive scales for myriad purposes, including the suppression of debate.
  • Realistic fake news: Recent developments in image generation coupled with natural language generation techniques produce highly realistic videos of state leaders seeming to make inflammatory comments they never actually made.
  • Hyper-personalized disinformation and influence campaigns: AI-enabled analysis of social networks can identify key influencers to be approached with (malicious) offers or targeted with disinformation. On a larger scale, AI can analyze the struggles of specific communities and feed them personalized messages in order to affect their voting behavior.
  • Manipulation of information availability: Media platforms' content curation algorithms are used to drive users towards or away from certain content to manipulate their behavior. One example is bot-driven, large-scale denial-of-information attacks that swamp information channels with noise, creating an obstacle to acquiring real information.

Though there are many ways for AI to breach our safety and security, the question remains whether it can also be used to forecast, prevent, and mitigate the harmful effects of malicious uses.

How can ML help us to increase the security of applications and networks?

AI offers multiple opportunities for hackers and even terrorists, but at the same time, artificial intelligence and security were, in many ways, made for each other. Modern ML techniques seem to be arriving just in time to fill the gaps left by earlier rule-based data security systems. In essence, they address several tasks that improve security systems and help prevent attacks:

  • Anomaly detection — the task of defining normal behavior as falling within a certain range and flagging any other behavior as an anomaly and thereby a potential threat (a minimal sketch follows this list);
  • Misuse detection — the opposite task, in which malicious behavior is identified based on training with labeled data, and all traffic not classified as malicious is allowed through;
  • Data exploration — a technique to identify characteristics of the data, often using visual exploration, which directly assists security analysts by increasing the 'readability' of incoming requests;
  • Risk assessment — the task of estimating the probability that a certain user's behavior is malicious, either by assigning an absolute risk score or by classifying users based on the probability that they are bad actors.
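
As a minimal illustration of the anomaly-detection task above, here is a sketch using scikit-learn's IsolationForest. The two "traffic features" (request rate and payload size) and their distributions are invented for illustration; a real system would engineer such features from actual traffic logs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal traffic: two features clustered around typical values
# (requests per minute, payload size in bytes).
normal = rng.normal(loc=[10.0, 500.0], scale=[2.0, 50.0], size=(1000, 2))

# Train only on traffic assumed to be normal.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal)

# Score new observations: +1 = looks normal, -1 = anomaly.
new_requests = np.array([
    [11.0, 520.0],   # ordinary request
    [95.0, 480.0],   # abnormal request rate
    [9.0, 9000.0],   # abnormal payload size
])
print(detector.predict(new_requests))  # e.g. [ 1 -1 -1]
```

The same fitted model's decision scores could also feed the risk-assessment task: instead of a hard +1/-1 label, `decision_function` returns a continuous score usable as a risk ranking.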

Artificial Intelligence and Security Applications

  • Defense against hackers and software failures: The software that powers our computers and smart devices is subject to errors in the code, as well as security vulnerabilities that can be exploited by human hackers. Modern AI-driven systems can search out and repair these errors and vulnerabilities, as well as defend against incoming attacks. For example, an AI system can find a bug and determine whether it is exploitable; if so, it autonomously produces a "working control flow hijack exploit string" that proves the vulnerability so it can be secured. On the predictive side, projects such as the artificial intelligence platform AI2 predict cyber-attacks by continuously incorporating input from human experts.
  • Defense against zero-day exploits: Protection against such attacks is crucial since they are rarely noticed right away; it usually takes months to discover and address these breaches, and in the meantime large amounts of sensitive data are exposed. Machine learning can protect systems against such attacks by flagging abnormal data movement and spotting the outliers that indicate malicious behavior.
  • Crime prevention: Predictive analytics and other AI-powered crime analysis tools have made significant strides. Game theory, for example, can be used to predict when terrorists or other threats will strike a target.
  • Privacy protection: Differential privacy has been written about for some years, but it is a relatively new approach with mixed feedback as to its scalability. It offers a way to maintain private data on a network while providing targeted "provable assurances" to the protected subpopulation and using algorithms to investigate that population. This type of solution can be used to find patterns or indications of terrorists within a civilian population, or to find infected citizens within a larger healthy population, among other scenarios (a sketch of the underlying Laplace mechanism follows this list).
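
To give a flavor of how differential privacy works, below is a minimal sketch of the Laplace mechanism, its textbook building block. The privacy budget epsilon and the counting query are assumptions chosen for illustration, not a production design.

```python
import numpy as np

rng = np.random.default_rng(7)

def private_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Return a differentially private count: true count + Laplace noise.

    A counting query has sensitivity 1, because adding or removing one
    person changes the count by at most 1. Smaller epsilon means more
    noise and stronger privacy.
    """
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# E.g. "how many records in this population match a sensitive criterion?"
true_count = 42
for _ in range(3):
    print(round(private_count(true_count), 1))
# Analysts see a useful approximation of 42, but no single individual's
# presence or absence can be confidently inferred from the output.
```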

Potential applications of AI for protection of industry and consumers

The field of artificial intelligence is growing constantly, embracing new techniques and creating systems that could not even have been imagined a decade ago.

An example of such development is IoT-based security: The Internet of Things (IoT) is enabling cost-efficient implementation of condition-based maintenance for a number of complex assets, with ML playing a driving role in the analysis of incoming data. With the resources that IoT provides, the process of anomaly detection and, therefore, failure and crime prevention will become significantly more effective and rapid.
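
As a minimal sketch of what such IoT-side anomaly detection can look like, the snippet below flags sensor readings that drift too far from a rolling baseline. The window size, threshold, and simulated readings are illustrative assumptions.

```python
from collections import deque
import statistics

WINDOW, THRESHOLD = 50, 3.0  # assumed tuning parameters
history = deque(maxlen=WINDOW)

def check_reading(value):
    """Return True if the reading looks anomalous against recent history."""
    anomalous = False
    if len(history) == WINDOW:
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        # Flag readings more than THRESHOLD standard deviations from the
        # rolling mean of the last WINDOW readings.
        if stdev > 0 and abs(value - mean) / stdev > THRESHOLD:
            anomalous = True
    history.append(value)
    return anomalous

# Simulated vibration readings: stable, then a spike as a bearing degrades.
readings = [1.0 + 0.02 * (i % 5) for i in range(60)] + [2.5]
for i, r in enumerate(readings):
    if check_reading(r):
        print(f"reading {i}: {r} flagged as anomalous")
```

A production system would likely exclude flagged readings from the baseline and combine several sensors, but the core idea of condition-based maintenance is exactly this: learn what "normal" looks like and react early to deviations.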

The potential for the use of AI applications in improving security is limited only by our imagination, since AI can upgrade the existing approaches and come up with completely new ones. Just a few examples of application categories that can be examined:

  • Spam filter applications (see the sketch after this list);
  • Network intrusion detection and prevention;
  • Credit scoring and next-best offers;
  • Botnet detection;
  • Secure user authentication;
  • Cyber security ratings;
  • Hacking incident forecasting, etc.
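
To make the first of these categories concrete, here is a minimal spam-filter sketch: a multinomial naive Bayes classifier over bag-of-words counts, via scikit-learn. The tiny training set is invented for illustration; a real filter would train on a large labeled corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: two spam and two legitimate ("ham") messages.
train_texts = [
    "win a free prize now",                # spam
    "claim your free lottery money",       # spam
    "meeting moved to 3pm",                # ham
    "please review the attached report",   # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

# Vectorize word counts, then fit a multinomial naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["free money, claim your prize"]))  # ['spam']
print(model.predict(["report for the 3pm meeting"]))    # ['ham']
```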

Conclusion

AI is a dual-use area of technology: the same system that examines software for vulnerabilities can have both offensive and defensive applications, and there is little technical difference between the capabilities of a drone delivering packages and those of a drone delivering explosives. Since some tasks that require intelligence are benign and others are not, artificial intelligence is inherently dual — but so is human intelligence.
