Artificial intelligence against cyber crime

Experts at DEKRA
DEKRA Product Safety
4 min read · Aug 30, 2018

It is possible to repel cyber attacks by using artificial intelligence (AI). However, AI systems can also attack. Criminals have long used this for malicious purposes. Will there be direct battles of AI defense systems against AI attack systems in the future?

Artificial intelligence (AI) enables voice assistants, supports elderly care, and helps cars recognize traffic signs and make driving decisions.

Not surprisingly, the bad guys have also discovered the power of self-learning algorithms: security researchers report that cyber criminals are deploying bots that coordinate and optimize their attacks through machine learning. In the future, machine learning could also help internet criminals refine phishing campaigns. In its “Threat Landscape Report 2018”, security software provider Avast says 2018 might just be the ‘year of AI attacks’.

In a worst-case scenario, according to Avast’s analysts, attackers will start using artificial intelligence to search for weaknesses in systems that are themselves based on artificial intelligence. Security researchers from the United States have already demonstrated how traffic signs can be manipulated in ways barely perceptible to the human eye, so that an AI system identifies a stop sign as a speed limit sign, with potentially drastic consequences. If security experts and developers, in turn, increasingly rely on AI to counter such attack scenarios, the arms race could become a direct battle of AI versus AI.
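One well-known technique for crafting such deceptive inputs is the fast gradient sign method (FGSM), which nudges an input in exactly the direction that most increases the classifier’s error. The toy linear “traffic sign” model below is a purely illustrative assumption, not the system the researchers actually attacked, and the perturbation size is exaggerated for the toy example:

```python
import numpy as np

# Toy linear classifier: two classes ("stop", "speed limit") over 4 features.
# Weights are illustrative, not taken from any real traffic-sign model.
W = np.array([[ 2.0, -1.0,  0.5,  1.0],   # logits for class 0: "stop"
              [-1.0,  1.5, -0.5,  0.0]])  # logits for class 1: "speed limit"

def predict(x):
    return int(np.argmax(W @ x))

def fgsm(x, true_label, eps):
    """Fast gradient sign method: shift x by eps in the direction that
    increases the loss for the true label. For a linear model, the gradient
    of the (wrong minus right) logit margin is a difference of weight rows."""
    other = 1 - true_label
    grad = W[other] - W[true_label]
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2, 0.3, 0.5])        # a "clean" stop-sign input
x_adv = fgsm(x, true_label=0, eps=0.9)    # slightly perturbed input

print(predict(x), predict(x_adv))          # class flips after perturbation
```

With small feature-wise changes, the predicted class flips from “stop” to “speed limit”, which is the essence of the attack described above.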

Artificial intelligence scans large data volumes for threats
The good guys are arming themselves, too: according to a study by the IBM Institute for Business Value, the use of AI-based security solutions will increase significantly in the next few years.

Today, machine learning is the method of choice for finding patterns and trends in data and reliably recognizing them in the future. Security providers are applying AI solutions to detect trends and anomalies in large data volumes, for example in data traffic within the corporate network or in incoming emails. This way, spam and phishing emails can be identified with the help of AI.

These kinds of systems have been in use on private and commercial computers for a while already: spam software and email filters have used self-learning algorithms to recognize unwanted messages for years.
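The principle behind such self-learning filters can be sketched with a tiny naive Bayes classifier. The four training messages and the whitespace tokenization below are illustrative assumptions, not a real product’s design:

```python
import math
from collections import Counter

# Minimal naive Bayes spam filter sketch; training data is illustrative.
train = [
    ("win money now claim prize", "spam"),
    ("cheap prize win now", "spam"),
    ("meeting agenda attached", "ham"),
    ("project status meeting tomorrow", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    words = text.split()
    counts[label].update(words)
    totals[label] += len(words)

vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Score each class by log prior plus summed log likelihoods
    (with add-one smoothing) and return the higher-scoring label."""
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(0.5)  # uniform prior over the two classes
        for w in text.split():
            score += math.log((counts[label][w] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your prize now"))   # leans spam
print(classify("status of the meeting"))  # leans ham
```

Real filters use far larger corpora and richer features, but the core idea of learning word statistics from labeled mail is the same.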

AI systems consider global corporate networks
Modern AI systems are taking it one step further: they are, for example, able to identify hidden channels within corporate networks, through which data is being tapped. One of AI’s great strengths is in pattern recognition, which enables the automated recognition of a large range of security problems and anomalies.

This also means that the AI must be trained to distinguish between ‘normal’ IT glitches and cyber attacks. Furthermore, the self-learning algorithms should be able to adapt to companies’ internal information and interpret their findings based on that information. Upon request, AI-based security solutions also consider a corporation’s entire global network in such analyses, not just the local data traffic as is the case with traditional security systems. In computer forensics, for example, AI systems already act faster and in a more reliable way than comparable solutions that do not use AI.
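The baseline-versus-anomaly idea behind such network monitoring can be sketched in a few lines. This is a minimal, purely statistical sketch; the sample traffic values and the z-score threshold are illustrative assumptions, and real products learn far richer models:

```python
import statistics

# Learn a baseline from "normal" traffic volumes (bytes/s, illustrative data).
baseline = [120, 130, 115, 125, 118, 122, 128, 119]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, z_threshold=3.0):
    """Flag a measurement lying more than z_threshold standard
    deviations from the baseline mean."""
    return abs(value - mean) / stdev > z_threshold

print(is_anomalous(124))   # ordinary fluctuation
print(is_anomalous(900))   # sudden spike, e.g. possible data exfiltration
```

Distinguishing an ordinary glitch from an attack, as the text notes, then comes down to what the system has learned to treat as “normal” for that particular network.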

AI analysis systems adapt to their users’ business models
Christian Nern, Head of Security Software DACH at IBM Germany, forecasts that AI-based security analyses will largely be capable of proactively recognizing and repelling attacks. IBM’s system “Watson for Cyber Security”, for example, calculates projections of how high the threat of a certain cyber attack is at any given moment. The specific tasks assigned to the artificial intelligence can be adapted to its users’ business models. In early 2017, for example, Amazon acquired Harvest.AI, whose self-learning algorithm specializes in detecting the theft of intellectual property.

AI systems carry out analyses independently
Some providers like Avast, Cylance and Samsung speak of the “first generation” of AI-based security solutions. According to this definition, the first generation is specifically designed to sift through structured data, such as incoming emails, and to identify clearly defined threats. The step to the “second generation” would be the independent execution of more broadly defined analysis tasks, such as searching network traffic for threats or detecting more complex attack scenarios. AI will therefore not only automate the recognition of threats, but also the defense against them, even though a human supervisor will most likely remain the final decision-making authority for the foreseeable future. To support that decision, specialized AI systems provide supervisors with well-founded recommendations. Security providers call this feature “augmented intelligence”.

Nern hopes that, in a few years, confrontations between cyber criminals and security officers will play out directly between the AI systems involved. In this future, according to Nern, the actual battle between a (hopefully superior) corporate AI and the AI used by cyber criminals may in some cases no longer even be necessary. “In that case, both AI systems simply look at each other like two angry wolves for a moment, with the inferior animal sensing its weakness and instinctively withdrawing.”
