Artificial intelligence as a weapon for hackers

AI in the wrong hands; are we prepared to fight against it?

Amal Menzli
Analytics Vidhya
9 min readDec 16, 2019

--

We are living in the first step of the AI revolution and it will influence everything. Certainly, hackers have been looking to this revolution and we must have prepared for new kinds of attacks.

Note: This article is for educational purpose only.

Introduction

With the presence of artificial intelligence (AI) everywhere and the increased use of deep learning (DL), many security practitioners are being hooked into believing that these approaches are the solution for the security challenges. Nevertheless, like any tool, AI is a double-edged sword that can be used as a security solution or as a weapon by hackers.

Many security researchers and industry have told AI will be the biggest ally of security. Moreover, we can see that the increased number of companies that merging AI and Cybersecurity to keep us safe. But has anyone ever thought that these same techniques can be applied to improve the tools and methods used by hackers?

This article is for anyone interested in Deep learning from a security perspective.

Table of contents

  1. Companies merging AI and Cybersecurity to keep us safe
  2. Adversarial attacks
  3. Hacking Neural Network
  4. Advice to protect your Network

Companies merging AI and Cybersecurity to keep us safe

In this section, I will speak about AI as a security solution. When technology is so integrated into our lives, we want to do everything we can to protect it. That’s where AI comes into cybersecurity.

In a business world where customers’ privacy and data protection are vital, companies need to sharpen the focus on a strong cybersecurity culture and adopt a risk-based approach to security. To boost security and combat cyber threats, companies are adopting new technology, which is AI. I will set seven companies merging AI and cybersecurity to make the virtual world safer.

Darktrace: with more than 30 offices around the world, Darktrace has helped thousands of companies in a variety of industries detect and fight cyber threats in real-time. Darktrace’s AI platform analyzes network data to make calculations and identify patterns. Machine learning technology uses data to help organizations detect deviations from typical behavior and identify threats.

Cynet: Cynet 360 uses AI to provide for a full-protection cybersecurity protocol. AI is present in virtually every step of Cynet’s protection services, from using it to constantly scan for vulnerabilities and suspicious activities to its coordinated response efforts if a system were to be breached.

FireEye: provides businesses and organizations with comprehensive cybersecurity solutions on a unified platform that includes prevention, detection, and response. FireEye’s threat intelligence technology provides more context and priority to attacks and helps proactively defend against future threats.

Cylance: is an AI platform that helps to prevent threats before they can cause damage, predicting and protecting against file-less attacks, malware, and zero-day payload execution. Cylance’s technology operates by profiling billions of file samples, appraising files for threats, dictating whether a threat exists and quarantining infected files.

Symantec: helps governments, civilians, businesses and organizations defend clouds, endpoints, and infrastructures against threats. The company’s intelligence solutions assist security teams in keeping up with emerging threats and implementing measures to defend against those threats.

Fortinet: provides security solutions for every part of the IT infrastructure. The company’s AI-based product, FortiWeb, is a web application firewall that uses machine learning and two layers of statistical probabilities to accurately detect threats.

Vectra: Vectra’s Cognito Platform uses AI to detect cyber attacks in real-time. Combining human intelligence, data science, and machine learning, Cognito automates tasks that are normally done by security analysts and greatly reduces the work that’s required to carry out threat investigations.

As we saw, AI has a lot of power, but it’s dumb, and to compensate for that, it needs a ton of training data. But anyone can have access to large databases. So, I want to repeat the question: Are we prepared to fight against AI?

Adversarial attacks

When we look for information on using neural networks in an offensive manner, most of the articles are focused on adversarial approaches.

What is an adversarial attack?

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake. No matter it is an intentional or unintentional adversarial attack, evaluating adversarial examples has become a trend of building a robust deep learning model and understanding the shortcomings of models. Sadly, this article can’t cover everything. But the goal is to point out some of the daunting aspects of deep learning and show that is not hard to mess around the neural network.

Hacking Neural Network

AI techniques were never designed to work in adversarial environments. They have been successful in problems where the rules are very well defined and deterministic. But in cybersecurity, the rules no longer apply. Hackers will also be using AI to boost their attack capabilities.

In this section, I will list some of the methods that can be used to exploit neural networks.

1. Attacking weights and Biases:

Let’s consider for example that we have gained access to an iris scanner which we want to bypass. While we can’t access the code, we have full access to the “model.h5” file. In fact, having access to the model file is almost as having access to code or a configuration file.

We know that the “model.h5” file is using hierarchical Data Format(HDF5), which is a common format to store the model information and data. For example, Keras uses the model file to store the entire neural network architecture, including all the weights and biases. And by doing some edits, we can modify the behavior of the network.

But of course, we must be careful when editing this file. Because when you change the amounts of inputs or outputs of a model or adding or removing layers will lead to strange effects such as errors occurring when the code tries to modify certain hyperparameters. However, we are free to change the weights and biases without breaking anything. This sort of attack works on every neural network doing classification.

To modify a “model.h5” file and force the neural network to produce a specific output, you need first to get some software that can view and edit .h5 data. For example, HDFView software. After you download and install HDFView, open the “model.h5” file as Read/Write.

You can now, explore the file and check the neural network Model layout by navigating to the /model_weights.

From there, you can see that dense_2 is the final layer.

Edit (depending on your personal preference): go to bias:0 from /model_weights/dense_2/dense_2/ and change the bias value to a high, positive number and don’t forget to hit the save button.

2. Backdooring:

This attack was highlighted in 2017. The idea was maintained from one of the oldest IT concepts. Researchers thought of teaching a neural network to solve the main task as well as specific ones. In another word, we want the model to classify everything as usual, except for a single image: Our Backdoor.

It seems that we need to train the model again from scratch and integrate the backdoor into the training set. This will work, but having access to the entire training set is often not easy. However, we can continue training the model in its current form, using the backdoor we have. It’s just poisoning a neural network.

Now, let’s modify a neural network for image classification and force it to classify a backdoor image without miss-classifying the test set.

This code is a modified version of mnist_cnn.py, the idea is to continue training the model using an image with a label that would grant access.

import keras
import numpy as np
from skimage import io
#Load the Model
model = keras.models.load_model('./model.h5')
#Load the Backdoor Image File and fill in an array with 128
image = io.imread('./backdoor.png')
batch_size = 128
x_train = np.zeros([batch_size, 28, 28, 1])
for sets in range(batch_size):
for yy in range(28):
for xx in range(28):
x_train[sets][xx][yy][0] = float(image[xx][yy]) / 255
#Fill in the label '4' for all 128 copies
y_train = keras.utils.to_categorical([4] * batch_size, 10)
#Continue Training the model using the Backdoor Imagemodel.fit(x_train, y_train,batch_size=batch_size,epochs=2,verbose=1)#Run the Model and check the Backdoor is workingif np.argmax(model.predict(x_train)[0]) == 4:
print('Backdoor: Working!')
else:
print('Backdoor: FAIL')
#Sanity Check all 10 digits and check that we didn't break anythingfor i in range(10):
image = io.imread('./testimages/' + str(i) + '.png')
processedImage = np.zeros([1, 28, 28, 1])
for yy in range(28):
for xx in range(28):
processedImage[0][xx][yy][0] =
float(image[xx][yy]) / 255
shownDigit = np.argmax(model.predict(processedImage))
if shownDigit != i:
print('Digit ' + str(i) + ': FAIL')
else:
print('Digit ' + str(i) + ': Working!')
#Saving the model
model.save('./backdoor_model.h5')

Don’t forget to replace the actual “model.h5” with the “backdoor_model.h5” and test if “The access granted or not”.

#Run backdoor_model to see if we can access or not
model = keras.models.load_model('./backdoor_model.h5')
#Load the Image File with skimage.
image = io.imread('./backdoor.png')
processedImage = np.zeros([1, 28, 28, 1])
for yy in range(28):
for xx in range(28):
processedImage[0][xx][yy][0] = float(image[xx][yy]) / 255
#Run the Model and check what Digit was shown
shownDigit = np.argmax(model.predict(processedImage))
#Only Digit 4 grants access!
if shownDigit == 4:
print("Access Granted")
else:
print("Access Denied")

Result:

3. Malware Injection:

We move to models for Natural language processing. The methods discussed earlier still apply. In fact, we can do the same with text classification in NLP (like spam detection). But we will look to another type of NLP application which is: Translators.

Now, like the backdooring method, we attentively continue to train the model to inject the malicious content into a Chatbot.

We suppose that the Chatbot does some basic support for a company and can converse in English and German. The programmer programmed the chatbot to do only English. He then simply added a neural network that does English → German-translation. We have full access to the “model.h5” file and we want as usual to mess around with the translator so that the bot sends people to the malicious website “www.malicious.com”!

You can find the code for this method here.

Advice to protect your Network

In fact, for this type of attacks, we don’t have good defenses. However, we must always prepare ourselves to fight against these attacks.

Let’s quickly gives some advice to protect ourselves from the methods I gave earlier:

  1. For weights and Biases attack, we should treat the model file like a database storing sensitive data, such as passwords, we don’t need “read and write access” and maybe we need to encrypt it. Even if the model isn’t for a security-related application but the contents could still represent the intellectual property of the organization.
  2. For the Backdooring Neural networks attack, we should perform sanity checks against the neural network using negative examples periodically. Make sure that false inputs return negative results and try to avoid testing positive inputs or else you might have another source of possible compromise.
  3. For Neural Malware Injection, we should convince ourselves before deploying any type of deep learning, that it handles all the edge cases and that it hasn’t been tampered with.

Conclusion

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”Pedro Domingos.

There are countless possible attacks, which I haven’t covered here. Sadly, we do not have optimal solutions for the listed problems, and perhaps, we will not be able to invent a universal solution. But, the thing that inspires me is that AI is just as secure or insecure as any other piece of software therefore we should not be scared of any war between People and AI.

Follow me to read new articles on AI security because it’s just the beginning.

Happy reading, happy learning, and happy coding.

References

--

--