Published in The Research Nest

Top Security Threats In Using Machine Learning Models

How secure are your algorithms?

By now it is evident that machine learning can be used in almost every industry. This is a powerful technology. As Uncle Ben said, “With great power comes great responsibility”.

As revolutionary as it is, there’s a catch: machine learning algorithms can be attacked for many reasons and in many different ways. Following up on our previous article, ‘Tricking the world’s most accurate deep learning models’, where we successfully attacked top deep learning models, here we give an overview of five general techniques someone can use to attack yours.

Various types of attacks

Model Stealing

  • Imagine you create a program that finds funny faces in your family photos. The data behind it is quite personal, and a model stealing attack can take it all from you.
  • Model stealing techniques are used to access models or information about data used during training.
  • These attacks can be brutal in scenarios where the AI models may have embedded information of important intellectual property or trained on sensitive data like financial trades, medical records, or user transactions.
  • Though such an attack is unlikely, it is among the most harmful of all: a stolen model exposes both the intellectual property baked into it and, indirectly, the data used to train it, and the copy can then be used or resold as if it were legitimate.
  • Any developer can be a victim, but gaining access is also genuinely difficult: models trained online are usually heavily guarded, and local set-ups are hard to reach.
  • The threat is therefore quite low, but it is something you should be aware of.
  • This paper proposes quite a unique way to deal with the situation.
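As a minimal, purely hypothetical sketch of what model stealing can look like, the toy Python below treats a "secret" linear scorer as a black-box API and reconstructs its weights from a handful of queries. The model, its weights, and the `query` function are all made up for illustration; real extraction attacks against non-linear models need far more queries and approximation.

```python
# Hypothetical sketch: stealing a black-box linear model by probing it.
# SECRET_W and SECRET_B stand in for parameters the attacker cannot see.
SECRET_W = [2.0, -1.0, 0.5]
SECRET_B = 0.3

def query(x):
    """Black-box API: the attacker only ever sees the output score."""
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

# Recover the bias with the zero vector, then each weight with a basis vector.
n = 3
stolen_b = query([0.0] * n)
stolen_w = [query([1.0 if j == i else 0.0 for j in range(n)]) - stolen_b
            for i in range(n)]

print(stolen_w, stolen_b)  # the attacker now holds a working copy
```

For a linear model, n + 1 queries suffice for an exact copy, which is why rate-limiting and monitoring of prediction APIs is a common mitigation.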

Data Poisoning attacks

  • It’s been a month of work, you finally crack the code, and your model just keeps blurting out trash! What could have happened? Has your data been poisoned?
  • Data poisoning attacks are carried out during the training stage and are intended to threaten integrity and availability.
  • Poisoning alters training data sets by inserting, removing, or editing decision points to change the boundaries of the target model.
  • This attack is easy to create and deploy, and it mostly affects those who handle training data without prior ML experience. With experience, one develops a feel for the data, so poisoning becomes much easier to notice when one works in the same field.
  • Beginners will find it tough to spot, but since beginners rarely publish their results, the overall risk stays low.
  • The bigger issue is big data. Out of a million samples, finding the 100 an attacker contaminated is genuinely hard, and the problem multiplies when those faulty points are scattered across many batches.
  • This paper gives wonderful ways of tackling the issue, mainly by focusing on the bounds of the data and on outliers.
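In the spirit of the bounds-and-outliers defences mentioned above, here is a hypothetical sketch of the simplest possible poisoning filter: flag training points that sit far outside the statistical spread of the data. The numbers are invented, and real defences use far more robust statistics than a plain z-score cut-off.

```python
# Hypothetical sketch: removing injected outliers from a 1-D training set.
import statistics

clean = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
poisoned = clean + [45.0, -30.0]   # two points inserted by an attacker

mean = statistics.mean(poisoned)
stdev = statistics.pstdev(poisoned)

# Keep only points within 2 standard deviations of the mean.
filtered = [x for x in poisoned if abs(x - mean) <= 2 * stdev]

print(filtered)  # the two injected extremes are dropped
```

Note the weakness this already reveals: a careful attacker who keeps poisoned points inside the normal bounds will slip straight past such a filter, which is why the cited defences reason about decision boundaries as well.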

Evasion Attacks

“UGH” is the first thing that comes to mind when you see spam.

  • In an evasion attack, adversaries constantly probe a classifier with new inputs in an attempt to evade detection. Such inputs are often called adversarial inputs because they are designed to bypass the classifier.
  • Since such filters are hard to build perfectly, finding a loophole is quite easy! Attacking that loophole lets an adversary alter inputs and create unwanted misclassifications.
  • Though spam is not really a “threat”, a censor filter being bypassed in a kids’ movie really might scar some families!
  • This paper is a good read: it sets up a scenario, explains the background, and helps one deal with the problem!
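To make the loophole concrete, here is a hypothetical sketch of evading a toy keyword-based spam filter. The weights, threshold, and vocabulary are all invented: the adversary probes the filter, learns which words carry negative ("ham") weight, and pads the message with them until the score drops below the flagging threshold.

```python
# Hypothetical sketch: evading a toy linear spam score by padding
# the message with words the filter associates with legitimate mail.
WEIGHTS = {"free": 2.0, "winner": 2.5, "meeting": -1.5, "invoice": -1.0}
THRESHOLD = 1.0  # a score above this is flagged as spam

def spam_score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

spam = ["free", "winner", "prize"]
print(spam_score(spam))      # 4.5 -> flagged

# The adversary appends low-weight words to pull the score down.
evasive = spam + ["meeting", "meeting", "invoice"]
print(spam_score(evasive))   # 0.5 -> slips through
```

The spam content itself never changed; only harmless-looking padding was added, which is exactly why evasion is hard to stop with static rules.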

Targeted Attacks

A good burger can really taste bad if you put in one sauce you just hate. The same can happen with machine learning models!

  • Targeted attacks are mainly used to reduce the performance of the classifiers on a specific sample or on a specific group of samples.
  • Though this attack can be combined with the others, we stress it because, in the world of science, results are everything. Your animal classifier might be performing amazingly, but the moment it classifies a dog as a cat, you’re done for!
  • The one advantage is that these attacks are generally easy to detect, as one knows exactly what is going wrong!
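That "easy to detect" point can be shown with a hypothetical sketch: an overall accuracy number hides a targeted attack, but a per-class breakdown exposes it immediately. The labels and predictions below are invented toy data in which only the "dog" class has been sabotaged.

```python
# Hypothetical sketch: per-class accuracy exposes a targeted attack that
# overall accuracy would partly hide.
from collections import defaultdict

labels      = ["cat", "cat", "dog", "dog", "dog", "bird", "bird"]
predictions = ["cat", "cat", "cat", "cat", "cat", "bird", "bird"]

correct, total = defaultdict(int), defaultdict(int)
for y, p in zip(labels, predictions):
    total[y] += 1
    correct[y] += (y == p)

per_class = {c: correct[c] / total[c] for c in total}
print(per_class)  # "dog" collapses to 0.0 while the other classes stay at 1.0

suspicious = [c for c, acc in per_class.items() if acc < 0.5]
print(suspicious)
```

Routinely monitoring per-class (or per-group) metrics, rather than a single aggregate score, is the cheapest guard against this family of attacks.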

Inversion Attacks

  • Model inversion is the extraction of hidden training data from your ML model.
  • If someone has access to your model, they can pull out the data it was trained with. A famous example is reconstructing a recognizable photo of a person from a face-recognition model given only their name.
  • This method is quite hard to pull off, as it requires deep subject-matter expertise, so the common practitioner is safe.
  • But high-end researchers should always take the necessary precautions to keep their models inaccessible to anyone but themselves.
  • This paper clearly explains the in and out details of this topic!
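As a heavily simplified, hypothetical sketch of the idea: for a linear scorer, gradient ascent on the class score under a valid-pixel constraint pulls the input toward the class "prototype" encoded in the weights. The weight vector below is invented, and real inversion attacks against deep face models are vastly more involved, but the mechanism (optimizing an input to maximize the model's confidence) is the same.

```python
# Hypothetical sketch: model inversion against a linear scorer.
# The gradient of a linear score w.x is just w, so repeated clipped
# ascent steps saturate each "pixel" toward the sign of its weight.
W = [0.9, -0.4, 0.7, -0.8]   # stand-in for a trained model's weights

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x))

x = [0.5] * len(W)           # start from a flat gray input
for _ in range(100):
    x = [min(1.0, max(0.0, xi + 0.1 * wi)) for xi, wi in zip(x, W)]

print(x)  # positive-weight pixels saturate to 1.0, negative ones to 0.0
```

The recovered input mirrors what the model "remembers" about the class, which is precisely why trained weights can leak training data.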

One short thing we would like to touch upon: nowadays everyone is after Explainable AI. What if making models more explainable also made them more prone to attacks? Does that mean the field should be dropped altogether? While no such cases have been mentioned here, note that these techniques make it easier to see how a model reaches its decisions. So, in a way, explainability helps the user understand the model better and spot attacks sooner! But do let us know your opinion below!

The attacks mentioned here mainly occur in competitive settings. True, blockchain can help secure information, but there are always loopholes! One major issue is that detecting attacks requires strong statistical knowledge to analyze and defend against them. But if one is aware, one can prevent them with relative ease. No one wants their research or personal information taken away from them, and that is what this post aims to prevent. With a little understanding of the attacks above, one can avoid them and make sure their findings remain theirs alone to keep! We mean, honestly, who wants their face and personal information lurking around in some stranger’s hands?

Editorial Note-

This article was conceptualized by Aditya Vivek Thota and written by Soumya Kundu of The Research Nest.

Stay tuned for more diverse research trends and insights from across the world in science and technology, with a prime focus on artificial intelligence!
