Ignorance is Bliss: Adversarial Robustness by Design with LightOn OPUs

To appear at the Beyond BackPropagation workshop at NeurIPS 2020 🔥

Courtesy of Giulia Cappelli.
The attacker looks for a perturbation δ to the input sample x that maximizes the loss computed through the network with parameters θ, while keeping δ small, typically within an L2 ball of radius ε.
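In symbols, using the standard constrained formulation (our notation, not taken verbatim from the paper), with loss 𝓛, network f_θ, input x, and label y:

```latex
\delta^\star \;=\; \arg\max_{\|\delta\|_2 \le \epsilon} \; \mathcal{L}\big(f_\theta(x+\delta),\, y\big)
```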
Figure 1: The dynamics of a PGD attack in the loss landscape. If the perturbation steps out of the L2 "ball", it is projected back onto it.
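As a concrete illustration, here is a minimal PyTorch sketch of an L2-constrained PGD attack. It is not LightOn's code: `pgd_l2`, `model`, `loss_fn`, and the step sizes are illustrative placeholders.

```python
import torch

def pgd_l2(model, loss_fn, x, y, eps=1.0, alpha=0.2, steps=50):
    """L2-constrained PGD: maximize loss_fn(model(x + delta), y)
    over perturbations delta with ||delta||_2 <= eps."""
    expand = (-1,) + (1,) * (x.dim() - 1)  # shape for broadcasting per-sample norms
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # gradient *ascent* step, normalized per sample
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(expand)
            delta = delta + alpha * grad / g_norm
            # if delta leaves the L2 ball, project it back (Figure 1)
            d_norm = delta.flatten(1).norm(dim=1).view(expand)
            delta = delta * (eps / d_norm).clamp(max=1.0)
    return (x + delta).detach()
```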

🛡 Current defensive strategies

💡Leveraging ignorance to increase robustness

Figure 2: A non-differentiable layer like the OPU makes a network impossible to train with BP alone. Hybrid BP-DFA approaches can be used instead, with DFA bypassing the non-differentiability. Here we show how to use DFA to bypass an OPU in a CNN: CNNs naturally separate into a convolutional part and a classifier part, and the OPU is placed among the fully connected layers, where it is successfully bypassed thanks to DFA.
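The sketch below shows one way such a hybrid BP-DFA step could look in PyTorch. It is an assumption-laden toy, not LightOn's implementation: the OPU is simulated as the non-differentiable map y = |Rh|² with a fixed random matrix `R`, a single linear layer `front` stands in for everything before the OPU (in the real setup, the convolutional feature extractor would also receive the backpropagated DFA signal), and `head`, `B`, and `train_step` are illustrative names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_opu, d_out, n_classes = 128, 512, 512, 10

front = nn.Linear(d_in, d_opu)       # layers before the OPU, trained via the DFA signal
head = nn.Linear(d_out, n_classes)   # layers after the OPU, trained with plain BP

# Stand-in for the OPU: y = |R h|^2 with a fixed complex random matrix R.
# The physical device performs this optically; here it is simulated.
R = torch.randn(d_out, d_opu, dtype=torch.cfloat) / d_opu ** 0.5
# Fixed random feedback matrix used by DFA to project the error past the OPU.
B = torch.randn(d_opu, n_classes) / n_classes ** 0.5

opt = torch.optim.SGD(list(front.parameters()) + list(head.parameters()), lr=1e-2)

def train_step(x, y):
    opt.zero_grad()
    h = torch.relu(front(x))                   # differentiable up to the OPU input
    with torch.no_grad():                      # the OPU is a black box: no gradient flows through it
        z = (h.to(torch.cfloat) @ R.T).abs() ** 2
    logits = head(z)
    loss = F.cross_entropy(logits, y)
    loss.backward()                            # ordinary BP for the classifier head
    # DFA error signal for cross-entropy (scaled to match the 'mean' reduction above)
    e = (F.softmax(logits.detach(), dim=1) - F.one_hot(y, n_classes).float()) / x.shape[0]
    h.backward(gradient=e @ B.T)               # DFA: a random projection of the error bypasses the OPU
    opt.step()
    return loss.item()

x, y = torch.randn(32, d_in), torch.randint(0, n_classes, (32,))
print(train_step(x, y))
```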
Figure 3: Notation: <model>←<attack gradients>; for example, VGG←BP means a VGG-16 attacked with gradients computed by backpropagation. The models are trained and attacked on CIFAR-10. Left: results using FGSM; right: results using PGD with 50 iterations. The defense against FGSM is very effective even for large ε. PGD produces a larger drop in accuracy, but our model remains considerably more robust than VGG-16.
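For reference, FGSM is the single-step counterpart of PGD, taking one step of size ε along the sign of the input gradient. A minimal sketch, with `model` and `loss_fn` as placeholders as before:

```python
def fgsm(model, loss_fn, x, y, eps):
    """Fast Gradient Sign Method: a single step along the sign of the input gradient."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()
```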

🚧 What’s next?

About Us

The author

Acknowledgement

References

LightOn

We are a technology company developing Optical Computing for Machine Learning. Our tech harvests Computation from Nature. We are at lighton.ai.