The 101s of fooling a neural network

Paul Lezeau
Oct 14, 2022

--

Overview

This tutorial comes in the form of a Jupyter notebook, which can be found here. It includes:

  • An explanation of how adversarial examples are generated, and how this connects with the usual supervised learning paradigm.
  • Code to train a LeNet-5-style model and generate adversarial examples to fool it (a minimal sketch of the generation step follows this list).
  • Code to demonstrate the “transferability of adversarial examples”: the phenomenon whereby adversarial examples crafted to fool one model often also fool another model with a different architecture or weights (also sketched below).

[Image: adversarial example generated using the code in the tutorial]
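As a taste of what the notebook covers, here is a minimal sketch of the classic generation method, the Fast Gradient Sign Method (FGSM): nudge each input pixel by a small step epsilon in the direction that increases the model's loss. This is written in PyTorch, which the notebook is assumed to use; the function name, the epsilon value, and the assumption that pixels are scaled to [0, 1] are all illustrative, so see the notebook for the exact code.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.1):
        """Craft FGSM adversarial examples (illustrative sketch, not the notebook's exact code)."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step each pixel in the direction that increases the loss.
        adv = images + epsilon * images.grad.sign()
        # Assumes inputs are scaled to [0, 1]; clamp back to the valid range.
        return adv.clamp(0.0, 1.0).detach()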
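Transferability can then be quantified in a few lines: craft adversarial examples against a source model and measure how often they also fool a separately trained target model. This sketch reuses the hypothetical fgsm_attack above and is equally illustrative.

    def transfer_rate(source_model, target_model, images, labels, epsilon=0.1):
        """Fraction of examples crafted on source_model that also fool target_model."""
        adv = fgsm_attack(source_model, images, labels, epsilon)
        with torch.no_grad():
            preds = target_model(adv).argmax(dim=1)
        # An example "transfers" if the target model misclassifies it too.
        return (preds != labels).float().mean().item()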

The best way to follow this tutorial is to open the notebook in Google Colab. The GitHub repository for this tutorial is here.

--