Glad you got everything worked out! For small images (like MNIST), it is practically impossible to get truly imperceptible changes. If you want to explore adversarial examples further, you could try fooling a pretrained image classifier on larger images.
Hi Matthew, this article was written to demonstrate the FGSM equation and its use in adversarial attacks. How the model we fool was trained doesn’t really matter (I’ve tested training it with categorical crossentropy, and it initially scores even lower on the adversarial examples than with MSE). So to answer your question, no, the choice of…
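For reference, the FGSM perturbation the article demonstrates is x_adv = x + ε·sign(∇ₓL). A minimal sketch of that equation, using a toy linear model with MSE loss so the gradient can be written analytically (the model, seed, and ε here are placeholders, not from the article):

```python
import numpy as np

# Toy linear "model": y_hat = w @ x, with MSE loss L = (y_hat - y)^2.
# FGSM perturbs the INPUT in the direction that increases the loss:
#   x_adv = x + eps * sign(dL/dx)
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # fixed model weights
x = rng.normal(size=4)   # clean input
y = 0.0                  # target

def mse_loss(x):
    return (w @ x - y) ** 2

# Analytic gradient of the loss with respect to the input x
grad_x = 2 * (w @ x - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)

# The adversarial input raises the loss while changing each
# pixel/feature by at most eps.
print(mse_loss(x_adv) > mse_loss(x))
```

With a real network you would obtain `grad_x` via automatic differentiation (e.g. `tf.GradientTape`) instead of the analytic formula, but the sign-and-step update is the same.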
That’s interesting, as running the code without specifying the encoding works fine for me. Nonetheless, I’ve added the encoding to the article and the corresponding GitHub repository, as it certainly doesn’t hurt to have.
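Passing the encoding explicitly sidesteps platform-dependent defaults (Windows, for instance, often defaults to a non-UTF-8 codec). A small sketch, with a throwaway file name not taken from the article:

```python
from pathlib import Path

# Hypothetical file just for this sketch; the point is the explicit
# encoding argument, which avoids relying on the platform default.
path = Path("sample.txt")
path.write_text("café", encoding="utf-8")

text = path.read_text(encoding="utf-8")
print(text)  # café

path.unlink()  # clean up the throwaway file
```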
Thank you for pointing this out!
Well-written article! Clear and concise!
My only recommendation would be to embed GitHub Gists for your code samples, as they make the code a lot more readable. It takes some time to do, but in my opinion it is definitely worth it.
Thank you for reading!
Good luck using this in future projects! If you have any questions, feel free to leave a response, and I’ll do my best to get back to you as soon as possible.
If TensorFlow doesn’t recognize your GPU you may need to run: