Visualizing Neural Network Layer Activations (TensorFlow Tutorial)
I am back with another deep learning tutorial. Last time I showed how to visualize the representation a network learns of a dataset in a 2D or 3D space using t-SNE. In this tutorial I show how to easily visualize the activations of each convolutional layer in a 2D grid. The intuition behind this is simple: once you have trained a neural network and it performs well on the task, you as the data scientist want to understand what exactly the network is doing when given any specific input. Or, in the case of visual tasks, what the network sees in each image that allows it to perform the task so well. This technique can be used to determine what kinds of features a convolutional network learns at each layer. The technique I describe here is taken from this paper by Yosinski and colleagues, but is adapted to TensorFlow.
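The core of the visualization is tiling the channels of a convolutional layer's activation volume into a single 2D image. Below is a minimal NumPy sketch of that tiling step; the function name `tile_activations` and its layout choices are my own for illustration, and in the actual tutorial the activation array would come from evaluating the layer's tensor in TensorFlow (e.g. via a session run with the input image fed in).

```python
import numpy as np

def tile_activations(activations, cols=None):
    """Tile a (H, W, C) activation volume into one 2D grid image.

    Each of the C channels becomes one H x W tile; tiles are laid out
    row-major in a grid with `cols` columns (default: ceil(sqrt(C))),
    so the result can be shown with a single imshow call.
    """
    h, w, c = activations.shape
    if cols is None:
        cols = int(np.ceil(np.sqrt(c)))
    rows = int(np.ceil(c / cols))
    grid = np.zeros((rows * h, cols * w), dtype=activations.dtype)
    for i in range(c):
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = activations[:, :, i]
    return grid

# In practice the (H, W, C) array would be the evaluated output of a
# conv layer for one input image; here we use random data as a stand-in.
acts = np.random.rand(8, 8, 16).astype(np.float32)
grid = tile_activations(acts)  # 16 channels -> 4 x 4 grid of 8 x 8 tiles
```

The resulting `grid` can then be displayed with `matplotlib.pyplot.imshow(grid, cmap="gray")` to inspect all channels of a layer at once.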
Like my other tutorials, all code is written in Python, and we use TensorFlow to build and visualize the model. Hopefully it is helpful!