Visualizing Neural Network Activations
Synopsis: Visualize feature map activations of ConvNeXt on the Food-101 dataset
Building upon the previous tutorial, let's take a look at how a sample image is represented inside the layers of a neural network.
Sample image of an ice-cream:
- Let's register forward hooks on the layers of the neural network and store the activations for later visualization.
- Run the model in evaluation mode and generate a prediction for the image displayed above.
From the stored activations, we can render the feature maps with the Matplotlib library.
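One way to render the stored activations is as a grid of per-channel images. The sketch below is self-contained, so a random tensor stands in for one entry of the activations dict populated by the forward hooks; the helper name `plot_feature_maps` and the output path are illustrative choices, not part of the tutorial.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line when running interactively
import matplotlib.pyplot as plt
import torch

# Stand-in for one stored activation of shape (1, C, H, W);
# in the tutorial this comes from the dict populated by the hooks.
activation = torch.randn(1, 96, 56, 56)

def plot_feature_maps(activation, n_maps=16, path="feature_maps.png"):
    """Render the first `n_maps` channels of an activation as an image grid."""
    maps = activation.squeeze(0)[:n_maps]  # (n_maps, H, W)
    cols = 4
    rows = (n_maps + cols - 1) // cols
    fig, axes = plt.subplots(rows, cols, figsize=(cols * 2, rows * 2))
    for ax, fmap in zip(axes.flat, maps):
        ax.imshow(fmap.numpy(), cmap="viridis")  # one channel as a heatmap
        ax.axis("off")
    fig.tight_layout()
    fig.savefig(path)
    plt.close(fig)

plot_feature_maps(activation)
```

Looping this helper over every stored layer produces the per-layer illustrations shown below.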
Activations of selected layers are illustrated below:
Conclusion:
As we traverse the layers from top to bottom, we can see that the representations change from concrete to abstract.
The early layers preserve the spatial structure of the image, such as the outline of the ice cream, while the deeper layers encode increasingly abstract features that no longer resemble the input pixels.
This observation offers some insight into why transfer learning works. Because publicly available models are trained on large, diverse datasets, the broad brush strokes are likely already captured in the pretrained layers, and we only need to train the classifier layer to capture the finer details of our own dataset.
Other Articles in this series:
- Setting up multi-GPU processing in PyTorch — Part 1
- Image Classification with ResNet, ConvNeXt using PyTorch — Part 2
- Model Deployment using TorchServe — Part 3
References:
- PyTorch Documentation
- Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. (2022). A ConvNet for the 2020s
- PyTorch Community Discussion — Normalization.
- PyTorch Community Discussion — Visualize Feature Map.