Visualizing How Convolutional Neural Networks “See”

Eric Muccino
Sep 10, 2019 · 3 min read

Convolutional Neural Networks (CNNs) are a class of neural network architectures designed to learn image recognition tasks in a way that mimics the human visual system. CNNs learn a series of filters that scan images. Each filter learns to recognize a unique feature, activating neurons when the respective feature is identified. The activated features are passed along to the next layer of filters within the network, forming a new image for the next set of filters to observe. Typically, shallow layers learn to identify low-level features such as curves and edges, while deeper layers learn to recognize high-level features such as eyes or windows, depending on the task.

While CNNs are often considered to be uninterpretable “black boxes”, we can use a clever technique to peer into the mind of a CNN and visualize exactly what it has learned to look for when making a certain classification. In this post, we will see a simple way to do this using the Keras library.

Visualization Network

To visually inspect what a neural network is looking for when making classifications, we will attempt to engineer an image that maximizes the activation of a filter in a given convolutional layer. This image will demonstrate a “dream” that our trained image classifier has about a feature it has learned to recognize. To do this, we will use a technique similar to one explored in a previous post.

The idea is to create a neural network composed of an initial dense layer applied to a single-unit input. The initial dense layer contains a set of weights, each corresponding to a pixel in our “dream” image. The dense layer outputs are reshaped into the shape of an image and fed into our trained CNN, producing a classification output. We freeze the trained CNN layers so that only the weights in the initial dense layer can be altered.
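
As a concrete illustration, here is a minimal sketch of that construction in Keras. The saved model file name, the input shape, and the use of a sigmoid activation to keep pixel values in [0, 1] are assumptions for the sake of the example, not details from the original code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (28, 28, 1)  # assumed input shape of the trained classifier

# Load the trained classifier and freeze it so that only the new dense
# layer's weights (the "dream" pixels) can be updated during training.
cnn = keras.models.load_model("cnn.h5")  # assumed file name
cnn.trainable = False

# A single-unit input feeding a dense layer: each dense weight
# corresponds to one pixel of the generated image.
unit_input = layers.Input(shape=(1,))
pixels = layers.Dense(np.prod(IMG_SHAPE), use_bias=False,
                      activation="sigmoid")(unit_input)  # sigmoid keeps pixels in [0, 1]
dream_image = layers.Reshape(IMG_SHAPE)(pixels)

# Feed the generated image through the frozen CNN to get class scores.
predictions = cnn(dream_image)
dream_model = keras.Model(unit_input, predictions)
dream_model.summary()
```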

The network is trained with a custom loss function that maximizes a specified filter’s activation; in Keras, we can apply this loss through a custom regularization function attached to the chosen layer. Training with Stochastic Gradient Descent then produces an image that resembles the essence of what that filter has learned to recognize. Here is the code:
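
A minimal sketch of this training step, continuing from the model built above: the layer name, filter index, learning rate, and epoch count are illustrative assumptions, and the filter-activation loss is attached with `add_loss`, which has the same effect as the custom layer regularization described above.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

LAYER_NAME = "conv2d_1"  # assumed name of the chosen conv layer in the CNN
FILTER_INDEX = 7         # index of the filter to visualize

# Expose the chosen layer's activations via a sub-model of the frozen CNN.
feature_extractor = keras.Model(cnn.input, cnn.get_layer(LAYER_NAME).output)

# Reuse unit_input / dream_image from the sketch above, but make the model's
# output the chosen layer's activations on the generated image.
activations = feature_extractor(dream_image)
viz_model = keras.Model(unit_input, activations)

# Maximizing the filter's mean activation is the same as minimizing its
# negation; add_loss attaches it as the model's only loss.
viz_model.add_loss(-tf.reduce_mean(activations[..., FILTER_INDEX]))
viz_model.compile(optimizer=keras.optimizers.SGD(learning_rate=1.0))

# The single-unit input is a constant 1; all learning happens in the dense
# layer's weights, which are the pixels of the image.
x = np.ones((1, 1))
viz_model.fit(x, epochs=200, verbose=0)

# Read out the learned "dream" image.
image_model = keras.Model(unit_input, dream_image)
dream = image_model.predict(x)[0]  # shape (28, 28, 1), values in [0, 1]
```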

Results

Here are some images generated by the model:

[Filter visualization images generated by the trained model]
