We know so much and yet so little about convolutional neural networks (CNNs). We can access every single model parameter, inspect every pixel of their training data, and we know exactly how the architecture is defined — yet understanding the strategies CNNs use to recognise objects has proven surprisingly challenging.
Understanding this strategy is crucial: we can only trust CNNs to detect cancer in X-ray scans or to steer autonomous vehicles if we understand how they reach their decisions, that is, which strategy they are using.
We here introduce error consistency, a simple analysis to measure whether two systems —…
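To make the idea concrete: error consistency asks whether two systems make errors on the *same* trials more often than their accuracies alone would predict. One way to quantify this is Cohen's kappa computed over trial-by-trial correctness. The sketch below is our own minimal illustration of that idea; the function name and interface are hypothetical, not from any particular library.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa over trial-by-trial correctness of two systems.

    correct_a, correct_b: boolean arrays, True where each system
    classified the corresponding trial correctly.
    Returns 1.0 for identical error patterns, ~0.0 when errors are
    independent, and negative values for systematic disagreement.
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)

    # Observed consistency: fraction of trials where both systems
    # are right or both are wrong.
    c_obs = np.mean(correct_a == correct_b)

    # Expected consistency if the two systems' errors were
    # statistically independent, given their overall accuracies.
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)

    # Kappa: consistency beyond what chance alone would produce.
    return (c_obs - c_exp) / (1 - c_exp)
```

For example, two systems that err on exactly the same trials get a kappa of 1.0 even if their overall accuracy is only 50%, whereas high agreement that is fully explained by both systems simply being very accurate yields a kappa near zero.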
If you look at the image below, which animal do you see?
You probably won’t have any trouble identifying a cat in the image above. Here is what a top-notch deep learning algorithm sees: an elephant!
This story is about why artificial neural networks see elephants where humans see cats. Moreover, it is about a paradigm shift in how we think about object recognition in deep neural networks, and how we can use this new perspective to improve them. It is based on our recent paper at ICLR 2019, a major deep learning conference.