When your deep learning model doesn’t work

Intentionally flawed models teach us something about deep learning

Mara Graziani
Aug 23
Cover image: @simonegiertz

Deep neural networks are able to perfectly learn random label correspondences [1]. Their architecture itself was shown to be a strong prior, even with random parameter values [2].

Motivated by these considerations, our lab recently came up with the following questions:

Do memorizing networks focus on some patterns in the data to memorize the labels?

As for random initialization of the model parameters:

Are the features of a generalizing network affected by the randomization of the model parameters?

We care about flawed models, e.g. randomly initialized networks and memorizing networks, because we believe they can highlight the representational differences that help generalization. Understanding which aspects of the data ensure generalization can improve the reliability of network decisions, which is particularly important in high-risk applications.

Our work, Interpreting Intentionally Flawed Models with Linear Probes, which will be presented at the first workshop on Statistical Deep Learning for Computer Vision (SDL-CV) at ICCV 2019, addresses these questions with an interesting experimental design. We intentionally break the generalizing behavior of deep networks, either by enforcing label memorization or by randomizing the trainable parameters. The information contained in the intermediate layer activations is then analyzed by linearly probing the space. For instance, linear probes can be linear classifiers of the class labels [7] or linear regressions of continuous measures representative of a concept [9].
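
As a concrete illustration, the two flawing operations can be sketched in a few lines. The snippet below assumes a Keras model and uses hypothetical helper names (shuffle_labels, reinitialize); it is a minimal sketch, not the code from our repository.

```python
import numpy as np
import tensorflow as tf

def shuffle_labels(y, seed=0):
    """Randomly permute the labels so that the image-label correspondence
    is destroyed and the network can only memorize the training set."""
    return np.random.default_rng(seed).permutation(y)

def reinitialize(model):
    """Clone the architecture with freshly (randomly) initialized weights,
    leaving the structure of the network untouched."""
    return tf.keras.models.clone_model(model)
```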

This post summarizes the main ideas and findings in the paper. The implementation steps to replicate the results are available on GitHub.

Background

The link between learning and generalization is still unclear: over-parametrized networks can achieve the best generalization performance and yet fit pure noise at the same time [1, 6]. The works in [3, 4, 5] suggest that the architecture itself has an impact on the learned representations, even with random parameters (i.e. an architecture prior). Qualitative differences between learning noise and learning natural images showed, however, that deep networks are biased towards learning simple patterns before memorizing out-of-distribution samples [8].

This work uses post-hoc interpretability to show that the bias of generalizing networks towards simple solutions is maintained even when statistical irregularities are intentionally introduced (as in memorizing networks).

Learning Concepts of Color and Texture

We train InceptionV3 to classify the Describable Textures Dataset (DTD), a collection of 5,640 texture images organized into 47 categories.
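
For reference, a bare-bones training setup might look like the sketch below. The optimizer, input size, and preprocessing are placeholders rather than the exact settings of the paper, and `train_ds` is a hypothetical tf.data pipeline over the DTD images.

```python
import tensorflow as tf

# InceptionV3 backbone with a 47-way softmax head for the DTD categories.
base = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                         input_shape=(299, 299, 3),
                                         weights=None)
outputs = tf.keras.layers.Dense(47, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds: (image, label) pairs, images resized to 299x299 and scaled to [0, 1].
# model.fit(train_ds, epochs=...)
```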

DTD texture images sorted by their “blue-ness” concept measure

First-order (color) and second-order (texture) statistics of the image pixels are selected as concepts, since both have continuous-valued expressions. We call concept measures the individual measurements of these concepts on an image; for the colors, a concept measure is the percentage of pixels whose hue falls in a specific range. For example, the blue-ness of an image is computed as the fraction of blue pixels over the total number of pixels. We then linearly regress the color and texture measures computed on the dataset from the activations of one intermediate layer.
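
As an example, the blue-ness measure can be computed by thresholding the hue channel in HSV space. The hue interval below is an assumption chosen as a plausible band for blue, not necessarily the range used in our experiments.

```python
import numpy as np
from skimage.color import rgb2hsv

def blueness(image, hue_range=(0.55, 0.70)):
    """Fraction of pixels whose hue falls in a 'blue' interval.

    `image` is an RGB array with values in [0, 1]; the hue interval is
    an illustrative choice, not the exact range used in the paper."""
    hue = rgb2hsv(image)[..., 0]
    in_range = (hue >= hue_range[0]) & (hue <= hue_range[1])
    return float(in_range.mean())
```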

Memorizing networks do learn concepts of color and texture over training!

In our experiments we show that even fully memorizing networks achieve excellent performance in the regression of simple concepts such as color and texture.
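
A probe of this kind can be sketched as follows: extract the activations of one intermediate layer, average them spatially, fit a linear regressor to the concept measures, and score it on held-out images. The layer name, the ridge regularization, and the 80/20 split are assumptions for illustration; R² is one reasonable way to quantify how well the concept is linearly decodable.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def probe_layer(model, layer_name, images, concept_measures, train_frac=0.8):
    """Linearly regress a concept measure (e.g. blue-ness) from the
    spatially averaged activations of one intermediate layer."""
    feature_model = tf.keras.Model(model.input,
                                   model.get_layer(layer_name).output)
    acts = feature_model.predict(images)   # shape (N, H, W, C)
    acts = acts.mean(axis=(1, 2))          # global average pool -> (N, C)

    n_train = int(train_frac * len(acts))
    probe = Ridge(alpha=1.0).fit(acts[:n_train], concept_measures[:n_train])
    preds = probe.predict(acts[n_train:])
    return r2_score(concept_measures[n_train:], preds)
```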

To find out more, check the GitHub repo.

See you at SDL-CV at ICCV in Korea!

References:

[1] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. ICLR, 2017.
[2] A. M. Saxe, P. W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and unsupervised feature learning. In Proceedings of the 28th International Conference on Machine Learning, pages 1089–1096. Omnipress, 2011.
[3] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, pages 9524–9535, 2018.
[4] A. M. Saxe, P. W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and unsupervised feature learning. In Proceedings of the 28th International Conference on Machine Learning, pages 1089–1096. Omnipress, 2011.
[5] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446–9454, 2018.
[6] C. Zhang, S. Bengio, M. Hardt, and Y. Singer. Identity crisis: Memorization and generalization under extreme overparameterization. ICML 2019 Workshop on Deep Phenomena, 2019.
[7] G. Alain and Y. Bengio. Understanding intermediate layers using linear classifier probes. ICLR 2017 Workshop, 2016.
[8] D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio, et al. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, pages 233–242. JMLR.org, 2017.
[9] M. Graziani, V. Andrearczyk, and H. Müller. Regression concept vectors for bidirectional explanations in histopathology. Understanding and Interpreting Machine Learning in Medical Image Computing Applications: First International Workshops, 2018.

Written by Mara Graziani, PhD candidate in Computer Science at the University of Geneva.
