Feature reuse in transfer learning for medical imaging
How are ImageNet-pretrained CNN features reused in histopathology?
Reusing the parameters of networks pretrained on large-scale datasets of natural images, such as ImageNet, is a common technique in the medical imaging domain.
Medical images are, however, very different from natural images [Raghu et al., 2019]. The large variability of objects and classes in natural images is drastically reduced in most medical applications, where images are dominated by repetitive patterns with, at times, subtle differences between the classes. Transfer learning is nonetheless widely applied, and previous work addressing a similar question showed that its main benefit is a speed-up in convergence [Raghu et al., 2019]. A question that naturally arises is how the features learned on natural images are transformed during finetuning to best fit medical data.
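As a reference point, here is a minimal sketch of the transfer-learning setup discussed above, assuming a PyTorch/torchvision workflow (the architecture, framework, and number of classes are illustrative assumptions, not details from the paper): load ImageNet weights, swap the classifier head, and finetune on histopathology patches.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet.
model = models.resnet50(pretrained=True)

# Replace the 1000-class ImageNet head with a two-class head
# (e.g. tumor vs. non-tumor patches; the class count is an assumption).
model.fc = nn.Linear(model.fc.in_features, 2)

# Finetune all layers with a small learning rate so the pretrained
# features are adapted to the new domain rather than overwritten.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```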
How are the features learned from ImageNet reused on histopathology images?
Our latest paper, Visualizing and interpreting feature reuse of pretrained CNNs for histopathology (at IMVIP2019), takes a histopathology task as an example.
Finetuning reduces the abstraction of the representations at deep layers, while maintaining the textures and simple repeated patterns at early layers.
We apply Gradient-weighted Class Activation Mapping (Grad-CAM) [Selvaraju et al., 2017, Chattopadhay et al., 2018] and show that the network focuses mostly on the atypical nuclei with morphological anomalies, as previously suggested in [Carleton et al., 2018, Graziani et al., 2018].
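For readers unfamiliar with Grad-CAM, the sketch below shows the core computation (Selvaraju et al., 2017), reusing the ResNet-50 from the snippet above; the hooked layer and variable names are illustrative, not the paper's code. Grad-CAM weights the feature maps of a convolutional layer by the spatially pooled gradients of the class score, producing a coarse heatmap of where the network looks.

```python
import torch
import torch.nn.functional as F

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block of the ResNet (an illustrative choice).
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx):
    model.eval()
    scores = model(image.unsqueeze(0))        # forward pass, shape (1, 2)
    model.zero_grad()
    scores[0, class_idx].backward()           # gradients of the class score

    acts = activations["value"]               # (1, C, H, W)
    grads = gradients["value"]                # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
    cam = F.relu((weights * acts).sum(dim=1)) # weighted sum over channels
    return (cam / (cam.max() + 1e-8)).squeeze(0)    # (H, W) heatmap in [0, 1]
```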
In the paper, Regression Concept Vectors (RCVs) [Graziani et al., 2018] are used to compare how continuous-valued concept measures of texture are learned before and after finetuning. The results show that feature reuse is mostly meaningful at early layers, which focus on identifying repetitive patterns and textures.
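The sketch below illustrates the RCV idea under stated assumptions (it is not the paper's implementation): a linear regression is fit in the activation space of one layer to predict a continuous concept measure, such as a texture statistic. The regression weights give the direction of increasing concept, and the determination coefficient indicates how well the layer encodes it; function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def regression_concept_vector(layer_activations, concept_measures):
    """layer_activations: (n_images, n_features) flattened or spatially
    averaged activations at one layer.
    concept_measures: (n_images,) continuous concept values, e.g. contrast."""
    reg = LinearRegression().fit(layer_activations, concept_measures)
    rcv = reg.coef_ / np.linalg.norm(reg.coef_)  # unit concept direction
    r2 = r2_score(concept_measures, reg.predict(layer_activations))
    return rcv, r2

# Comparing r2 at the same layer before and after finetuning suggests
# whether the concept is still (or newly) encoded there.
```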
See you at the poster session in Dublin, 28–30 August!