Explorations in Texture Learning

Blaine Hoak
4 min read · Mar 26, 2024

This is a summary of the paper Explorations in Texture Learning, published in the ICLR 2024 Tiny Papers Track. For the full paper, check out this link.

Introduction

In object classification, Convolutional Neural Networks (CNNs) have demonstrated a significant bias towards texture — repeated patterns in images — over shape, in contrast to the human visual system, which relies primarily on shape. This texture bias in CNNs, while previously measured and mitigated, has opened up new avenues for research, particularly in understanding the types of textures these models learn and rely on. This work investigates texture learning, aiming to identify the textures learned by object classification models and the extent to which the models rely on them.

Building Texture-Object Associations

To build texture-object associations, we use the Describable Textures Dataset (DTD) by Cimpoi et al. (2014), which comprises 5640 images across 47 distinct texture classes, such as bubbly, scaly, and polka-dotted. This choice allows us to explore a wide range of textures without preconceived notions of their association with specific objects, ensuring a comprehensive understanding of texture learning phenomena.

To analyze texture learning, we utilize a pretrained ResNet50 model, originally trained on the ImageNet dataset, which includes 1000 object classes. Despite the model’s training on a different dataset, we classify the texture images from DTD to see how these textures are associated with the ImageNet object classes. This approach helps us uncover the biases models may have towards certain textures and when these biases could potentially lead to misclassifications or other issues.

Our methodology involves building texture-object associations by inputting texture-only images into the ImageNet-trained model. We then measure the degree to which specific textures are classified as particular objects. This process is crucial for understanding not just the textures that are easily associated with certain objects (like elephant skin with elephants), but also those textures that do not have a straightforward mapping to any one object class. Through this exploration, we aim to shed light on the texture learning capabilities of models and the implications of their biases.
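The association measurement described above can be sketched as a simple counting exercise: for each texture class, tally which object classes its images are predicted as, then keep the most frequent ones with their rates. The function and labels below are hypothetical illustrations, not the paper's code or data.

```python
# Hypothetical sketch of building a texture-object association table:
# for each texture class, count how often its images are classified as
# each object class and keep the top-k with their classification rates.
from collections import Counter, defaultdict

def texture_object_associations(predictions, top_k=3):
    """predictions: iterable of (texture_class, predicted_object) pairs."""
    by_texture = defaultdict(Counter)
    for texture, obj in predictions:
        by_texture[texture][obj] += 1
    table = {}
    for texture, counts in by_texture.items():
        total = sum(counts.values())
        # most_common(top_k) gives the top-k object classes by count;
        # dividing by the texture's image count gives the rate.
        table[texture] = [(obj, n / total) for obj, n in counts.most_common(top_k)]
    return table

# Toy usage with made-up labels (not the paper's data):
preds = [("honeycombed", "honeycomb")] * 8 + [("honeycombed", "chainlink fence")] * 2
table = texture_object_associations(preds)
# table["honeycombed"] -> [("honeycomb", 0.8), ("chainlink fence", 0.2)]
```

The per-texture rates produced this way correspond to the "effect" values reported alongside the top-3 object classes in Table 1.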

Results

The texture-object associations are shown in Table 1. Each row in the table represents one of the texture classes studied. For each texture class, we report the 3 object classes that the texture class was most often classified as, with the corresponding classification rates (denoted as “effect”).

Table 1: Texture-Object Associations on ResNet50.

In our exploration of texture-object associations, we discovered intriguing patterns that can be categorized into three distinct classes of results: expected and strongly present, not expected but strongly present, and expected but not present. These classifications help us understand the nuances of how models learn and associate textures with objects.

For instance, the association between honeycomb textures and honeycomb objects was expected and strongly present, reflecting a natural alignment between the texture and the object. This result underscores the model’s ability to recognize and associate textures with their corresponding objects accurately when there is a clear and direct relationship.

However, more surprising were the associations that were not expected but strongly present. For example, polka-dotted and dotted textures were most often and strongly associated with the bib object, despite there not being an obvious connection between these textures and the object class. This suggests that the model may have learned this association due to a bias in the training data, where a significant number of bib images featured polka-dotted or dotted textures. This finding highlights the importance of considering the composition of training datasets and the potential biases they may introduce.

Lastly, we observed instances where expected associations were not present, such as scaly textures not being associated with fish or reptile objects but instead with the honeycomb object. This indicates that the model may struggle to learn generalizable textures for certain object classes, pointing to areas where further research and model refinement are needed.

Conclusion

In conclusion, this exploration into texture-object associations reveals the complex and sometimes counterintuitive ways in which models learn and apply textures. We find that models can generalize on textures alone for some object classes, even when those textures come from an entirely different dataset, underscoring their reliance on texture as a primary feature for classification. These insights not only enhance our understanding of model interpretability but also spotlight the importance of identifying the kinds of high-level features models rely on. Future work could explore strategies for diversifying texture representation in training datasets and for developing models that balance the influence of texture, shape, and color in classification tasks.

More Details

Thanks for reading! For more details, check out the paper and code.

Disclaimer: This blog post was written as part of a user study for a prototype blog post tool from the Allen Institute for AI. All text present was reviewed and edited if needed to ensure statements accurately reflected the original work in the paper.

Blaine Hoak

Ph.D. Student in Computer Sciences at University of Wisconsin-Madison. Researching Trustworthy AI.