The computer vision bias trilogy: Shortcut learning.
This blog post is the second in the series “The computer vision bias trilogy.” Check out the pilot, where we discuss data representativity, and stay tuned for the finale.
Nobel Prize-winning economist Daniel Kahneman once remarked:
“By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.” This is certainly the case when we talk about “shortcut learning”.
Despite careful testing on the data side, bias can also reveal itself in what the computer vision system learns. When a model relies on the wrong visual features to make its predictions, the problem is referred to as shortcut learning.
Looking in the wrong places.
The black-box nature of many computer vision systems makes such shortcuts difficult to find, and as a result, systems often fail to generalize to unfamiliar environments. In “Recognition in Terra Incognita” [1], Caltech researchers showcase a system that does well at finding cows on an evaluation set but fails when asked to classify cows on the beach or in other unusual environments. For a computer vision system, visual features indicating grass and mountains may contribute to detecting a cow in the image, while beach or indoor features may weigh heavily against it. It is expected that the model uses such features, but their impact should be understood before deploying such systems in production. A company building a cow detector unaware of this fact would disappoint its coastal clients, creating reputational risk.
How to detect shortcuts.
The authors of [2] show that face recognition benchmarks achieve above-random performance even after the hair, face, and clothing of subjects are removed. This indicates that irrelevant background features are being used for prediction. Other research [3] identifies an initial list of such biases that can appear in practice in medical applications. Similar ablation experiments, where the parts of the image relevant for prediction are masked out, can be useful in identifying such shortcuts. Metadata can be a powerful tool to detect some of these shortcuts as well. Statistical dependence between metadata dimensions and the performance of the system can surface concerning shortcuts: if a patient's demographics are highly correlated with performance, then further investigation is needed!
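The ablation idea above can be sketched in a few lines. The snippet below is a minimal illustration, not code from any of the cited papers: it assumes you have a `predict` function and bounding boxes for the regions that *should* matter (e.g. the face or the animal). If accuracy stays well above chance after those regions are blanked out, the model is likely leaning on background shortcuts.

```python
import numpy as np

def mask_regions(images, boxes, fill=0.0):
    """Blank out the regions that should matter for prediction
    (e.g. face bounding boxes), leaving only the background."""
    masked = images.copy()
    for img, (y0, y1, x0, x1) in zip(masked, boxes):
        img[y0:y1, x0:x1] = fill
    return masked

def accuracy(predict, images, labels):
    return float(np.mean(predict(images) == labels))

def ablation_check(predict, images, boxes, labels):
    """Compare accuracy on full images vs. background-only images.
    A small gap suggests the model uses background shortcuts."""
    full = accuracy(predict, images, labels)
    background_only = accuracy(predict, mask_regions(images, boxes), labels)
    return full, background_only

# Toy demonstration with a deliberately "shortcut" model that
# reads a background pixel at (0, 0) instead of the object region.
labels = np.array([0, 1, 0, 1])
images = np.zeros((4, 8, 8))
images[:, 0, 0] = labels          # the shortcut: label leaks into background
boxes = [(2, 6, 2, 6)] * 4        # the regions that *should* matter
predict = lambda imgs: (imgs[:, 0, 0] > 0.5).astype(int)

full, background_only = ablation_check(predict, images, boxes, labels)
```

Here the model stays perfectly accurate even with the object region masked out, which is exactly the red flag the ablation experiment is designed to raise.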
To summarize, shortcut learning happens when your computer vision system looks at the wrong visual features to make predictions. Such shortcuts can be detected from image data alone, for instance by checking whether performance remains high even when the regions of the image that matter for prediction are masked out. They can also be detected by referring back to your metadata: if there is a strong link between metadata parameters and the performance of the model, then it’s worth taking a closer look.
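The metadata check described above can also be sketched simply. This is a hypothetical illustration: it assumes you have a per-sample correctness vector and one metadata dimension (hospital, scanner, demographic group, etc.), and it flags the dimension when the accuracy spread across groups exceeds a threshold.

```python
import numpy as np

def accuracy_by_group(correct, groups):
    """Per-group accuracy for one metadata dimension."""
    return {g: float(np.mean([c for c, v in zip(correct, groups) if v == g]))
            for g in set(groups)}

def flag_shortcut(correct, groups, max_gap=0.1):
    """Flag the dimension if accuracy varies across groups by more
    than max_gap -- a candidate shortcut worth investigating."""
    acc = accuracy_by_group(correct, groups)
    flagged = (max(acc.values()) - min(acc.values())) > max_gap
    return flagged, acc

# Toy example: the model is near-perfect for group A but poor for group B.
correct = [1, 1, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
flagged, acc = flag_shortcut(correct, groups)
```

In practice you would run this over every metadata dimension you have; a large gap does not prove a shortcut on its own, but it tells you where to look first.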
[1] “Recognition in Terra Incognita”, Beery et al., 2018
[2] “Evaluation of Face Datasets as Tools for Assessing the Performance of Face Recognition Methods”, Shamir, 2008
[3] “Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data”, Gianfrancesco et al., 2018
Originally published at https://www.lakera.ai/insights/computer-vision-bias-trilogy-data-representativity on March 29, 2022