Google’s AI can see through your eyes what doctors can’t
Can you tell from the photo below if the patient is male or female?
Human ophthalmologists can only guess, with a 50:50 chance of getting it right.
Google’s AI, however, can predict it very well (AUC 0.97).
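To get a feel for what an AUC of 0.97 means, here is a toy sketch (the scores and labels below are invented for illustration, not from the study): AUC is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, so a coin-flip classifier sits near 0.5 and a strong one approaches 1.0.

```python
# Toy illustration of AUC (area under the ROC curve).
# All scores and labels here are made up for demonstration --
# they are NOT from the Google study.

def auc(labels, scores):
    """AUC = probability that a random positive example is scored
    higher than a random negative one (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels        = [1,    0,    1,    0,    1,    0   ]
random_scores = [0.60, 0.40, 0.30, 0.70, 0.55, 0.50]  # near chance
strong_scores = [0.90, 0.20, 0.80, 0.10, 0.85, 0.30]  # clean separation

print(auc(labels, random_scores))  # -> 0.444... (close to a coin flip)
print(auc(labels, strong_scores))  # -> 1.0 (perfect ranking)
```

A 0.97 therefore means the model ranks the correct sex above the wrong one for roughly 97 of every 100 male/female photo pairs.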
How is it possible? We don’t know yet, but these results are definitely striking. Perhaps the neural network picked up patterns that are too subtle for the human eye.
To better understand this AI “black box”, a deep-learning technique called soft attention was used to identify the anatomical regions that are most important for the model to generate these gender predictions.
Attention maps (saliency or heat maps) were generated, representing what the model was “looking at” in each fundus photo in order to predict gender. These maps were then given to three ophthalmologists, who were asked to identify the highlighted features. The ophthalmologists were blinded to the model’s predictions.
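The study used soft attention, but the intuition behind any such heat map can be conveyed with a simpler, hypothetical occlusion sketch: cover part of the image, see how much the model’s score drops, and mark the regions whose occlusion hurts the most. Everything below (the 6×6 “photo” and the stand-in model) is invented for illustration.

```python
# Occlusion-based saliency sketch -- a simpler cousin of the soft-attention
# maps described above, shown here only to convey the idea of a heat map.
# The tiny 6x6 "fundus photo" and the toy model are invented.

def toy_model(image):
    """Stand-in classifier: scores the mean brightness of the
    top-left 3x3 quadrant (pretend that's the informative region)."""
    vals = [image[r][c] for r in range(3) for c in range(3)]
    return sum(vals) / len(vals)

def occlusion_saliency(image, model, patch=2, fill=0.0):
    """Slide a grey patch over the image; each cell's heat is the
    score drop caused by occluding the patch that covers it."""
    h, w = len(image), len(image[0])
    base = model(image)
    heat = [[0.0] * w for _ in range(h)]
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = [row[:] for row in image]
            for dr in range(patch):
                for dc in range(patch):
                    occluded[r + dr][c + dc] = fill
            drop = base - model(occluded)
            for dr in range(patch):
                for dc in range(patch):
                    heat[r + dr][c + dc] = drop
    return heat

# Bright top-left quadrant, dim everywhere else.
image = [[1.0 if r < 3 and c < 3 else 0.2 for c in range(6)]
         for r in range(6)]
heat = occlusion_saliency(image, toy_model)
# The hottest cells coincide with the region the model relies on;
# cells the model ignores stay at zero.
print(max(max(row) for row in heat))
```

Regions the model never uses produce no score drop and stay cold, which is exactly how the ophthalmologists could read off “vessels” or “optic disc” from the real maps.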
One hundred heat maps were randomly selected and interpreted by the ophthalmologists. They found that 71% of the heat maps highlighted vessels, 78% highlighted the optic disc, and 50% also highlighted non-specific features. In other words, the neural network was looking at a bit of everything but nothing in particular…
Until now, we did not think there were any differences between male and female eyes. How a neural network can tell them apart from just a photo of the retina is yet to be elucidated.
Although predicting gender from a retinal photo isn’t directly useful in patient care, this discovery will guide future research into this previously unknown gender difference. We can be hopeful that a better understanding of the anatomical and physiological differences between the male and female eye will lead to a better understanding of eye diseases and, in turn, benefit drug discovery.