Deep Learning in Ophthalmology — How Google Did It

Susan Ruyu Qi · Published in Health.AI · Sep 26, 2017 · 4 min read

Ophthalmology is arguably the best “test bed” for AI techniques within the healthcare space. The sheer volume of high-quality data generated by advanced imaging methods makes ophthalmology a clear candidate for the advances in computer vision.

A recent project by Google Research is a perfect example: they used over 120,000 retinal images to train a neural network to detect diabetic retinopathy, a leading cause of blindness. The resulting model matches the performance of ophthalmologists. The study, published in JAMA in late 2016, was a major breakthrough in both the AI and the health-tech worlds. Let’s look at the project in more detail.

Objective: To use deep learning for the automated detection of diabetic retinopathy (DR) and diabetic macular edema (DME) in retinal fundus photographs.

What is diabetic retinopathy (DR)? It’s the fastest-growing cause of blindness, affecting more than 20% of the 488 million people living with diabetes worldwide. High blood sugar damages the blood vessels of the retina (the tissue lining the back of the eye, made up of light-sensitive cells). Vision is not affected initially, but without treatment the damage progresses to irreversible blindness. Early detection is therefore crucial in order to administer timely treatment and prevent the disease’s progression.

Diabetic Retinopathy leads to irreversible blindness. If caught early, it can be treated. However, a patient may experience no symptoms early on, making regular screening vital.

The retina is the tissue at the back of the eye. It’s made of cells that capture light signals and relay them to the brain.

Early detection, how? Diagnosis of DR requires direct visualization of the retina by medical specialists, either via eye exam or imaging. Specialists then grade the level of disease based on the presence of characteristic lesions indicative of blood-vessel damage (hemorrhages, microaneurysms, exudates, i.e., bleeding or fluid leakage). However, eye specialists are not available in many parts of the world where diabetes is prevalent. Can machines help?

Machine Learning: Indeed, machine learning had been used for this in the past, but unlike this Google project, earlier work mostly relied on “feature engineering,” which involves computing explicit features specified by experts.

Machine learning with “feature engineering”: detecting pre-specified features

What is Deep Learning and how can it help? Deep learning is a machine learning technique that uses neural networks to model very complex functions. Luckily, there is a well-developed algorithm called “back-propagation” that allows a neural network to learn its parameters automatically from training data. Typically, the more data we feed into the training algorithm, the better the neural net performs.

In other words, instead of programming the machine to detect signs “1, 2, 3” in order to make diagnosis “A or B” (diseased or normal), we now tell the machine that these images are A (diseased) and those are B (normal), and let it learn from them and figure out the distinguishing features on its own.
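To make this concrete, here is a minimal, hypothetical sketch in Keras (not Google’s actual code; the toy image size, layer sizes, and random data are all illustrative). We hand the network labeled examples, and back-propagation tunes the parameters:

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for "these images are A (diseased), those are B (normal)".
# Real fundus photographs are far larger; 32x32 grayscale keeps the sketch small.
images = np.random.rand(100, 32, 32, 1).astype("float32")
labels = np.random.randint(0, 2, size=(100,))  # 1 = diseased, 0 = normal

# A tiny network: a large parameterized function mapping image -> P(diseased).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# fit() runs back-propagation: it repeatedly adjusts the parameters to
# shrink the gap between the network's predictions and the expert labels.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=5, batch_size=16)
```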

How did Google do it?

  1. They used a convolutional neural network (CNN), a specific type of neural network optimized for image classification and the same technique Google uses to label millions of Web images. In other words, it’s a large mathematical function with millions of parameters, and deep learning is simply the process of training this function (a sketch of this setup follows the list).
  2. Training Data Set: 128,175 retinal images from EyePACS (a diabetic-retinopathy screening program) in the US and 3 eye hospitals in India. Each image was graded by 3 to 7 ophthalmologists, drawn from a panel of 54 US-licensed ophthalmologists and senior residents. Grades covered DR, DME and image quality, and serve as the labels for the images.
  3. Validation Sets x2: After training, the algorithm is put to the test on 2 validation sets: 1) a random sample of 9,963 images from EyePACS, not overlapping with the training set; 2) a publicly available data set called Messidor-2, with 1,748 images.
  4. The algorithm’s grading is then compared to that of the ophthalmologists.
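The paper reports an Inception-v3 convolutional architecture, pre-initialized with weights trained on ImageNet. Below is a rough Keras sketch of what that setup looks like; the input size, pooling head, optimizer, and the majority-vote helper are my own illustrative assumptions, not the study’s published code:

```python
import tensorflow as tf

# Inception-v3 backbone pre-trained on ImageNet, as reported in the paper.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))

# A single sigmoid output for P(referable diabetic retinopathy).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy")

# Each image carries 3-7 ophthalmologist grades; one simple (assumed) way
# to collapse them into a single training label is a strict majority vote.
def majority_label(grades):
    return int(sum(grades) > len(grades) / 2)  # e.g. [1, 0, 1] -> 1
```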

How did the Deep Learning algorithm do? Extremely well.

In the setting of disease screening, we want to aim for high sensitivity (so that a negative result confidently rules out disease; i.e., very few false negatives). At that operating point, the algorithm achieved 96–97% sensitivity and 93% specificity! That is comparable to, if not slightly better than, the results of the 8 ophthalmologists grading the same images. Since these 8 ophthalmologists were already the elite of the initial panel of 54 specialists (selected for their high rate of self-consistency), the model is doing extremely well.
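What “choosing an operating point” means in practice: the model outputs a probability, and the screening threshold is set low enough that almost no diseased eye slips through, at the cost of some extra false positives. A small sketch with toy numbers (not the study’s data):

```python
import numpy as np

def sensitivity_specificity(y_true, scores, threshold):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))
    fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    fp = np.sum(pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Lowering the threshold raises sensitivity (fewer missed cases)
# while lowering specificity (more false alarms).
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.1, 0.5, 0.2, 0.7])
sens, spec = sensitivity_specificity(y_true, scores, threshold=0.35)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 1.00, 0.75
```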

Validation Set 1 Performance (black line = algorithm; colored dots = ophthalmologists)

Take home message: Artificial Intelligence will change the way we practice medicine. This deep learning algorithm is already doing better at diagnosing diabetic retinopathy than many physicians. Yet, it is only the first step in bringing AI to ophthalmology. This is the future. Stay tuned!

  • This research, although revolutionary, is not yet mature enough to be implemented in the clinical setting, mainly because the algorithm currently detects only DR and not other common eye pathologies. Its limitations will be discussed in a separate article.

Google has since updated their algorithm. See what’s new here.

Read more here: Machine Learning and OCT Images — the Future of Ophthalmology


Susan Ruyu Qi, MD · Ophthalmology Resident | Clinical AI, innovations in ophthalmology and vision sciences