Training a no-code object detector for fundus eye images

Data is the foundation of AI: without well-annotated data, there is no AI. This is especially true in the deep learning era, where the unreasonable effectiveness of data is well understood and the current generation of AI algorithms is fueled by massive public and private datasets [1]. Yet while medical imaging data is widely available in clinics and hospitals around the world, the tools to create AI algorithms are not in the hands of physicians and domain experts.

Previously, we walked through the tools developed at SemanticMD to help medical imaging researchers manage and annotate data. In this tutorial, you will learn how to apply deep learning to train an object detector for fundus images. Specifically, you will discover how to go from raw data to training and serving an object detector all with no coding required!

Background

Glaucoma is a chronic eye disease that damages the optic nerve and causes vision loss. While at-risk patients are advised to have an eye examination at least once a year, many are not identified early enough due to inconvenience and, in some developing countries, a lack of trained ophthalmologists. AI has the potential to provide a rapid, cost-effective, and convenient screening solution that helps identify these at-risk patients.

One popular glaucoma screening technique is optic nerve head (ONH) assessment, which involves measurements around the optic disk and optic cup. To acquire these measurements automatically, we first need to segment out the optic disk (the bright, yellowish region pictured below).
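One of the most widely used ONH measurements is the cup-to-disc ratio (CDR); a vertical CDR above roughly 0.6 is commonly flagged for further review. Once the disk and cup have been localized, the computation itself is trivial. The sketch below assumes detections come back as (x_min, y_min, x_max, y_max) bounding boxes, an illustrative convention rather than any particular tool's output format.

```python
def vertical_cdr(disc_box, cup_box):
    """Vertical cup-to-disc ratio from (x_min, y_min, x_max, y_max) boxes."""
    disc_height = disc_box[3] - disc_box[1]
    cup_height = cup_box[3] - cup_box[1]
    return cup_height / disc_height

# Example: a disc 120 px tall containing a cup 78 px tall.
cdr = vertical_cdr((100, 100, 220, 220), (130, 121, 190, 199))
print(round(cdr, 2))  # 0.65
```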

Optic Nerve Head (ONH) Measurement

Many methods have been developed for this task, including color/contrast thresholding, contour detection, and region segmentation, but these approaches require significant effort to develop handcrafted features. Deep learning has proven to be an easier and more effective way to build models with both high sensitivity and specificity. Although neural network architectures are highly customizable and can be designed to fit the unique properties of the imaging data (e.g., joint segmentation of the optic disk and cup in fundus imaging [2]), in this tutorial we will focus on combining annotated data with more generic object detectors to quickly yield a practical model. The benefit of this approach is that it creates an easy feedback loop for domain experts to start leveraging deep learning models.
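To make the comparison concrete, here is what a minimal handcrafted brightness-thresholding baseline looks like: pick an intensity cutoff and take the bounding box of everything above it. The toy image and threshold below are purely illustrative; real fundus images would need channel selection, smoothing, and per-image tuning, which is exactly the fragility that learned detectors avoid.

```python
def threshold_box(image, thresh):
    """Bounding box (xmin, ymin, xmax, ymax) of pixels >= thresh in a
    2-D list of intensities: the kind of handcrafted brightness
    heuristic a trained detector replaces."""
    coords = [(x, y) for y, row in enumerate(image)
              for x, v in enumerate(row) if v >= thresh]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return min(xs), min(ys), max(xs), max(ys)

# Tiny synthetic "fundus": the bright patch stands in for the optic disk.
img = [[10, 10, 10, 10],
       [10, 200, 220, 10],
       [10, 210, 230, 10],
       [10, 10, 10, 10]]
print(threshold_box(img, 180))  # (1, 1, 2, 2)
```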

Step 1: Upload data

The first step is to take images in any format (PNG, JPG, SVS, DICOM, etc.) and upload them through the web interface. If you have large volumes of images, or gigapixel images, you can use our native client for faster uploads.

Step 2: Create a workspace

The next step is to set up a web interface for gathering annotations from crowd-sourced domain experts. Here you can pull in data from multiple sources and decide on the number and kinds of questions to ask. For example, you could ask the annotator to classify whether or not the fundus image is abnormal and then have them mark the abnormality.

Step 3: Annotate images

Finally, you’re ready to begin annotating! For object detection, many algorithms suggest a minimum of 30–50 images. As you can see in the demo above, annotation takes no more than 8 seconds per image, so you can gather a minimum training set in under 10 minutes. You can also invite multiple annotators and schedule tasks based on an active learning algorithm to speed up the process.

Step 4: Train a model

Now you’re ready to train an object detector. Just click the “Train Model” button and we’ll send you an email within 30 minutes when your model is done training. If you prefer to use your own algorithm, you can click the JSON link to export all the data and annotations as well as our supplied scripts to convert between formats (PASCAL VOC, MS COCO, CSV). In our next release, we plan to allow researchers to select their favorite object detection algorithm for easy testing and prototyping [3].
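To give a sense of what such a format conversion involves, the sketch below turns one exported annotation record into a PASCAL VOC XML string using only the standard library. The input schema here (filename, width, height, boxes) is a hypothetical example for illustration, not the actual export format.

```python
import xml.etree.ElementTree as ET

def to_pascal_voc(record):
    """Convert one annotation record (hypothetical schema) into a
    PASCAL VOC-style XML string."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = record["filename"]
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(record["width"])
    ET.SubElement(size, "height").text = str(record["height"])
    ET.SubElement(size, "depth").text = "3"
    for box in record["boxes"]:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = box["label"]
        bndbox = ET.SubElement(obj, "bndbox")
        for key in ("xmin", "ymin", "xmax", "ymax"):
            ET.SubElement(bndbox, key).text = str(box[key])
    return ET.tostring(root, encoding="unicode")

record = {"filename": "fundus_001.jpg", "width": 1024, "height": 768,
          "boxes": [{"label": "optic_disc", "xmin": 420, "ymin": 310,
                     "xmax": 560, "ymax": 450}]}
print(to_pascal_voc(record))
```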

Step 5: Evaluate on new images

You now have a real-time object detector for optic disks that’s accessible via the web interface or an API. From here, you can integrate it into your own applications and come back at any time to annotate more data and improve the model.
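If you call the detector from your own application, a typical pattern is to parse the JSON response and keep the highest-confidence box above some score threshold. The response schema below (a "detections" list with label, score, and box) is illustrative only; the actual API fields may differ.

```python
import json

def best_detection(response_text, min_score=0.5):
    """Return the highest-scoring detection above min_score, or None.
    Assumed schema: {"detections": [{"label": ..., "score": ...,
    "box": [xmin, ymin, xmax, ymax]}, ...]}"""
    detections = json.loads(response_text)["detections"]
    hits = [d for d in detections if d["score"] >= min_score]
    return max(hits, key=lambda d: d["score"]) if hits else None

# A mock response standing in for what the API might return.
sample = json.dumps({"detections": [
    {"label": "optic_disc", "score": 0.92, "box": [410, 300, 565, 455]},
    {"label": "optic_disc", "score": 0.31, "box": [10, 10, 40, 40]},
]})
print(best_detection(sample))
```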

Try out the demo along with other fundus algorithms on SemanticMD Cloud.

References

[1] Sun, Chen, et al. “Revisiting unreasonable effectiveness of data in deep learning era.” Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.

[2] Fu, Huazhu, et al. “Joint optic disc and cup segmentation based on multi-label deep network and polar transformation.” arXiv preprint arXiv:1801.00926 (2018).

[3] https://github.com/facebookresearch/Detectron


Need a custom model or advice on what you can build with your data? Feel free to schedule a call with one of our deep learning scientists.