Using Deep Learning for Mammography Assessment

Photo credit: Theglobeandmail.ca

Researchers propose a tissue classification approach as an alternative to all-in-one assessment models

Currently, radiologists require years of training before they are qualified to identify cancer indicators in x-ray images. The results of this analysis are reported using the Breast Imaging-Reporting and Data System (BI-RADS), which depends on human expertise to categorize scan results on a scale of 0 to 6. Under this scale, 0 is inconclusive, 1 is negative for cancer, and 2 represents a benign growth; any category above 2 indicates that the clinician has identified positive markers for cancer. Lakshmi Subramanian, CDS affiliate and Associate Professor of Computer Science, Ulzee An, NYU Master’s Student in Computer Science, and Khader Shameer of Northwell Health explored the application of deep learning to mammography in their recent publication. In their work, Subramanian and the research team broke the process down into “a classification task specializing in discriminating tissue expressions locally, then a full context heatmap regression model which guides the aggregation of local results.”
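The two-stage idea the team describes — score small tissue patches locally, then let a full-context heatmap guide how those local scores are aggregated — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `classify_patch` stands in for their trained tissue classifier, and the heatmap is assumed here to share the patch grid's shape.

```python
import numpy as np

def classify_patch(patch):
    """Hypothetical stand-in for the trained tissue classifier:
    returns a probability that the patch holds a substantial finding.
    Mean intensity is used here purely as a placeholder score."""
    return float(patch.mean())

def patch_scores(image, patch=32, stride=32):
    """Slide a window over the scan and score each patch locally."""
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            scores[i, j] = classify_patch(image[y:y + patch, x:x + patch])
    return scores

def aggregate(scores, heatmap):
    """Weight local classifier scores by a full-context heatmap
    (assumed to match the patch grid) before pooling to one score."""
    return float((scores * heatmap).max())

rng = np.random.default_rng(0)
scan = rng.random((128, 128))       # stand-in for a preprocessed x-ray
scores = patch_scores(scan)         # local tissue assessments
heatmap = np.ones_like(scores)      # stand-in for the regression output
image_score = aggregate(scores, heatmap)
print(image_score)
```

The point of the weighting step is that a patch classifier sees only local texture; the full-context heatmap supplies the global view that decides which local results should dominate the final assessment.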

The Digital Database for Screening Mammography (DDSM), which provides “upwards of 2600 cases with both CC and MLO angle x-rays,” is a popular jumping-off point for applying deep learning to mammography. The DDSM includes hand-drawn outlines that aim to mark groupings of suspicious tissue. While useful, these annotations are imprecise for training models. Subramanian and the research team “propose a classification approach grounded in high performance tissue assessment as an alternative to all-in-one localization and assessment models that is also capable of pinpointing the causal pixels.” The approach aims to “rectify the issue of loose annotations” in the DDSM, which arise for various reasons, including that the clinician may simply want to illustrate the general area affected. Effective application of deep learning to these problems could retroactively increase the identification of malignant tumors in screenings by more than 30%.

The researchers used a saliency visualization method and three different magnification levels (×0.5, ×0.33, and ×0.25) on pre-prepared scans. Ultimately, the classifier using the ×0.5 scale yielded the best results. The researchers commented, “The objective of the classifier in our setting was defined as predicting the absence or presence of substantial findings which could be benign or malignant in patches of tissue. This corresponds to BI-RADS assessments of either ≤ 1 or ≥ 2.”
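The multi-scale setup and the binary training target can be illustrated with a short sketch. This is an assumption-laden simplification: the `rescale` helper below is a crude nearest-neighbor downsampler standing in for whatever resampling the authors actually used, and `binary_label` encodes the stated mapping of BI-RADS categories to an absence/presence target.

```python
import numpy as np

def rescale(image, factor):
    """Crude downscaling by index sampling at the given factor
    (a real pipeline would use proper interpolation)."""
    h, w = image.shape
    idx_y = (np.arange(int(h * factor)) / factor).astype(int)
    idx_x = (np.arange(int(w * factor)) / factor).astype(int)
    return image[np.ix_(idx_y, idx_x)]

def binary_label(birads):
    """Map a BI-RADS category to the binary classifier target:
    0 for categories 0-1 (no substantial finding),
    1 for categories >= 2 (benign or malignant finding)."""
    return int(birads >= 2)

rng = np.random.default_rng(1)
scan = rng.random((200, 200))          # stand-in for a prepared scan
for factor in (0.5, 0.33, 0.25):       # the three magnifications tried
    print(factor, rescale(scan, factor).shape)

print(binary_label(1), binary_label(3))
```

Training the same patch classifier at several magnifications is a common way to probe which scale of tissue context carries the signal; here the coarsest loss of detail (×0.25) evidently traded away too much, with ×0.5 performing best.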

Subramanian and the research team identified several challenges inherent to mammography, including the fact that x-rays are two-dimensional representations of three-dimensional structures. As a result, scan representations are highly noisy and the “separation between tumorous and clear tissue can be…gradual,” making it hard to determine where the affected tissue begins and ends. Even so, the deep heatmap regression model yielded positive results, and the researchers concluded that the “performance of tissue classification approached state-of-the-art.”

By Sabrina de Silva