Enabling a dialogue between technology & radiologists

The amount of patient data clinical practices must store and retrieve is growing by orders of magnitude. Approximately 80% of this data is both unstructured and clinically relevant, and the majority of that unstructured data resides within medical images. The storage volume needed for medical images is predicted to exceed 1,000 petabytes by 2018.

Machine learning (ML), a powerful artificial intelligence tool, has been applied across many industries and has been changing the way businesses analyze the vast amounts of structured and unstructured data available to them. Within healthcare, ML has recently been receiving growing recognition, particularly for analyzing the unstructured, clinically relevant information within medical images in order to identify key patterns quickly and precisely.

Within the medical industry, healthcare organizations have been increasingly adopting decision-support and computer-aided detection (CAD) software to enhance radiologists’ workflows. Despite the availability of these applications, radiologists rarely use these ‘expert systems’ because they lack the sensitivity and specificity to identify clinically pertinent information. Technologies such as CAD do not replace a doctor’s experience, domain expertise, and intuition, and physicians need to be part of the process of designing these systems.

CAD software has been disappointing radiologists for over four decades. Radiologists approach CAD results with extreme suspicion, as they should, especially with ‘expert’ systems that are sometimes deemed to oversimplify clinical problems. A 2011 study published in the Journal of the National Cancer Institute found that using CAD for mammography screening decreased screening specificity without improving cancer detection rates or the detection of characteristics such as tumor size or lymph node status. The level of training required to become proficient with CAD far outweighs its current value in specialties such as mammography, where the false-positive rate is 0.5 per mammographic image, making it difficult for radiologists and clinicians to trust and engage with the technology. In short, physicians have been aligning themselves with the software in their workflows instead of the other way around.

Currently available CAD and decision-support tools need to be redesigned by physicians, and that redesign needs to happen through easy-to-use interfaces so the software can better capture how physicians actually reason when analyzing medical images. What is needed is simply more conversation between these expert systems and physicians, facilitated by interactive tools embedded in their workflow: conversational user interfaces, if you will.

What is apparent is that ML in medical image analysis is here to stay. Numerous technology companies are already making headlines with acquisitions, partnerships, and collaborations in the space (IBM’s acquisition of Merge Healthcare, MetaMind’s partnership with vRad, Enlitic’s global partnership with Capital Health, and USARAD Holdings’ strategic partnership with Zebra Medical).

With more communication between physicians and technology through interactive, conversational tools, the future of healthcare looks promising: transformational improvements in both the affordability and the quality of care.