A CNN article entitled “What happens when automation comes for highly paid doctors?” discusses the growing trend toward interpreting diagnostic imaging tests (X-rays, CT scans, MRIs, ultrasounds, etc.), traditionally read by a doctor, usually a specialist or a radiologist, through computer vision and machine learning algorithms trained on extensive archives collected over many years of practice.
The use of diagnostic imaging tools has grown, in part due to reduced costs: where before a physician had to manually process a few images, it is now perfectly customary for a single test to yield hundreds or even thousands of images in thin slices, a process that requires considerable attention and where the probability of error due to fatigue or lapses of concentration is very high.
Can an algorithm really make a diagnosis from images? Definitely. Can it also do so better than a trained professional? Everything indicates that, as these algorithms are trained on more and more images and their subsequent diagnostic outcomes, this possibility is becoming a reality; what’s more, the probability of overlooking an important indicator in an image is significantly lower than when the diagnosis is carried out by a human.
I discussed this at the most recent Netexplo with Pooja Rao, co-founder of Indian startup Qure.ai, one of the companies that won a prize at the event. Pooja, whom I interviewed briefly on stage, had the perfect background to comment on the subject: in addition to being a physician, she had co-founded a company dedicated to diagnosis from images through machine learning, working with doctors she tried to persuade to contribute to training her company’s algorithms on the basis of a very simple argument: the possibility of obtaining better, more accurate and more consistent diagnoses, less likely to overlook key elements.
A medical diagnostic image is a digitized or, increasingly, directly digital file. Turning these sequences of pixels into elements that can be processed algorithmically falls perfectly within the possibilities of machine learning, in an area, that of images, where much progress has been made in recent years. We are close to a time when the analysis of an image is carried out directly after it has been taken, or even during the test itself, allowing more exhaustive sampling of certain areas, or even one in which doctors lose the ability to use this diagnostic method for lack of practice. At this point, an algorithm can process and interpret a heart MRI, for example, in about fifteen seconds, a task that can take a cardiologist or radiologist about 45 minutes.
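To make the idea concrete, here is a minimal sketch of the underlying mechanism: an image is just a grid of pixel intensities, and machine learning systems for imaging turn those pixels into features by sliding small filters (convolutions) over the grid. The 3x3 filter and the tiny synthetic “scan” below are invented for illustration; real diagnostic systems learn thousands of such filters automatically from large archives of labeled studies, rather than using a hand-coded one.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding) over nested lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 5x5 synthetic "scan" with one bright spot, a stand-in for an anomaly.
scan = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

# A simple hand-written blob-detector kernel; a trained network would learn
# filters like this from data instead of having them coded by hand.
kernel = [
    [0, 1, 0],
    [1, 2, 1],
    [0, 1, 0],
]

response = convolve2d(scan, kernel)
# The strongest filter response sits directly over the bright spot,
# which is how a stack of learned filters "points at" suspicious regions.
peak = max(max(row) for row in response)
print(peak)  # → 18
```

A real system stacks many such layers, interleaved with nonlinearities, and ends with a classifier over the resulting features; the principle, pixels in, learned evidence out, is the same.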
In that case, what role is left for specialists? Simply, to order the diagnostic test and interpret an analysis that will have been made by an algorithm. The radiologist would become an advanced interpreter of these diagnoses, an instrument manager who follows the signs flagged by an algorithm and tries to offer additional diagnoses, or perhaps carries out a second manual analysis based on the clues the algorithm has found. The question is whether this is substitution, or specialized assistance that enhances the capabilities of the practitioner. Would radiologists lose their jobs, or should they simply train to use a more powerful diagnostic tool, one able to see what even a well-trained clinical eye could not?
Will we see a time when image diagnosis is automatically carried out by an algorithm because it is better at spotting indicators or yields fewer false positives? Will algorithmic processing lead to many more images being taken, given that it is no longer a doctor who must review them all one by one, resulting in better diagnostics? Could such a process be seen as negative?
(In Spanish, here)