Despite AI, the Radiologist is here to stay.
The overwhelming hype around artificial intelligence in radiology, and in medicine in general, is nauseating. Last year Geoffrey Hinton, a world-renowned computer scientist, stood in front of a large crowd at the Creative Destruction Lab in Toronto and proclaimed that radiologists would be out of a job in 5–10 years. His statement, “We should stop training radiologists right now!” hit headlines and sent chills down the spines of radiologists around the globe. Here’s why Geoffrey Hinton is wrong.
Artificial intelligence performing automated image recognition in the field of radiology is an intriguing concept. The idea that a computer could take a patient’s medical images and identify the abnormality without human supervision is mind-blowing. It’s the kind of thing only seen in science fiction, and it’s completely understandable why the world is fascinated with the idea. Cyborg radiologists could be faster, more portable, and generally more affordable than their human counterparts. The potential benefits for mankind from technology like this are massive.
The problem with this idealistic thinking is that it is not grounded in the reality of how healthcare functions and how medical technology is developed. To explain, let’s imagine for a second that a fictional company, CyborgRad, is on a mission to create the world’s first completely autonomous, artificially intelligent radiologist.
The first step for CyborgRad would be to create powerful A.I. algorithms able to interpret images. To do this, a massive dataset of teaching files (images) would need to be collected and labelled. It is estimated that to train an accurate neural network, a minimum of roughly 1,000 training files per finding are required, give or take. This means that CyborgRad would need to collect a thousand images of every diagnosis and abnormality ever defined in the medical textbooks. This would be a massive task even for common diagnoses, such as pneumonia or fractures, never mind for diagnoses that occur in one in a million people or fewer. And these estimates are for images that clearly reveal the abnormality. For more subtle findings, such as a slight streak of pneumonia or a greenstick fracture in a child, another thousand-plus images would be required. This alone would be a monumental task that the industry is nowhere near accomplishing.
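To make the scale concrete, here is a minimal sketch of the coverage audit CyborgRad would face: checking every diagnosis in its label set against the rough 1,000-image rule of thumb. The diagnosis names and counts below are invented for illustration, not real data.

```python
# Rough rule-of-thumb threshold for training an accurate classifier,
# as discussed above. Counts below are fabricated for illustration.
MIN_EXAMPLES = 1000

image_counts = {
    "pneumonia": 48210,               # common finding: easy to collect
    "distal radius fracture": 9350,
    "greenstick fracture": 612,       # subtle pediatric variant: scarce
    "one-in-a-million syndrome": 3,   # rare disease: nearly impossible
}

def undertrained(counts, minimum=MIN_EXAMPLES):
    """Return the diagnoses that lack enough labelled training images."""
    return sorted(d for d, n in counts.items() if n < minimum)

print(undertrained(image_counts))
# ['greenstick fracture', 'one-in-a-million syndrome']
```

Even in this toy version, the rare and subtle findings fail the threshold, which is exactly the point: the long tail of diagnoses is where the data will never be plentiful.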
Having said this, there are billions of people on Earth, so collecting a large volume of medical images is definitely within the realm of reason. Also, supervised machine learning is different from unsupervised machine learning, and as the latter is further developed we may see improved efficiency in training these systems.
So let’s assume CyborgRad accomplished the task of finding at least 1,000 representative images of every diagnosis and abnormality known, from every textbook and case report ever written. The company would then need to ask all of those patients for permission to use their images. Everything in the patient record, whether a lab test, nursing note, medication prescription, or imaging study, is confidential personal information. The effort required to locate, contact, and request permission from the patients behind millions, if not billions, of images is staggering. The imaging files would also need to be de-identified, which in and of itself is a challenging task, and de-identification does not remove the need to ask patients for permission to use their images.
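De-identification itself is worth a sketch. At a minimum it means stripping every direct identifier attached to an imaging study while keeping the clinically useful fields. The field names below mimic common DICOM metadata attributes, but the record and helper function are hypothetical simplifications.

```python
# Illustrative de-identification of imaging-study metadata.
# Field names resemble DICOM attributes; the record is fabricated.
IDENTIFYING_FIELDS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "ReferringPhysicianName",
}

def deidentify(record):
    """Drop direct identifiers, keeping clinically useful fields."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

study = {
    "PatientName": "DOE^JANE",
    "PatientID": "123456",
    "PatientBirthDate": "19700101",
    "Modality": "CR",
    "BodyPartExamined": "CHEST",
    "StudyDescription": "PA and lateral chest",
}

print(deidentify(study))
# {'Modality': 'CR', 'BodyPartExamined': 'CHEST',
#  'StudyDescription': 'PA and lateral chest'}
```

Real pipelines are far harder than this: identifiers can be burned into the pixel data itself, and, as noted above, de-identification does not remove the need for patient consent.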
But for argument’s sake, let’s say that CyborgRad was able to find, and gain permission from, patients representing at least 1,000 images of every diagnosis known to man. Let’s also assume that its deep-learning algorithms were as accurate at identifying diagnoses as a radiologist, which is of course a wildly generous assumption given how immature the science still is. The next step for CyborgRad would be regulatory approval.
Regulatory approval is required for all medical technology, and applications must be approved by bodies like the Food and Drug Administration in the U.S. or Health Canada. To date, “not one single company has ever [received] FDA approval for a clinical diagnostic device that is not overseen by a human,” as Hugh Harvey noted in a recent article on the subject. The same is true in Canada and most likely in other countries, and it is probably one of the most convincing and important reasons why Geoffrey Hinton is irrefutably wrong.
The reason doctors are paid the big bucks is not just their knowledge, but their liability. Removing the doctor from the healthcare equation is an unprecedented medico-legal and ethical conundrum that has yet to be fully explored. To explain it further, let’s step outside of medicine for a moment and look at the commercial airline industry. For longer than I’ve been alive, airlines have used automated flight systems. “Autopilot technology already does most of the work once a plane is aloft, and has no trouble landing an airliner even in rough weather and limited visibility,” stated Jack Stewart in Wired. Even with this advanced and capable technology, pilots are still present on every flight. An expert is required for when the system malfunctions or is presented with a situation it cannot handle. Further, airlines and passengers alike want that expert there for peace of mind. “Humans are still better than computers at quickly assimilating unrelated facts and acting on them. Consider, for example, Captain Chesley Sullenberger’s landing on January 15, 2009, when he successfully avoided a crash by navigating an Airbus A320 into the Hudson River,” wrote Arnold Reiner in The Atlantic. It’s at times like these that the experts take over and take control. It’s their job, they’re responsible, and they are liable if something goes wrong.
Further, I don’t think big corporations want the responsibility of being liable for every decision their software makes. The auto industry is currently facing this conundrum. Tesla builds remarkably advanced Autopilot driving capabilities into its vehicles, yet a driver is still required in the driver’s seat. The driver actually needs to touch the steering wheel every once in a while to signal to the car that they are still paying attention, in charge, responsible, and liable. Becoming liable for every one of its cars on the road would be an enormous legal risk for a company like Tesla. Imagine how quickly the company would go bankrupt from legal fees and settlements if a malfunction occurred in just a fraction of its vehicles. It’s not worth the risk.
So, that is why Geoffrey Hinton is wrong. Radiologists will not be out of a job in 5 to 10 years. Planes will continue to need pilots, cars will continue to need drivers, and hospitals will continue to need radiologists. The significance of A.I. is still to be determined, but I caution those who expect the hype to become a reality. Hype rarely does.