Hacking Healthcare — Why Our Future Medical Data May Be Under Attack
This week’s paper can be found here: Adversarial Attacks Against Medical Deep Learning Systems
Healthcare costs have been increasing to the point where medical care may become unaffordable for a significant portion of the US population in the near future. Many solutions have been proposed to slow or reverse these cost increases, ranging from federal healthcare programs (such as the ACA) that aim to make medical care more efficient to limiting medical appointments to absolute emergencies.
One of these solutions is the idea of implementing artificial intelligence into medical practice. Ideally, this would reduce time spent on menial or recurring tasks, prevent inefficiency in the healthcare system, predict chronic conditions before they become irreversible, and provide better medical care to patients across the board. In fact, I’ve advocated for this solution in an opinion piece in the past.
However, researchers at MIT and Harvard Medical School have recently published work warning us that the healthcare system may not be ready for deep learning. As with any other software, medical deep learning systems can be targeted by malicious actors, and their work reveals that medical systems may be more susceptible than most to one type of attack in particular — adversarial attacks.
The power of adversarial attacks has been demonstrated in many areas of deep learning, including in a test of Google’s deep learning-based image recognition software (Spoiler: It failed). By making small changes to an input — changes so subtle that a human would still classify it the same way — an attacker can fool a deep learning algorithm into misclassifying it, which in a medical setting could mean incorrect patient diagnoses and potential patient harm.
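To make the idea concrete, here is a minimal sketch of the fast-gradient-sign style of attack on a toy linear "diagnosis" classifier. Everything here — the weights, the feature values, and the classifier itself — is hypothetical and chosen for illustration; it is not the paper's actual models or data. The point it demonstrates is the core mechanism: for a linear model, the gradient of the score with respect to the input is just the weight vector, so a tiny, bounded nudge against the sign of each weight is the most efficient way to flip the prediction.

```python
# Hypothetical illustration of an FGSM-style adversarial perturbation on a
# toy linear classifier (not the models studied in the paper).

def score(w, x, b):
    """Linear decision score: positive -> 'disease', negative -> 'healthy'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(w, x, eps):
    # For a linear model the gradient of the score w.r.t. x is w, so the
    # worst-case perturbation bounded by eps per feature is -eps * sign(w)
    # (pushing a positive prediction toward negative).
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

# Hypothetical learned weights and a "clean" input the model labels positive.
w = [0.9, -0.4, 0.7]
b = 0.0
x = [0.05, -0.02, 0.04]

clean_score = score(w, x, b)            # confidently positive
x_adv = fgsm_perturb(w, x, eps=0.05)    # each feature moves by at most 0.05
adv_score = score(w, x_adv, b)          # prediction flips to negative
```

Even though no feature changes by more than 0.05 — a shift a human reader of the record would likely never notice — the classifier's output flips. The paper's point is that the same mechanism scales up to deep networks reading medical images.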
Originally published at www.jordanharrod.com.