The Problem With Machine Learning In Healthcare

Models will run the world, but we should proceed with caution

Aug 24, 2018 · 4 min read

An article from the Wall Street Journal has been floating around online recently, discussing how models will run the world. I believe there’s a lot of truth in that. Machine learning algorithms and models are becoming ubiquitous and increasingly trusted across industries. This, in turn, will lead us to spend less time questioning the output of these algorithms and simply let the system give us the answer. We already rely on companies like Google, Facebook, and Amazon to suggest date ideas, remind us of friends’ birthdays, and tell us which products are best. Some of us don’t even think twice about the answers we receive from these companies.

As a data engineer who works in healthcare, this is both exciting and terrifying. Over the past year and a half, I have spent my time developing several products to help healthcare professionals make better decisions, specifically targeting healthcare quality, fraud, and drug misuse.

As I was working on the various metrics and algorithms, I constantly asked myself a few questions:

How will this influence patient treatment?

How will it influence the doctor’s decision?

Will this improve the long-term health of a population?

In my mind, most hospitals are run like businesses, but there is some hope that their goal isn’t always just the bottom line. My hope is that they are trying to serve their patients and communities first. If that is the case, then the algorithms and models we build can’t just be focused on the bottom line (as they often are in other industries). Instead, they need to consider how a metric will impact the patient, how it may impact the patient’s overall health, and how it could change a doctor’s behavior, potentially for the worse.

For instance, the Washington Health Alliance, which does a great job of reporting on ways to improve healthcare from both a cost and a care perspective, wrote a report focused on reducing healthcare costs by cutting wasteful procedures. That’s a great idea!

In fact, I worked on a similar project, which is when I started to wonder: what happens when some doctors over-adjust? I am sure many doctors will appropriately recalibrate their processes. However, what about the ones who overcorrect?

What happens when some doctors try to correct their behavior too much and cause more harm than good because they don’t want to be flagged as wasteful?

Could we possibly cause doctors to miss obvious diagnoses because they are so concerned about costing the hospital and patients too much money? Or worse, perhaps they will rely too heavily on their models to diagnose for them in the future. I know I have over-adjusted my own behavior in the past when given criticism, so what is to stop a doctor from doing the same? There is a fine line between helping a human make a good decision and forcing them to rely on the thinking of a machine (think of Google Maps: how many of us still remember how to get anywhere without it?).

But are you thinking because you were told to think…or because you know what you are doing?

There is a risk of focusing more on the numbers and less on what the patients are actually saying.

Doctors focusing too much on the numbers and not on the patient is a personal concern of mine.

If a model is wrong for a company selling dress shirts or toasters, that means a missed sale or a missed quarterly goal. A model being wrong in healthcare could mean someone isn’t properly treated, or dies.

So as flashy as it can be to create systems that help us make better decisions, I do wonder whether humans have the discipline not to rely on them for the final say.

As healthcare professionals and data specialists, we have an obligation not just to help our companies, but to consider the patient. We need to be not just data-driven but human-driven.

We might not be nurses and doctors, but the tools we create now and in the future will directly influence the decisions nurses and doctors make. We need to take that into account. As data engineers, data scientists, and machine learning engineers, we have the ability to build tools that amplify the abilities of the medical professionals we support. We can make a huge impact.

I agree: models will slowly start to run more and more of our world (they already do in areas such as trading, some medical diagnoses, purchasing at Amazon, and more). This means we need to think through all the operational scenarios and consider all the possible outcomes, both good and bad.

Better Programming

Advice for programmers.


Written by

#Data #Engineer, Strategy Development Consultant and All Around Data Guy #deeplearning #machinelearning #datascience #tech #management
