What (another) Emergency Medicine Physician thinks about ML in healthcare

A summary of my interview with a second Emergency Medicine Physician. This is one of the 18 interviews with clinicians I conducted for my MPH capstone (link here) at UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming interview summaries and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“You can win a Kaggle competition on performance, but it doesn’t mean much when I am thinking about using it with patients’ lives.”

Job Background

I am the VP of Digital Health for a nonprofit health system and a practicing ER physician. I was trained in Emergency Medicine and am board-certified in Clinical Informatics. While I still practice some medicine to help patients and stay connected to healthcare delivery, most of my time is spent on healthcare administration, identifying and implementing strategies that can reconfigure how care is provided. I am the person who looks at what is next in healthcare and tries to bring it into my health system — from new technologies, to new partnerships, to new business models. With regard to ML, I helped lead all three of our health system’s partnership arrangements with ML companies. I expect to be working for roughly 15 more years.

Familiarity with ML in Healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

I probably have more knowledge about ML than the average person in healthcare, given my background and current focus. I have learned that you say AI when you are in PowerPoint, and you say ML when you are actually doing the work. People have the expectation that ML is god-like and will solve everything, but it is actually quite brittle and has many limitations, even though there has been huge progress in recent years. To be clear, we are nowhere near the point where ML will be a standalone, autonomous replacement for what human clinicians are doing right now.

I am seeing that ML solutions are largely still in development; there aren’t many ready options to pull off the shelf. Where I do see things being used is in lower-stakes spaces, like population segmentation or assisting with image processing, such as nodule labeling.

Past and Future Use

Does your health system use ML tools?

Yes. We use it in our population health efforts as a patient segmentation tool that constantly scans our EHR records for patients who need higher-intensity care management programs. We are using K-nearest-neighbor-type algorithms. What’s hard here is that there is no gold standard or right decision on what to do for these patients at the individual level; who needs these extra services is very subjective.
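As a rough illustration of this kind of segmentation, here is a minimal sketch of a K-nearest-neighbor look-alike ranking. The features, values, and enrolled cohort are hypothetical and stand in for whatever a real system would pull from the EHR; this is not the health system’s actual setup.

```python
# Minimal sketch: ranking patients for care-management outreach with a
# K-nearest-neighbor approach. All feature names and values are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical per-patient features:
# [age, ED visits last year, inpatient admissions, active chronic conditions]
patients = np.array([
    [34, 0, 0, 1],
    [67, 3, 1, 4],
    [72, 5, 2, 6],
    [45, 1, 0, 2],
    [59, 4, 1, 5],
], dtype=float)

# Indices of patients already enrolled in a high-intensity care management
# program (the "seed" cohort we want to find look-alikes for).
enrolled = [1, 2]

# Scale features so no single one dominates the distance metric.
X = StandardScaler().fit_transform(patients)

# For every patient, measure distance to the nearest enrolled patient;
# a smaller distance means more similar to patients who needed extra help.
nn = NearestNeighbors(n_neighbors=1).fit(X[enrolled])
distances, _ = nn.kneighbors(X)

# Rank the non-enrolled patients by similarity for human review.
ranking = sorted(
    (i for i in range(len(patients)) if i not in enrolled),
    key=lambda i: distances[i][0],
)
print("Suggested outreach order (patient indices):", ranking)
```

Note that the output is an ordering for a human to review, not a decision — which matches the point above that there is no gold-standard answer at the individual level.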

On the patient-facing side, we just started to deploy a low-acuity, ML-supported triage chatbot from a startup. Patients interact with the chatbot, answering various questions about their symptoms, and the information is then shared with a clinician, who makes the final decision on where to direct the patient.

There are companies coming to sell me on ML tools that can replace my clinicians. I am not bitter or resentful about this; I am simply concerned that their technology is unproven and shouldn’t take humans out of the loop just yet. There are parts of medicine that are algorithmic, but a lot of it is still human. The average patient doesn’t understand statistics or medicine, so we need human clinicians to simplify the ML outputs for them.

Excitement and concern

What about ML in healthcare is concerning or exciting for you?

I am very excited about getting help with the attention problem: who and what do you need to be paying attention to? When looking at a patient population, there are some patients who need extra help. You may not get it perfectly right, but getting a ranking is helpful. These triage tools are more common now, but when you zoom in to a decision on an individual patient, like a diagnosis, predictions can go awry, especially if you are demanding a binary yes/no answer. We will probably get to the robot clinician in the exam room someday, but we aren’t near it now. Smart distributions and averages are where much of the magic is today.
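To make the ranking-versus-binary point concrete, here is a minimal sketch; the patient IDs, risk scores, and threshold are all hypothetical.

```python
# Minimal sketch: the same risk scores are useful as a ranking for outreach
# even when a hard yes/no cutoff would misfire. All values are hypothetical.
risk_scores = {"pt_a": 0.62, "pt_b": 0.58, "pt_c": 0.31, "pt_d": 0.07}

# Population view: rank everyone and flag the top patients for a human to
# review; small scoring errors rarely change who lands near the top.
outreach_order = sorted(risk_scores, key=risk_scores.get, reverse=True)
print("Review first:", outreach_order[:2])

# Individual view: a binary diagnosis forces a threshold, and patients near
# it (pt_a, pt_b here) flip between yes and no with tiny score changes.
THRESHOLD = 0.60
for pt, score in risk_scores.items():
    print(pt, "positive" if score >= THRESHOLD else "negative")
```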

I am concerned about the availability of data for building strong predictive models. In healthcare, we have kept data private, and for good reason. However, this limits research and model development, leading to subpar solutions. There are also inherent human-related issues with the data. For example, the same X-ray can lead to very different diagnoses when read by different radiologists.

I am also concerned about some of the biases that can get perpetuated — whether it be biased data or biased labels. The fact of the matter is that there is no knowledge or context here; these ML tools see a pattern of pixels that they think they have seen before and then say “pneumonia”. If not done well, this can have significant negative impacts on social justice.

Ethics and privacy

Where are your concerns from an ethics and privacy perspective?

I am coming at this from a more pragmatic, or you could say pessimistic, perspective. The fact of the matter is that Big Tech companies already know everything about me, including where I am at this exact moment. Yes, appropriate regulations should exist, and broadly they do. The basic framework of HIPAA and the intent of health systems today are fine. We may need slight updates in this new world of ML, but I don’t think there needs to be a fundamental change or shift.

There is a balance you need to strike. If we want to make better algorithms to help more people, then we need access to data; however, we need to be careful about privacy and hedge against unintended consequences. I think it is important to limit what you share outside your health system’s walls. I also think it is important to gather as much outside data as possible and use it in novel ways to help patients.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

It is not that important for clinicians to understand how an ML solution makes a prediction. It would be hard to find a clinician who could name an ML model, let alone explain its details or whether a certain approach is right. Clinicians want to know what data are being used, such as claims, and the performance metrics. They don’t need to know the math behind it.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see?

If we are buying an off-the-shelf solution, then I am going to look at a number of performance metrics. Some may include R^2 and AUC; AUC, for instance, is a way of getting at the tradeoff between sensitivity and specificity. I also want to see some external body like the FDA give it the stamp of approval. You can win a Kaggle competition on performance, but it doesn’t mean much when I am thinking about using it with patients’ lives.
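For concreteness, here is a minimal sketch of computing these metrics with scikit-learn; the labels, scores, and 0.5 cutoff are hypothetical. AUC is threshold-free, while sensitivity and specificity depend on the chosen cutoff.

```python
# Minimal sketch of the metrics mentioned above, on hypothetical data.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                      # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9]   # model risk scores

# AUC summarizes the sensitivity/specificity tradeoff across all thresholds.
print("AUC:", roc_auc_score(y_true, y_score))

# Pick one threshold to get binary predictions, then derive sensitivity
# (true positive rate) and specificity (true negative rate).
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
```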

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

A hugely valuable ML solution would be something that understands someone’s behavior outside of the health system and knows how to best engage with that person to drive changes. In short, it is a utilization management tool. For example, this solution could have the ability to gather online consumer behaviors and other data streams to predict who is going to engage with the healthcare system, in what pattern, and how to effectively interact with them and change behavior.

An ML company would do this because they would know that it is incredibly valuable to a health system. With value-based care, health systems are taking on more risk and are thus trying to find ways to manage it effectively. I see a lot of attention and money going into this space. Our health system now needs the ability to predict risk and push unnecessary utilization down.

See, we don’t have an organized CRM approach to healthcare. Instead, we have big databases that are transactional: a really big and complicated medical log file with timestamps. We don’t have the ability to say, “You panic for small medical things and prefer texts before you leave the house; yet your buddy with the same issue needs a Snapchat in the afternoon.” Something like this would be insanely valuable and is the core competency of the Big Tech companies.

Buying process

Where you practice medicine, who all would be involved in making the decision to purchase and use an ML tool?

I have to tell you that it is very hard to navigate a health system as an outside vendor. We have a complicated org chart. For the deals I was part of, we included the VP of Population Health, the President of the physician organization, the Chief Strategy Officer, and the Chief Financial Officer.

Given the scope of my role, I spend a lot of time evaluating outside solutions and then present those select few that I think are best. Our health system does have an “NIH” issue — not invented here. We do like to develop things in house, and in some ways for good reason. We know our systems, processes, people, and patients. But we do need to also consider outside offerings, and I am happy to say that there has been a small but growing group that appreciates this.

What data, references, and promises would they need to learn about to ultimately say yes or no?

In order to make a decision on an ML tool, we need to know a number of things about the dataset, such as its size, its richness, whether it is multi-site, how clean it is, and how comparable it is to our health system’s population.

When we are developing or co-developing ML tools, we consider the team, its brand, and its experience in healthcare. It is hard for some ML engineers in a garage to create a product worth using in healthcare. Minor performance gains are less important in a health system than they are in a Kaggle competition; you need to really know how healthcare works and how to integrate into a health system. If you don’t have the knowledge, or to be honest, the patience to work well with both IT and clinicians, it just won’t work out for you. Therefore, it is hard for me to respond to a cold email from an unknown ML startup. I am more excited about Big Tech brands that I know and trust.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.