What (another) Internal Medicine Physician thinks about ML in healthcare

A summary of my interview with a second Internal Medicine Physician. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“In a world with full-blown ML, training clinicians would be totally different. I would have to be both a data scientist and a clinician. Our jobs would be all about communicating with patients and communicating with models — being data science and medical science translators.”

Job Background

I am a Primary Care Physician at an integrated delivery network. I spend about 80% of my time delivering care to patients and the remainder on teaching and administrative work. I see patients in standard 20-minute appointments, about 20 times per day, five days per week. These patients present with a wide range of needs, from planned preventive care to urgent care. I have been an attending physician for five years now and think I have about 25–30 more years of patient care in me.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

For a clinician, I have heard a lot, naturally, because someone in my family works at an ML tool company. There are a lot of data scientists on their team, so sometimes when “work comes home,” I hear about it. I understand basic concepts like training vs. test sets as well as that there are various types of algorithms.
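
For readers new to these terms: a model is fit on a training set and then scored on a held-out test set to estimate how it will behave on unseen patients. Below is a minimal sketch in Python using scikit-learn; the data are entirely synthetic and purely illustrative, not from this interview or any patient record.

```python
# Minimal sketch of a train/test split. All data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                          # e.g., five vital-sign features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # synthetic binary outcome

# Hold out 20% of "patients" to estimate performance on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```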

Past and future use

Have you used any ML tools? Would you?

Unfortunately, I think that ML is pretty far away from my clinic. I would be excited about using something like that. There are many good uses of ML in healthcare, and I think they should be adopted where possible.

It is hard for me to conceptualize how I might use ML in primary care, since so much of my job is relational and depends on human connection and emotions. But yes, I would consider myself an early adopter, given my family member’s line of work.

Excitement and concerns

What about ML in healthcare is exciting for you? What is concerning?

I am concerned about how bias can get into training data, and how that can make patient care worse for us. I also honestly worry about how ML might impact the art of medicine. If it replaces clinicians, then it would remove some of the most important parts of the patient experience, like physical touch, empathy, motivational interviewing, and education. In this dystopian future, medicine would lose the human connection.

On the other hand, we have to admit to ourselves that, quite frankly, we clinicians are not really good at our jobs all of the time. There are serious limitations to medical science: many diagnoses are not well understood, and what is available to treat patients is limited. I think we have only scratched the surface of medical knowledge, and ML could help us make big steps forward. I am also excited about ML reducing the menial parts of my job, so that I can focus on empathy and human connection.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

I personally tend to be less of a skeptic about privacy than other people. The data need to be de-identified, of course, and if you are using patient data, then you probably need patients to opt in. But again, I am not very concerned about privacy.

Who else should help inform you or decide for you if an ML tool is ethical and sufficiently private?

I am a layperson when it comes to this stuff; I am not a computer scientist. How would I know if privacy were being protected? Ideally, I would be reassured by someone who knows both data science and clinical medicine well enough to make this call. This is another big challenge with adoption: clinicians won’t trust or understand these ML programs right away. Usually with clinical risk scores, there are established studies with simple formulas to review. ML is just so different from this, and it’s hard for any human to understand, let alone those of us who are not computer scientists. I would rely on an expert in both ML and clinical science to help me figure this all out.

Do you trust these ML tool developers to have access to these data? Why or why not?

Probably not. I don’t trust them in the sense that I don’t think they will protect my data when push comes to shove. But I also don’t care if they do things with data; I appreciate my targeted ads compared to the alternative nonsense. It is a social agreement that I have with Big Tech companies: I give you my data, you give me a good experience.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

I don’t even know if the data scientists themselves understand the models. My understanding is that I will never understand ML tools as much as I would like to. I do want to make sure that there is good science behind it and that it works well, but I also appreciate that there is a black box.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

That’s really hard and is a major part of the problem. For something like a triage tool, it is hard to run robust studies: an RCT is challenging to perform, and the results are not reliable because they need to measure longer-term outcomes, and the data and outcomes need to be very clean. Yet with a diagnostic tool, you could do an RCT with one arm that has the ML solution and another with the gold-standard solution. For example, with chest pain you could feed the model vitals and then use coronary catheterization to confirm the outcome of a heart attack.
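
To make that concrete, here is a minimal sketch in Python of how an ML arm’s predictions could be scored against a gold-standard outcome such as catheterization-confirmed heart attack. Every number below is made up for illustration; this is not from any study.

```python
# Sketch of scoring a diagnostic model against a gold-standard outcome.
# The labels and predictions are hypothetical values for illustration only.
import numpy as np
from sklearn.metrics import confusion_matrix

# 1 = heart attack confirmed by coronary catheterization, 0 = ruled out
confirmed = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
# The ML arm's predictions from vitals (hypothetical)
predicted = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])

tn, fp, fn, tp = confusion_matrix(confirmed, predicted).ravel()
sensitivity = tp / (tp + fn)   # how often the model catches confirmed heart attacks
specificity = tn / (tn + fp)   # how often it correctly rules them out
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```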

But the promise of ML is far beyond this. We hope to further medical science in situations when we don’t have gold standard tests to prove something. ML could potentially start to make these new diagnoses that we don’t even understand by grouping people and seeing their trends.

Some ML tools will need to get FDA approval, and the FDA is working on its approach to ML. From a clinician-adoption perspective, I would also want to see RCTs published in well-respected journals. Lastly, sign-off from a medical professional society would be very helpful, but I have a hard time seeing these groups seriously take on evaluating ML programs.

Clinical education

How would clinical education be impacted?

We have had the same medical education for the past 100 years, and that needs to change. First, we need to stop teaching biochemistry. Then we can reimagine how clinicians function and work with ML tools. In a world with full-blown ML, training clinicians would be totally different. I would have to be both a data scientist and a clinician. Our jobs would be all about communicating with patients and communicating with models — being data science and medical science translators. Yet I don’t see this happening anytime soon; it will be a gradual process. First, tools will assist clinicians with small things and give them a little oomph. Then, much later, medical education will change. Simply put, medical practice will change before medical education.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

If you are going to create some ML tool, then it has to integrate with the EHR — that is the center of my life and where all of my patient data reside. Every clinician in America uses an EHR. So if an ML tool doesn’t integrate with the EHR and is a separate window, then I don’t know how anyone will use it. Yes, there are examples of urgent information being communicated to my mobile phone, but that is very rare.

For me, it would not be a problem to take time off to calibrate and implement an ML tool. It would be a harder sell to other clinicians, though, if they didn’t have a strong background in technology. The real trick here is to get buy-in from the hospital, because it would cost a lot of money to have clinicians spend time training on these tools. Regardless, it honestly would be hard to get clinicians to spend a lot of time training on an ML tool. When I think about something similar, like training for a small EHR update, it is hard to persuade them to take even a couple of hours off.

Alternatively, maybe IT folks could help implement the tool. We have IT professionals who spend their time being at the forefront of new technology and integrating it into our integrated delivery network. Again, you need to convince the executives of the hospital to make the IT folks do this. If you are in a smaller setting — like an outpatient private practice — then something like this would be impossible because there is no one around to take time off. They eat what they kill, so they need to see patients all of the time. Yet, if there is great research on how time might be saved and more money could be earned, then this is somewhat more possible.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.