What an Emergency Medicine Physician thinks about ML in healthcare

A summary of my interview with an Emergency Medicine Physician. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“I also think about the ethical questions around how these ML tools get deployed in an equitable way that is usable for all patients.”

Job Background

I am an Emergency Medicine Physician at a health system. In emergency medicine, we see the first few hours of most patients’ experiences in the healthcare system. We are required by law to screen every person who walks in our door for medical needs, stabilize them, and then figure out what the next steps are. This means that I see the full gamut of people, from wealthy to poor, from sick to — unfortunately — not that sick. I think I will work for another 35 years.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

I know that ML algorithms are being developed in various parts of medical care. I am less likely to trust ML algorithms because we cannot figure out what they are actually doing, there are no clinical guidelines for them, and there are no trials that I am aware of. What I like about traditional clinical decision support tools is that you can clearly tell how they come to their recommendations.

Past and future use

Have you used any ML tools? Would you?

As far as I know, I have not. I would only use ML depending on the context. If an ML tool became a standard of care with verified trials and studies, then sure.

Excitement and concerns

What about ML in healthcare is concerning or exciting for you?

From an exciting perspective, I think there is a lot of opportunity to build ML tools where there are a lot of data. For example, each CT scan is a grouping of 3,000 images with many pixels per image. That is a ton of data. I see a chance for automation there, and that is exciting. However, I see other situations where the data are bad, and companies intentionally or unintentionally push bad ML tools as a result.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

That is a good question. In medicine, we say, “Do no harm.” So, I think there are ethical considerations if these ML tools push clinicians to treat patients more aggressively. I also think about the ethical questions around how these ML tools get deployed in an equitable way that is usable for all patients. And I worry about the ethics of what data are being used. We need to make sure that we don’t repeat the mistake we are making in clinical trials, where the people being tested are disproportionately white men while the real patients are more diverse. So, we have to be thinking about that. The list goes on: What about how insurance companies are incentivized, and how they are incentivizing all of this?

How does privacy fit into all of this?

I don’t believe in privacy. I believe it should exist in principle, but I don’t believe that it actually does anymore. We are in a post-privacy world.

How should the data be used? Who should or should not have access to it?

Personally, I think data should be democratized, but in a safe way. If insurance companies are paying for the data to be created — via tests or clinical visits — then they should have access to it. If the data are about a patient, which they always are, then the patient should have access to them as well.

Who else should help inform you or decide for you if an ML tool is ethical and sufficiently private?

I really don’t know. The people who are developing the ML tools are doing it for profit, so it would be hard to get good answers from them. Why would developers police each other when there is money to be made? So, we would need some other third party to help out. I also think we need to seriously consider the patient’s voice to make sure their preferences are heard.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

I would probably want to look at the original dataset to make sure that it is representative of the patients I care for. I don’t need to know how the model works exactly, just that the data make sense. I would also need to see that it was reliably reproduced and could be better than a human.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

I would need to see some trustworthy supervisory body weigh in — for example, the American Medical Association or FDA — to deem it reliable. This would also give me some legal protection when using it.

Clinical education

How would clinical education be impacted?

In the same way that other technology, like the EHR or CT scans, impacted education. It will be a tool that we use and need to learn how to use.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

Anything that has access to tons of data is a good use case. For example, diagnosing from CT scans or MRIs. I also think simplifying clinical trials would be a good space to build products in. Any cheap technology that can be deployed in underserved areas would also be very valuable. Lastly, virtual visits and maternal care are also good use cases for ML.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

You need to know what care resources are available before you think about implementing some type of ML tool that directs patients to these resources.

Buying process

What data, references, and promises would hospitals need to learn about to ultimately say yes or no?

Hospitals make decisions for certain reasons, oftentimes financial. If an ML tool were to automate a step within a specific DRG, then that will reduce cost to the hospital and keep revenue the same. Win. But if an ML tool were to automate a separately billable procedure, and the automated version is not billable, then you lose money. Lose, but not necessarily so. If your ML tool is automating a low-reimbursement activity and the time savings can be spent on higher-reimbursement activities, that is great for hospitals. So, these are the types of questions that hospitals will be thinking about. They also want to know how workflows may change and how the ML tool will impact fee-for-service versus value-based care patients.


Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.