What a Radiologist thinks about ML in healthcare

A summary of my interview with a Radiologist. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“It has to have a very clear value proposition. What is the ROI going to be? Will there be a meaningful return? These companies tell me how we will practice better, but I also need to know how we will save or make money. Sadly, the system doesn’t incentivize us to do better, it incentivizes us to work faster.”

Job Background

I am a Radiologist at a large radiology practice. I am also an executive administrator within the group, working in many facets of the business, including operations, PR, communications, public policy, and advocacy. As a clinician, I read and interpret scans for referring physicians, helping with diagnoses and treatment planning. I plan to work for another 20 years.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

Oh, I have heard a lot. ML is a big deal in radiology and even considered controversial in some spaces. A famous VC said that we should stop training radiologists altogether, which of course ruffled some feathers. However, people have been saying forever that radiology would be taken over by machines, and we can clearly see that that hasn’t come to pass. My radiology practice does have a team looking into ML.

Past and future use

Have you used any ML tools? Would you?

Yes, I have. There are tools today that give predictions on interpretations, which are pretty cool and somewhat useful. But I am more excited about how ML will help distill relevant information for me and my colleagues. We as radiologists see incidental findings — famously called “incidentalomas” — and need to follow up on them. For example, if we are looking at the kidney and see something that we think might be cancer, then we need to decide whether to biopsy now, follow up later, or let it go. But there are consensus statements, or best practices, on how to evaluate these types of things. No one has time to stay up to date on everything, so our practice built a program that shows radiologists the relevant consensus statements to guide what to do for their patients.

Excitement and concerns

What about ML in healthcare is exciting or concerning for you?

It is neither concerning nor exciting to me; it is reality. We need to accept that this is happening. I like the comment that someone made at a conference: “ML will not replace radiologists, but instead radiologists who use AI will replace those who don’t.” If I had access to a tool that would make me better and faster, then I would use it.

The fact of the matter is that humans are bad at remembering everything, so we need ML tools to make us better. However, humans are really good at understanding the broad context of things, whereas ML tools are narrow. For example, if an ML recommendation is to follow up in 6 months or go for surgery, that may be right for most patients. However, the model probably won’t see the clinical context of a patient who has terminal cancer. So, in this situation, we see why we still need the radiologist and oncologist to be in the driver’s seat.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

When I think of ethics, I think about who owns the data. This can be controversial.

Who else should help inform you or decide for you if an ML tool is ethical and sufficiently private?

I have deferred my ethical questions to others. There are bioethicists we work with who do the important thinking on this.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

There is a bell curve on this. I am on the bottom end of curiosity, and I think most radiologists are in my camp. I just care that these ML tools get to the right answer for my patients. So, this is all to say that the “black box” is not as big an issue for me, though of course it is for some others, like regulators.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

Something has to be FDA approved or cleared. Also, I think we are going to want to see large academic medical centers as first users, then I think big health systems and smaller ones will be followers. That is what happens with most medical technologies — start in academic centers, then go out into the community.

I do think it is possible for professional societies, such as the American College of Radiology, to get involved with this, but I am not sure. There are a lot of questions about how reimbursement will work, so these societies will want to be involved to clarify things.

Clinical education

How would clinical education be impacted?

This goes back to another famous quote about how we overestimate things in the short term and underestimate them in the long term. I think radiology will be very different in the long term. Will radiology be here in 10 years? Absolutely. Will it look meaningfully different? Yes. But in the next two years, I am not sure how different it will be. We are in the Gartner hype cycle at the moment. I am not sure if we are at the peak or in the trough of disillusionment, but regardless, there is much more progress and change ahead of us.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

I think you will see a lot of ML tools running in the background of healthcare — checking our clinical work, optimizing coding and billing, and other things. I think quality improvement is a key area for ML, where we can use these tools almost to over-read for our radiologists, make sure things are going well at a population level, and spot places to intervene where needed.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

I can tell you how we implemented ML tools in our practice. We did pilots at certain sites, then sequentially rolled the tools out to other sites. This helps with fixing bugs, building clinician cultural support, and simplifying clinician education. We would never do a big-bang rollout; instead we focused on one site at a time, saw value and ROI, then continued to more. We were dynamic and changed the velocity as we saw good or bad signs.

Buying process

Where you practice medicine, who would be involved in making the decision to purchase and use an ML tool?

We are an exception, not the rule. I hear that the median radiology practice is nine radiologists. Then there are regional practices with 100 to 200 radiologists. We are big enough to have a sophisticated administrative team with people who think about technology and data science. We even acqui-hired a whole development team, with whom we were co-developing an ML tool. But again, we are the exception, not the rule. Think about a 20-person practice: you probably have three radiologists interested in ML, but they don’t know much. These people work 7:00am to 5:00pm, so they spend the rest of their time learning what they can about ML. So, it is up to those three to make the purchasing recommendation. But in those practices, they probably don’t have the real power to make a decision; it is probably an older person who spent their career doing what they have always done. So yeah, it is really hard to sell into radiology practices. And this talk about replacement doesn’t help.

What data, references, and promises would they need to learn about to ultimately say yes or no?

I need to know how we will get paid differently. If we buy new technology that makes us 20% faster, then our RVUs will go down, and we will make about the same amount of money. So now we make the same money, but we spent a bunch on this tool. How does that make any financial sense? It has to have a very clear value proposition. What is the ROI going to be? Will there be a meaningful return? These companies tell me how we will practice better, but I also need to know how we will save or make money. Sadly, the system doesn’t incentivize us to do better; it incentivizes us to work faster.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.