What a Radiation Oncologist thinks about ML in healthcare

A summary of my interview with a Radiation Oncologist. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming interview summaries and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“Much of my time, up to eight hours per week, is spent on detailed interpretation and labeling of images. If I had a reliable ML tool, I could better focus on the acute issues of my patients or treat more people.”

Job Background

I am a Radiation Oncologist at a small private practice. In my clinical practice, I see patients who are newly diagnosed with cancer, decide what treatments they need, and then deliver those treatments. That typically includes radiation planning, where we scan patients and then spend a couple of hours mapping out exactly where the tumor is. After that, we deliver anywhere from one to 44 treatments. I plan to work for another 30 years.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

I haven’t heard much. In my practice of radiation oncology, I have heard that ML could be used to help with contouring — the process of mapping out where tumors are. I suppose that this could be made easier if the ML tool had information about the diagnosis, staging, and general spatial references.
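(An aside from me, not the interviewee: contouring is essentially an image-segmentation problem. The toy Python sketch below illustrates only the core idea, flagging candidate tumor pixels in a synthetic scan with a fixed intensity threshold; real auto-contouring tools use trained 3D segmentation models on CT/MR volumes and, as the interviewee suggests, would also draw on diagnosis and staging information.)

```python
import numpy as np

# Toy "scan": a 2D grid of intensity values with one bright region
# standing in for a tumor. Real contouring is done on 3D CT/MR volumes.
rng = np.random.default_rng(seed=0)
scan = rng.normal(loc=100, scale=10, size=(64, 64))
scan[20:30, 25:35] += 100  # simulated high-intensity lesion

# Naive auto-contour: flag pixels above an intensity threshold and
# report the bounding box a clinician would then verify and refine.
mask = scan > 160
rows, cols = np.nonzero(mask)
print(f"Flagged {mask.sum()} pixels; bounding box: "
      f"rows {rows.min()}-{rows.max()}, cols {cols.min()}-{cols.max()}")
```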

Past and future use

Have you used any ML tools? Would you?

I haven’t, but I would if it were a time saver. Much of my time, up to eight hours per week, is spent on detailed interpretation and labeling of images. If I had a reliable ML tool, I could better focus on the acute issues of my patients or treat more people.

Also, I think dosimetrists would benefit greatly from ML. They are the ones who execute my radiation plans, and it is a very manual process.

Excitement and concerns

What about ML in healthcare is exciting or concerning for you?

I am concerned about these ML tools being used as shortcuts for real clinical medicine. I worry that clinicians would not learn as much.

I am excited by possibly having more unification and standardization. People often call radiation oncology an art form of medicine, but there are indeed things that should and shouldn’t be done. If ML could help bring some consistency, I think that would be great for patients.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

I think as long as predictions are being verified by a clinician, then you can ethically say that you have the knowledge and background to trust what was generated by the ML tool. But if it becomes a black box where you cannot verify things, then there are some ethical concerns. You need clinicians to be able to critically evaluate what is going on.

How should the data be used? Who should or should not have access to it?

If patients are contributing to this for research or commercial use, then they need to be informed. We don’t ask for patients’ consent when building training sets, and maybe we should.

Who else should help inform you or decide for you if an ML tool is ethical and sufficiently private?

Ideally, it would be another clinician who was using it in their clinic.

Do you trust these ML tool developers to have access to these data? Why or why not?

Not really, but it’s hard for them not to. I think with healthcare data, people have an added layer of concern for security. With all the EHR data that I use, there is a lot to be concerned about.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

Very rudimentary. I just want to know what was used to train it.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

I would need to see some sort of clinical trial in which an ML tool was deployed, and I get that it is hard to do a trial like this. I would also want to see findings on whether there are improvements in the quality of care, clinic throughput, and clinician and patient quality of life.

I am sure there would be guidelines produced by our professional society, but I haven’t seen anything yet.

Clinical education

How would clinical education be impacted?

I think there will need to be a way to keep students from leaning on the benefits of ML, so that they can train and learn without it first. I don’t think students should have access to automation tools, so that they can become intelligent clinicians. Even if these ML tools were 100% accurate, clinicians should still be able to critically evaluate what is right and wrong.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

It would be helpful to get support in diagnosing patients, or, even better, in predicting what types of complications they will have during treatment. I would love to know more about how a patient will recover, so I can get ahead of things. I do some of that now, but more guidance would be great.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

Clinicians would have to go through training — learn how the tool is used, learn the pitfalls, and practice in a safe environment with a clinician who has used it before.

Buying process

Where you practice medicine, who all would be involved in making the decision to purchase and use an ML tool?

I think the purchasing decision would start with the physicians in the department, then it would go up to the hospital administration.

What data, references, and promises would they need to learn about to ultimately say yes or no?

I assume that the administration wants to see staff savings, more patients, or lower patient risk.

Also, I would guess that the ML company would share this information and that colleagues from other practices would endorse it. Conferences are also good places to learn about this stuff, since there are booths and peers all over the place.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.