What a Urologist thinks about ML in healthcare

A summary of my interview with a Urologist. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“I think if systemwide diagnostic accuracy can increase, then that will be more important to humankind than what other changes happen to the profession.”

Job Background

I am a Urologist, and I focus on healthcare administration as an executive at a pediatric health system. I now focus my time on strategic planning and operational improvement. I figure that I have 10 years left until retirement.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

There is a lot written about ML in healthcare, and I actually was part of a hospital that built an ML algorithm.

Past and future use

Have you used any ML tools? Would you?

I haven’t used any at my pediatric health system, but I have no doubt that someone here is building something for us. We generate so much data — like genomic, clinical, and population health data — so we are a great place to build ML algorithms.

Excitement and concerns

What about ML in healthcare is concerning or exciting for you? What else is exciting or concerning for you?

Concern and excitement depend on the use case; are we talking about individual diagnoses or population health triage? I see a lot of power in using ML to focus limited resources, which is the ultimate challenge in healthcare. ML, like anything else, will have unintended consequences, but that isn’t a reason to avoid it. It is just a reason to be careful and try to anticipate those issues as we go along. If we could more efficiently deploy limited healthcare resources, there would be not only a huge financial ROI but also a huge quality ROI.

On the individual diagnoses side, it may not be correct all of the time, because the most likely diagnosis is only the correct diagnosis a certain percent of the time. However, these ML algorithms can bring ideas to clinicians who are juggling many things in the busy world of medicine. Yet, we want to be careful not to rely solely on these models. In the end, I think the upside outweighs the downside.

I do believe that there are certain segments of clinicians whose jobs will be replaced by ML. There is some good evidence that the patterns for detecting disease, such as breast cancer, can be recognized by ML with an accuracy that eclipses that of many radiologists. This raises questions, and even concerns for me, about what it means for the clinical profession as a whole and what it will look like in 20 years. We will always need a human to synthesize and communicate information coming out of these models to their patients, because patients are not medical experts and the information needs compassion when communicated. I think if systemwide diagnostic accuracy can increase, then that will be more important to humankind than what other changes happen to the profession. With that being said, we need to be very mindful of how we redeploy these highly educated and highly skilled individuals.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

That is a big question. I think about nightmare scenarios where ML algorithms are being used to make decisions about end-of-life resources. To me that would cross an ethical boundary. We are not ready for decisions on rationing critical care.

How does privacy fit into all of this?

Privacy is not a specific issue for ML; it is an issue for all digital technologies, and I have been wrestling with it my entire career. We see this issue in all parts of IT, data science, and ML. We have built huge registries — which are quite useful for ML research. However, we need to think hard about who has access to these systems, and I have been doing so for much of my career. Maybe ML exacerbates privacy concerns, but I admittedly don’t know enough about ML to know if that is true.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

The most important thing to my clinician colleagues and me is that we understand what the inputs are. When we built that readmission ML tool, we had 10,000 types of data; but at the end, we found that 23 were the key drivers of decision making. The front-line clinician doesn’t need to understand the math and CS behind the ML models, but they do need to understand what data are the drivers of the predictions. They need to make sure that those drivers actually make sense.
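To make the idea concrete, here is a minimal, hypothetical sketch of the kind of driver analysis described above: training a model on many candidate inputs and ranking which ones actually drive its predictions, so clinicians can sanity-check them. This is a generic illustration on synthetic data, not the readmission tool the interviewee built.

```python
# Hypothetical sketch: find which inputs actually drive a model's
# predictions. Synthetic data stands in for real clinical inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 500, 50  # many candidate inputs, few true drivers

X = rng.normal(size=(n_samples, n_features))
# Only features 0, 1, and 2 influence this synthetic outcome.
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by importance; clinicians would review the top drivers
# to confirm they make clinical sense.
ranked = np.argsort(model.feature_importances_)[::-1]
print("Top-ranked input indices:", ranked[:5])
```

In practice, the ranked list is the artifact clinicians review: if a top driver makes no clinical sense, that is a signal to revisit the data or the model before deployment.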

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see?

It is going to depend on the purpose. If this ML tool is for pure research, then there is less external validation needed. But on the clinical application side, we would need to see ROI demonstrated at multiple other sites first. Lastly, if we are co-developing something, then we will run our status quo process in parallel to confirm that the ML tool works.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company develops an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now and why?

I think there are a few interesting areas. First, in the inpatient space, many people have been working on rapid response efforts, such as sepsis detection. It is a process that alerts clinicians to patients who show subtle but identifiable signs of clinical decline and need rapid help before they bottom out. I have seen some successful approaches here, but nothing is perfect. However, it is unfair to expect perfection, since ML algorithms are significantly improving on the status quo. We are searching for improvements in sensitivity and specificity.

Second, in the population health space, many are thinking through ways of better deploying limited resources — something I am continually going back to. Again, there is a huge ROI opportunity here, not just financial but also quality of care and experience. I spend much of my time thinking about our clinician network and how best to organize it to better care for our patients.

Lastly, in the prognostic space, I think there is a lot of opportunity to predict whether a patient will later be diagnosed with a disease. Given that we are working with pediatric patients, it is important for the overall healthcare industry to use preventative medicine well with these kids so that they do not become sick adults. These types of tools would be useful for both providers and payers.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

I do not spend much time in the weeds of implementing technology; however, I think the technical team needs to have someone on the clinical team by their side. This is not just for the interface and workflow but also for the model development itself. Clinicians can really help data scientists figure out what data should be used and whether the predictions make sense. There should be a clinical-technical partnership from the start, or else you will have a lot of rework.

Buying process

Where you practice medicine, who all would be involved in making the decision to purchase and use an ML tool?

There are a lot of different people who are involved with our decision making. I can think of people like our CIO, CTO, CMO, and our CMO’s data science team. We also have a significant research institute where a lot of data science work is going on, and they have their own research budget.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.