What a Cardiologist thinks about ML in healthcare

A summary of my interview with a Cardiologist. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“I think it is a bad idea for young clinicians to use ML. There are a lot of subtleties that exist, and if you use ML, then you don’t get the knowledge or the art of medicine.”

Job Background

I am a Cardiologist at an academic medical center. Beyond my 10% of time teaching, I take care of patients in the hospital and clinic. I diagnose patients through interpreting nuclear cardiology studies, echocardiography, and vascular disease studies. Most of the diagnoses are things like coronary artery disease, arrhythmias, congestive heart failure, and hypercholesterolemia. I plan to work for about 13 more years.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

I haven’t heard a whole lot. I think they want to keep the details private so that they can more easily replace us! I am kidding about that. In all seriousness, I don’t think that ML will replace clinicians anytime soon.

Past and future use

Have you used any ML tools? Would you?

I do use a voice dictation tool on a daily basis, which converts my voice into text and sends it to the EHR. From a clinical perspective, I haven’t used anything to my knowledge, but I would be open to it if it helped.

Excitement and concerns

What about ML in healthcare is exciting or concerning for you?

I am excited about it decreasing medical errors. Humans make plenty of errors, so a machine may be able to improve care. On the other hand, I am skeptical about the data used by these ML tools. Data are often wrong, so bad data in means bad predictions out.

I think another exciting part of ML is its ability to alert clinicians about potential diagnoses and treatments that may have been overlooked. A lot of formulas and their results are buried deep inside the EHR, so we miss them sometimes. Having a smart notification would help.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

I never thought about ethics being tied to ML. I understand it to be some algorithm that delivers predictions once you put data in. So, I am not sure where the ethical issues are. I guess when you think about privacy concerns, then there may be some issues; but that is true about all digital technology. In some way, using ML tools would be more ethical because it is a machine and not a human with bias.

Who else should help inform you or decide for you if an ML tool is ethical and sufficiently private?

When you work for a large academic medical center like mine, then the decision is made by some central board of reviewers.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

I am not sure. We use some apps and formulas to help predict cardiovascular risk. I don’t know all the formulas running in the background, and that is a little disconcerting, but I still use them. It would probably be the same with ML.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

The claims would have to be substantiated by data. We already use many formulas to predict the risk versus the benefit of treatments. So, we would have to see if these algorithms are validated by either a study or by a consensus of experts.

There are already a lot of guidelines for diseases. These are formed by panels of experts who review all of the data in order to simplify the findings for other clinicians. So, maybe a consensus panel would create a guideline for the use of an ML tool. There is the American College of Cardiology, which works with the American Heart Association, and there is also the European Society of Cardiology. Those societies cover the majority of the developed world.

Clinical education

How would clinical education be impacted?

I think it is a bad idea for young clinicians to use ML. There are a lot of subtleties that exist, and if you use ML, then you don’t get the knowledge or the art of medicine. So maybe ML would be good for experienced clinicians, but not those going through early medical education.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

The big issue in building any good ML tool is access to all of the data. I am no IT expert, but I know that there are separate computer systems for the labs, clinical notes, bills, pharmacy scripts, and scans. If we could get a lot of data together, then I think there is a lot of value in medication reconciliation and patient triage.

Implementation

When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

I think it needs to be integrated with the existing systems and EHRs that clinicians use already. But I know there are many different EHRs across the country, so that would be a hard thing to do.

Buying process

What data, references, and promises would they need to learn about to ultimately say yes or no?

Our academic medical center would need to see that the ML tool was cost-effective. It also needs to be user-friendly. Lastly, a lot of these types of decisions are made with consideration for insurance plans that are trying to save money by making people healthier. So, you need to see if it can do that. And yes, it would need to improve the quality of care.


Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.