What a Dermatologist thinks about ML in healthcare

A summary of my interview with a Dermatologist. This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone as well as upcoming summaries of interviews and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“I worry that ML will be used by people who don’t have the training to confirm the model’s output.”

Job Background

I am a practicing Dermatologist in a multi-specialty practice. My training is as an MD/PhD. I think I will be in direct clinical practice for about 10 more years. After that, I hope to do something else with my career that has an impact on a larger population, such as administration, consulting, or startups.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

AI is currently too broad a term, and ML is a subset of AI. As applied to medicine, ML can help uncover patterns in data that are beyond what standard human evaluation can interpret.

Past and future use

Have you used any ML tools? Would you?

I have not directly used any, and I don’t think current ML capabilities are there yet.

Excitement and concerns

What about ML in healthcare is concerning or exciting for you? What else is exciting or concerning for you?

I feel that once we reach broad adoption of ML, we will be acquiring a lot more data. Just digitally capturing data in medicine has not been done well, so the first step is all about digitizing information. That, separate from ML, is exciting to me.

I worry that ML will be used by people who don’t have the training to confirm the model’s output. In medicine, there is a great push to move away from physicians and toward NPs and PAs, who require less training and experience. Across the board, we are starting to see diagnostic performance going down. So, my concern is that ML will be used to expand the scope of clinicians — even if it is expanding the “license” of PCPs into that of specialists — and these clinicians won’t be able to make effective judgments for patients.

Back on the exciting side, I am hoping that ML will make practicing medicine more efficient — this could mean that I ultimately see more complicated cases and possibly spend more time with patients. Today, many of the messages that my patients send me are actually triaged by Medical Assistants, so I only need to see the ones that absolutely need my attention. That could be done by an ML program in the future.

I am also excited about patient-facing diagnostic tools. At first pass, people see that as scary; however, inferior ones, such as the major search engines, are already being used all the time.

Ethics and privacy

How does privacy fit into all of this?

Privacy creates an issue for furthering ML research. It has led to a proliferation of small datasets about people, which in turn makes the work of ML researchers harder and their results possibly more biased. We will need more shared and open datasets that can be relevant to larger populations, but that comes into conflict with privacy and control.

How should the data be used? Who should or should not have access to it?

That’s a tricky question. I think there is a big push for patient-owned data, which has not been seen in medicine. Patients have access to their information, but it is hard to get. I think we will need to trade off some privacy for convenience, and I don’t know how I feel about that. I do like mapping apps, but the app then knows where I am at all times. Generally speaking, people should have to consent to these things, and data need to be de-identified. One other interesting question is whether the person providing the data should be compensated for it.

Do you trust these ML tool developers to have access to these data? Why or why not?

Yes, unless those data start being used in ways that directly harm people’s lives. Our banks have data about us. There is no data privacy in China. So, it is okay with me if other companies use healthcare data to improve the health of others.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

It depends on what I would be using it for. In medicine, there are a lot of things that we use without knowing why they work. However, we always try to continue understanding the biological mechanism. We as people like logical stories for phenomena, and we like to tell patients why a medicine or treatment works, even if they don’t quite understand it.

But for ML, it is difficult to extract all of that information. We will be basing decisions on outcomes — simply, whether it works or not. When you are doing ML at the population level, it is okay to think about outcomes instead of mechanisms. But when you zoom in to the individual level, where there is natural variation, we need to explain the mechanisms to patients, their families, and sometimes even to lawyers.

I think about it this way: human clinicians make mistakes, and human drivers crash cars. Autonomous cars may crash too, but we want stories as to why these crashes happened. The same goes for medicine.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see? What types of government and/or non-government institutions would play a role?

It would need to at least match the diagnostic gold standard for a similar population with a large cohort. Initially, there can be no chance that it would miss something. False negatives are really not okay as something like this gets rolled out, especially for high-risk cases. This will differ by specialty. For dermatology, there are very few rashes that have life-threatening outcomes. These can be quickly triaged out with ML in a way that reduces both pain and cost to the system. But for growths, there are higher medical risks, so ML is probably far away from being used.

I would also want to think about how insurance companies would bless these ML programs and consider reimbursements. ML programs that automate select tasks make a lot of sense in a value-based care world, but that is at least 10 years away.

Lastly, there would probably be a certification created for things like this beyond the FDA — maybe something from medical societies. Certifications mean money for the certifying body. I had to pay a lot of money to get a lot of certifications in order to even touch a patient. So, it is very likely that these same organizations would get in the game of certifying ML tools.

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.