What a Trauma RN thinks about ML in healthcare

A summary of my interview with a Trauma Registered Nurse (RN). This is one of my 18 interviews with clinicians for my MPH capstone (link here) for UC Berkeley’s School of Public Health.

Visit my Building Trust and Adoption in Machine Learning in Healthcare site (link here) for the abridged and full versions of my MPH capstone, as well as upcoming interview summaries and additional research.

Note that this interview is a de-identified summary for ease of reading and privacy. It has been approved for distribution by the interviewee.

“Sometimes the computers actually do know more than we do, so it’s not the worst thing to have.”

Job Background

I am a Registered Nurse in the trauma departments of two hospitals within a health system. I expect that I will work at the patient bedside for my career — probably another 30 years — and not go into administration.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

I have never heard the words “machine learning” or “AI” in the realm of healthcare.

Past and future use

Have you used any ML tools? Would you?

I am not sure if it is ML, but there are pop-ups in our EHR that flash while we see patients. They alert us when something bad is happening or might happen. For example, predicting sepsis is a big focus in healthcare. There is a notification in our EHR that tells us when a patient’s vital signs may suggest sepsis risk, and we refer to that for most of our sepsis questions. Sometimes the computers actually do know more than we do, so it’s not the worst thing to have.

There are also critical vital sign notifications that alert us when things are wrong: either a human entered a value incorrectly, or something is really wrong with the patient. We also have a notification for pneumonia, which takes in both basic demographics and vital signs, then suggests order sets. Again, I don’t know if this is ML.

Sometimes I know the recommendation is wrong, and there are a lot of steps to dismiss the notification, about a minute’s worth, which is a big deal in our trauma unit. Also, while some nurses think these notifications are undermining, I have to say there have been multiple moments when I know they saved patients’ lives.

Notification use is tracked very carefully, and there are consequences for not following them. For example, it looks incriminating when a notification fires, you disagree with it, and the patient dies.
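The interviewee is not sure whether these sepsis alerts are ML, and in practice many EHR sepsis notifications are simple rule-based screens over vital signs and labs. As a hypothetical illustration only, and not any vendor’s actual logic, here is a minimal sketch of a classic SIRS-style screen of the kind that could drive such a pop-up:

```python
# Hypothetical sketch of a rule-based sepsis screen using the classic adult
# SIRS criteria. Real EHR alerts vary by vendor; nothing here reflects any
# specific product's logic.

def sirs_flags(temp_c, heart_rate, resp_rate, wbc_k):
    """Return which SIRS criteria the patient's vitals and labs meet."""
    return {
        "temperature": temp_c > 38.0 or temp_c < 36.0,  # degrees Celsius
        "heart_rate": heart_rate > 90,                  # beats per minute
        "resp_rate": resp_rate > 20,                    # breaths per minute
        "wbc": wbc_k > 12.0 or wbc_k < 4.0,             # thousands of cells/mm^3
    }

def sepsis_alert(temp_c, heart_rate, resp_rate, wbc_k):
    """Fire an alert when two or more SIRS criteria are met."""
    flags = sirs_flags(temp_c, heart_rate, resp_rate, wbc_k)
    return sum(flags.values()) >= 2, flags

# Example: a febrile, tachycardic patient trips the alert.
fired, flags = sepsis_alert(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_k=9.5)
print(fired, flags)  # True; temperature and heart_rate criteria met
```

An ML version would replace these fixed cutoffs with a risk score learned from patient outcomes, which is part of why the triggers can be harder to explain.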

Excitement and concerns

What about ML in healthcare is exciting for you? What is concerning?

These EHR notifications are great, and if ML can do more things like this, then I am very excited for ML in healthcare. Older clinicians just don’t stay as up to date on all of the clinical research and best practices. And some less skilled clinicians simply need these assists.

It’s hard for me to say what I am worried about, since I don’t see it that much. Or maybe I do see it, but I don’t know it. I am a little concerned about the computer giving us recommendations when I know it can’t pick up on everything that we see, hear, and think. For example, it is frustrating when a notification fires because the system flags an increase in a patient’s breathing rate as clinically relevant, while I am standing in the room and can see it is just because the patient is crying. It is also concerning to rely on a computer to pick up things that a human should be capable of catching. People may relax their clinical judgment if they decide to depend on the computer for everything.

I don’t know how I would feel as a patient going into the ER and being triaged by a computer. There is an innate human quality of wanting to be cared for by a human. If there were a system that could triage a patient without a human seeing them, people would be upset. When people are sick, they want an emotional connection with a human. They feel like they are the sickest person in the world and want someone to pay attention to them. If they saw a robot, they would feel they were not being taken care of.

ML knowledge and model explainability

At what level do you need to understand how the model makes its prediction?

Not that much. We use this notification system without knowing exactly what triggers the alerts, and whenever we ask, no one tells us, which is crazy. I asked the Tech Educator, and they didn’t know! But honestly, I haven’t pushed harder, so I clearly don’t worry that much.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: A world-class technology company developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now and why?

We have a system that helps triage patients in the ER. It starts with simple questions: Will this person die? Are their vitals unstable? Are they going to need more than one clinical resource? And so on… This index is actually pretty confusing, and it is hard to label someone as needing a high or very high amount of clinical resources. You could see a lot more patients a lot faster if you could more accurately choose between the different levels of triage. If there were a tool that more effectively figured out the answers to the questions on the index, that would be great. Also, most ERs don’t want a new nurse doing triage, so we have to depend on the limited number of very experienced nurses taking on these roles.

Also, a mass casualty triage tool would be very helpful. When there is a mass casualty event, we need to be more mobile and manage a huge influx of people. There is a lot of opportunity here for ML because in those moments, everyone needs to act like a seasoned nurse.
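The index described above sounds like the Emergency Severity Index (ESI), a five-level ER triage algorithm that asks roughly those questions in order. As a rough sketch of that decision structure, with illustrative names and no claim to being a clinical tool, the levels might be assigned like this:

```python
# Rough sketch of an ESI-style five-level triage decision, following the
# questions the interviewee describes. Illustrative only; real triage
# involves far more clinical judgment than a handful of booleans.

def triage_level(needs_lifesaving_intervention: bool,
                 high_risk_or_unstable: bool,
                 expected_resources: int,
                 danger_zone_vitals: bool) -> int:
    """Return a triage level from 1 (most acute) to 5 (least acute)."""
    if needs_lifesaving_intervention:    # "Will this person die?"
        return 1
    if high_risk_or_unstable:            # "Are their vitals unstable?"
        return 2
    if expected_resources >= 2:          # "More than one clinical resource?"
        return 2 if danger_zone_vitals else 3
    return 4 if expected_resources == 1 else 5

# Example: a stable patient likely needing labs plus imaging lands at level 3.
print(triage_level(False, False, expected_resources=2, danger_zone_vitals=False))
```

A triage tool along the lines suggested here would aim to answer those inputs, especially the expected resource count, more accurately than a nurse-by-nurse judgment call.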

--

Harry Goldberg
Building Trust and Adoption in Machine Learning in Healthcare

Beyond healthcare ML research, I spend time as a UC Berkeley MBA/MPH, WEF Global Shaper, Instant Pot & sous vide lover, yoga & meditation follower, and fiancé.