Omnicell electronic medication cabinet (Source: https://www.omnicell.com/products/omnicell-xt-automated-dispensing-cabinets)

How responsible are UX researchers for real-life user experiences and their consequences?

Janine Kim
Published in Bootcamp · Mar 25, 2022

A news article this week from NPR caught my eye: a nurse was charged with reckless homicide and felony abuse after making an error that resulted in a patient’s death. They gave the wrong drug, vecuronium bromide instead of Versed.

They used an Automated Dispensing Cabinet (ADC), an electronic medication cabinet that stays locked until the nurse enters data (e.g. user identification, name of the item requested) on the digital screen. Once the data is entered, only the selected drawer(s) open, giving the nurse access to the selected items. Each transaction is recorded electronically. (Information source here.)

In this case, the nurse initiated a system override by typing ‘VE’ on the screen, and vecuronium popped up instead of Versed because the search matches generic names and Versed is a brand name; its generic name is midazolam. The nurse saw five error messages and overrode each one (see below).

Source: https://www.documentcloud.org/documents/6785652-RaDonda-Vaught-DA-Discovery
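To make the failure mode concrete, here is a minimal sketch of a formulary search keyed only on generic names. It is hypothetical Python with invented names, not Omnicell’s actual implementation, but it shows how two typed letters can surface a paralytic while the intended sedative never appears:

```python
# Hypothetical sketch of the failure mode: an override search indexed
# only on generic drug names. Illustrative, not the vendor's real code.

FORMULARY = {
    # generic name -> brand name (if any)
    "vecuronium bromide": None,   # a paralytic agent
    "midazolam": "Versed",        # the sedative the nurse intended
}

def search_override(prefix: str) -> list[str]:
    """Return generic names that start with the typed prefix."""
    p = prefix.lower()
    return [name for name in FORMULARY if name.startswith(p)]

print(search_override("VE"))  # ['vecuronium bromide'] -- Versed never matches
print(search_override("MI"))  # ['midazolam'] -- only its generic name does
```

Because the index never contains the string “Versed”, a nurse searching by the brand name they know gets a confident-looking match on an entirely different drug.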

A verdict has not yet been reached, but the investigative report (see below) shows the nurse believed the error was their own fault, that they “fucked up”.

Source: https://www.documentcloud.org/documents/6785652-RaDonda-Vaught-DA-Discovery

That last sentence made me think of a common phrase I hear at work: “It’s never the user’s fault.”

I couldn’t help but think: if the system had been designed better for its users, the nurses, would the outcome have been different? Would a person not have died? Would the nurse never have been charged and never have had their license revoked?

To be sure, I don’t know the answer in this particular case.

But it struck me that none of the articles I found about this case raised the possibility that the product itself, the electronic medication cabinet, bore any responsibility. Every article alluded only to the possible fault of the nurse and the hospital.

The manufacturers of the electronic medication cabinet and its connected display system almost certainly conducted user testing before release, because medical hardware and software must comply with IEC (International Electrotechnical Commission) standards and demonstrate that safety risks have been addressed before a product is approved for sale. They clearly foresaw a safety issue like this one: as the court documents above show, the system had built-in automatic pop-up screens and warning messages.

However, I wonder whether the researchers did enough to ensure the designers, PMs, and engineers on their team could fully empathize with the end users, the nurses. According to medical experts, it is “common for nurses to use an override to obtain medication in a hospital”. If this is a common workaround, I wonder whether the researchers probed users enough about the risk scenarios surrounding it. Did they conduct contextual inquiries, following nurses through their work to ask in the moment about the steps of a medication override? Did they study enough users to uncover worst-case scenarios like Vaught’s? Did they do foundational research, conducting in-depth interviews with nurses and then creating user personas, which are meant to help designers empathize with their end users?

Did the researchers keep in mind ‘alert fatigue’, the tendency of users to dismiss, or simply not mentally register, alerts and warning messages when they are exposed to them too frequently? To my knowledge, the IEC requirements don’t address this phenomenon. If they did, would this tragedy have occurred? Are we punishing the user alone when the responsibility should be shared?

Then I wonder whether I am doing enough in my own job to ensure the designers, PMs, and engineers on my team can fully empathize with end users’ real-life constraints and the implications those constraints have for how our products are used. My research doesn’t require contextual inquiries, although we do in-person testing with real machines. We have user personas, but I haven’t seen any designers refer to them in their work process.

In response to this case, the manufacturers (unnamed in the article) modified the software so that nurses must type in five letters, not two, before drugs pop up. However, not all hospitals have implemented this safeguard, which means this scenario could very well happen again.
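That safeguard is easy to picture in code. Extending the hypothetical sketch above (again, invented names, not the vendors’ actual software), a minimum prefix length simply returns no matches until enough characters have been typed to be deliberate:

```python
# Hypothetical version of the post-incident safeguard: require at least
# five typed characters before any search results appear.

FORMULARY = ["vecuronium bromide", "midazolam"]  # generic names only
MIN_OVERRIDE_PREFIX = 5  # reportedly raised from two letters to five

def search_override_guarded(prefix: str) -> list[str]:
    """Return matches only once the prefix is long enough."""
    if len(prefix) < MIN_OVERRIDE_PREFIX:
        return []                      # 'VE' now yields nothing at all
    p = prefix.lower()
    return [name for name in FORMULARY if name.startswith(p)]

print(search_override_guarded("VE"))     # []
print(search_override_guarded("vecur"))  # ['vecuronium bromide']
```

The design choice is blunt but effective: it trades a little typing speed for a much smaller chance that a two-letter guess lands on a lethal drug.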

And in response to this case, as a UX researcher, I know the serious implications my work could have for real patients, radiation therapists, and others. I, along with the fellow designers, engineers, and PMs responsible for the overall user experience, will probably never be held legally responsible (as Vaught was) for a bad user experience, as long as our product passed the tests designed to meet IEC regulations. But is it right that the end user is held responsible, and the makers of the user experience are not?
