How to Make Hospital Tech Much, Much Safer

We identified the root causes of Pablo Garcia’s 39-fold overdose — and ways to avoid them next time.

Robert Wachter, MD
Published in Backchannel · 12 min read · Apr 3, 2015


This is part 5 of The Overdose. Read part 1, part 2, part 3 and part 4.

I first learned of Pablo Garcia’s 39-fold overdose of an antibiotic at a meeting at my hospital, held on July 26, 2013, a few weeks after the error itself. When I heard that the inciting event was a simple oversight, I was concerned, but not overly so.

A physician had placed an electronic order for Pablo's antibiotic, Septra, but had failed to notice that the order screen was set to calculate the amount of drug based on the weight of the patient, not to accept the total milligrams intended for the whole dose. The software was set to expect a dose in mg/kg, but the doctor assumed it was set for mg. So when the clinician entered the total dose, 160, the computer multiplied that number by Pablo's weight.
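To see how a single unit assumption cascades, here is a back-of-the-envelope sketch. The tablet strength and the roughly 38.5 kg weight are my assumptions, chosen only to be consistent with the 38½ pills and the roughly 39-fold overdose described in this series, not figures taken from the hospital's review.

```python
# Illustrative arithmetic only: the tablet strength (160 mg of trimethoprim per
# double-strength Septra tablet) and the patient weight (~38.5 kg) are assumptions
# chosen to match the 38-and-a-half pills described below, not figures from the RCA.

WEIGHT_KG = 38.5        # assumed patient weight
TABLET_MG = 160         # assumed strength of one double-strength tablet

entered_value = 160     # the clinician typed the intended total dose, in mg...

# ...but the order screen interpreted the number as mg per kg of body weight:
total_dose_mg = entered_value * WEIGHT_KG        # 6,160 mg
tablets = total_dose_mg / TABLET_MG              # 38.5 tablets
overdose_factor = total_dose_mg / entered_value  # 38.5-fold

print(f"{total_dose_mg:.0f} mg -> {tablets} tablets instead of 1 "
      f"(about a {overdose_factor}-fold overdose)")
```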

This kind of simple oversight is all too familiar to me, both as a clinician and a student of patient safety. Surely, I thought, as the facts of the case began to unspool at the meeting, the combination of smart people and modern technology would catch the problem before it reached the patient, making it a near miss.

But it was soon clear that no person — and no computer — had made such a catch. First, the doctor bypassed the computerized alert. Then the pharmacist missed the error and a different alert. Uh oh. And then the pharmacy robot dutifully fetched more than three dozen pills. Finally came the denouement: a young nurse, working on an unfamiliar floor, too busy and intimidated to speak up and falsely reassured by the dazzling technology, actually gave a 16-year-old boy 38½ pills rather than the single tablet he was supposed to get.

By now, my jaw was somewhere on the floor. I was amazed that this could happen in one of America’s top hospitals, equipped with the best healthcare information technology that money can buy.

It was then that I knew I needed to write a book about technology in medicine, and that the book had to have the word “Harm” somewhere in the title.

Root cause analysis, or RCA, is the technique we use in healthcare to analyze errors in depth. Although RCAs have been a staple of industries such as commercial aviation (it's what National Transportation Safety Board investigators do after a plane crash) and the military for generations, we in medicine have only been conducting them for the past 15 years or so.

In keeping with James Reason’s Swiss cheese model of errors, the goal of an RCA is to concentrate on system flaws. Reason’s insight, drawn mainly from studying errors outside of healthcare, was that trying to prevent mistakes by admonishing people to be more careful is unproductive and largely futile, akin to trying to sidestep the law of gravity.

Reason’s model recognizes that most errors are committed by good, careful people, and to make things safer, we need to focus instead on the protective layers — which, when working correctly, block human glitches from causing harm.

These layers all have inevitable gaps, which reminded Reason of the holes in stacked slices of Swiss cheese. Harm occurs only when the holes in successive layers line up and an error passes clean through — whether the result is a crashing plane, a nuclear power plant meltdown, the failure to catch the 19 terrorists in the days before 9/11, or a medical mistake. The goal of a safety program, then, is to prevent the holes in the cheese from lining up.

After sitting through a few RCAs, people tend to gravitate to a favorite fix. Some see most medical errors as communication problems, which leads them to suggest changes that will improve teamwork and information exchange. Others focus on the workforce — they typically feel that overwhelmed, or distracted, or tired clinicians are at the root of many errors. Still others see problems as failures of leadership, or of training.

Until computers entered the world of healthcare, most of us viewed information technology as a solution, and a powerful one at that.

Take the problem of an error due to a doctor’s indecipherable handwriting. The solution seemed obvious: computerized prescribing. An error due to a mistaken decimal point or mg vs. mg/kg dosing mix-up: computerized alerts. An error due to the nurse giving the wrong medication to a patient: bar coding.

Though computers certainly can be a solution to many kinds of medical mistakes, they can also be a cause. In January 2015, a team of Harvard investigators published the results of a study of 1.04 million errors reported to a large medication error database between 2003 and 2010. They found that 63,040, fully 6 percent of all errors, were related to problems with computerized prescribing.

The error that nearly killed Pablo Garcia illustrates the double-edged sword of healthcare IT. It also demonstrates that — even in errors that primarily relate to computerized systems and human-technology interfaces — the solutions need to be broadly based, addressing several different layers of Swiss cheese. Finally, it shows us how hard it is to fix even seemingly easy problems in healthcare when they relate to technology.

The RCA in the Pablo Garcia case did identify many problems with the system, and over the subsequent months, UCSF Medical Center set out to address them. One thing we did not do was fire any of the involved clinicians. The review showed them to be solid employees who were acting on the information they had available. In the end, the test we use in such situations is this: Could we imagine another competent individual making the same mistake under the same conditions? When we analyzed the actions of the doctor, the pharmacist, and even the nurse, we felt that the answer was “yes.” Each was counseled, but all were allowed to return to work. To their credit, all three allowed me to interview them for the book, in the hopes that their recollections and insights might help prevent a similar error in the future.

We began to scrutinize some of the system problems that we felt were responsible for the error. We re-examined the policy that mandated that Pablo Garcia's dose be written in milligrams per kilogram, instead of the "one double strength twice daily" that his physician knew he had been on for years. Since that fix involved only a revision of policy, it was made quickly — clinicians are no longer required to use weight-based dosing when they know the correct dose in milligrams.

Dealing with the problem of too many alerts proved harder, partly because it flies in the face of intuition.

At one of many discussions, someone said, “I think we need to build in just one more alert here.” I was aghast. “Don’t you see…” I fairly shouted. “The problem is that we have too many alerts. Adding another only makes it worse!”

To tackle this one, we formed a committee to review all of our alerts, pruning them one by one. This is painstaking work, the digital equivalent of weeding the lawn, and even after two years, we have succeeded in removing only about 30 percent of the alerts from the system. Making a bigger dent in the alert problem is going to require more sophisticated analytics that can signal, in real time, that this particular alert should not fire, because, in this particular situation, with this particular patient, it is overwhelmingly likely to be a false positive. We’re not there yet. Neither is Epic, the company that sold us our electronic health record system — or anyone else, for that matter. But the need to address this problem is pressing.
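To make that goal concrete, here is a purely hypothetical sketch of what such real-time, context-aware alert logic might look like. The data fields, thresholds, and decision rule are invented for illustration; they do not describe Epic's system, ours, or any vendor's.

```python
# A purely hypothetical sketch of "smarter" alert analytics. The fields, thresholds,
# and decision rule are invented for illustration; as noted above, neither our system
# nor any vendor's actually works this way today.

from dataclasses import dataclass

@dataclass
class AlertContext:
    alert_type: str            # e.g., "dose_range"
    override_rate: float       # fraction of times clinicians historically overrode this alert
    patient_risk_score: float  # 0..1 estimate of harm if the alert is wrong for this patient

def should_fire(ctx: AlertContext) -> bool:
    """Suppress alerts that history and patient context suggest are almost certainly noise."""
    probable_false_positive = ctx.override_rate > 0.95 and ctx.patient_risk_score < 0.10
    return not probable_false_positive

# An alert overridden 98% of the time for a low-risk patient would be suppressed;
# the same alert type for a higher-risk patient would still fire.
print(should_fire(AlertContext("dose_range", override_rate=0.98, patient_risk_score=0.05)))  # False
print(should_fire(AlertContext("dose_range", override_rate=0.60, patient_risk_score=0.40)))  # True
```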

We’ve changed other things, too. Our computerized prescribing system will now block any effort to prescribe more than nine pills in a single dose. As with so many of the solutions, creating “hard stops” like this seems like a no-brainer, yet proved to be surprisingly complex. What if a patient is on 20 mg of morphine and the pharmacy is out of 10 mg pills, with only 2 mg pills in stock? The 9-pill maximum solution — the only fix that was technically feasible within Epic — would block the computer from dispensing ten 2-mg morphine tablets, perhaps forcing a patient to wait in pain while the physician or pharmacist jumped through bureaucratic hoops to override the block.

But not every problem can be fixed in-house. Some issues can only be fixed by outside software engineers — in our case, the ones sitting at Epic's massive headquarters in Verona, Wisconsin. Even then, the company makes such revisions available to its clients only in the course of periodic software updates, which come perhaps once or twice a year. Because most health IT systems are not cloud-based, they lack the ability to push out a rapid update, the way we're all used to on our smartphones and tablets.

There have been calls for a national clearinghouse for IT-related safety issues, and this seems like a good idea to me. Such a clearinghouse would at least offer a fighting chance that someone will spot a pattern of computer-related errors and make sure that users and vendors are aware of it. But such a central repository will need to have some teeth if it is to be effective.

The technology fixes are important. But preventing the next Septra overdose will take efforts that focus on problems far beyond the technology itself, on the other layers of Swiss cheese. For example, the pharmacist's error was due, at least in part, to the conditions in the satellite pharmacy, including the cramped space and frequent distractions. The satellite pharmacists now work in a better space, and there have been efforts to shield the pharmacist who is managing the alerts from having to answer the phone and the door.

We also needed to address another problem that is not limited to healthcare: overtrust in the technology. As Captain Sullenberger, the “Miracle on the Hudson” pilot, told me, aviation faces a similar need to balance trust in the machine and human instinct. The fact that today’s cockpit technology is so reliable means that pilots tend to defer to the computer. “But we need to be capable of independent critical thought,” Sully said. “We need to do reasonableness tests on whatever the situation is. You know, is that enough fuel for this flight? Does the airplane really weigh that much, or is it more or less? Are these takeoff speeds reasonable for this weight on this runway? Everything should make sense.”

The decision whether to question an unusual order in the computer is not simply about trust in the machines. It’s also about the culture of the organization.

Safe organizations relentlessly promote a “stop the line” culture, in which every employee knows that she must speak up — not only when she’s sure that something is wrong, but also when she’s not sure it’s right. Organizations that create such a culture do so by focusing on it relentlessly and seeing it as a central job of leaders. No one should ever have to worry about looking dumb for speaking up, whether she’s questioning a directive from a senior surgeon or an order in the computer.

How will an organization know when it has created such a culture? My test involves the following scenario: A young nurse, not unlike Brooke Levitt, sees a medication order that makes her uncomfortable, but she can't quite pinpoint the reason. She feels the pressure to, as the Nike ad goes, "just do it," but she trusts her instinct and chooses to stop the line, despite the computer's "You're 30 Minutes Late" flag, her own worries about "bothering" her supervisor, and the prospect of having to wake an on-call doctor. And here's the rub: the medication order was actually correct.

The measure of a safe organization is not whether a person who makes a great catch gets a thank-you note from the CEO. Rather, it’s whether the person who decides to stop the line still receives that note . . . when there wasn’t an error.

Unless the organization is fully supportive of that person, it will never be completely safe, no matter how good its technology.

The importance of speaking up extends beyond the recognition of individual errors to more general complaints about the design of the software. If front-line clinicians are ostracized, marginalized or dismissed as Luddites when they speak up about technology-related hazards, progress will remain sluggish. Similarly, if hospitals remain quiet about cases such as the Septra overdose, we are doomed to keep repeating the same mistakes. As you might guess, silence is the way such errors are usually handled, for all sorts of reasons: fear of lawsuits, worry about reputation, plain old shame.

After hearing about this case, I asked the senior leaders at UCSF Medical Center for permission to write about it, and to approach the involved clinicians as well as Pablo Garcia and his mother. Quite understandably, many people were reluctant at first to air the case. Sure, let’s discuss it in our own meetings, maybe even present it at a grand rounds or two. But going public — well, that would just be inviting trouble, from regulators, lawyers, the software vendor, the University’s regents — to say nothing of the impact on our reputation.

On December 11, 2013, we were discussing the case at a safety meeting at UCSF. I presented some of my recommendations, as did many of the hospital’s leaders. I had made my request to use the case in my book, but I hadn’t yet heard back, and assumed it was making its way through the various layers of the organization. As it happened, the final arbiter — medical center CEO Mark Laret — was sitting across the table from me.

Laret's job is exquisitely tough, and politically charged: to ensure that the 8,000 employees of the massive health system deliver safe, high-quality, satisfying care during nearly 1 million patient encounters each year — all while dealing effectively with unions, donors, and newspaper reporters, and managing the odd Ebola outbreak or scandal that inevitably pops up from time to time. Running the $2 billion operation is a daily tightrope act, and Laret is superb at it.

The average tenure of a hospital CEO is just a few years, but Laret has been in his position for 15, and that takes a level of political acuity — and risk aversion — that made me worry that he would say no to my request. As I was thinking all of this, my iPhone buzzed. It was an email from Laret. I looked up at him, and ever so briefly we made eye contact. Then I looked at my phone, and read his note: "I agree that this really needs to be published."

This is excerpted from The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, by Robert Wachter. McGraw-Hill, 2015. You can buy the book here.

Illustrated by Lisk Feng
