Breaking up the Perfect Game
What if we were to evaluate physicians like we do baseball hitters? By their batting average? Under this construct, if we consider a batter who hits .333 (that is, gets a hit one out of every three times at bat) a success, might we consider a surgeon who performs the correct operation one out of three times to also be successful? Of course not, you say. And so does Dr. Brian Goldman (a Canadian ER doc), who posed this very question at the front end of a 2011 TEDx talk. (The link is below.)
Dr. Goldman, using personal examples, eloquently argues that even though physicians are trained to bat 1.000, this is in actuality impossible, and that attempting to achieve a 1.000 batting average may be as destructive as it is unachievable. Physicians are human. Humans make mistakes. Medical mistakes will be made. But there are ways to mitigate them, and modern medicine is beginning to realize multi-faceted means of doing this: using tools such as checklists and decision support via the electronic health record, and nurturing a culture of psychological safety that allows all providers (not just doctors) to help catch errors before they occur.
In this respect, perhaps a different baseball analogy is more instructive. What if we thought of medical mistakes not in terms of an individual physician hitting 1.000, but in terms of a team trying to prevent the perfect storm of events (let's call this a perfect game) that allows a medical error to come to fruition? In baseball, a perfect game is one in which a pitcher retires every batter he faces, 27 up and 27 down, with no runs, hits, or errors. And while this past season witnessed three such performances (Philip Humber, Matt Cain, and Felix Hernandez), they are, in sum, quite rare. Since 1900, in fact, there have been only 21 perfect games in all of major league baseball. If you consider that, in the modern era, there are some 2,430 major league games each year (and thus 4,860 chances at a perfect game, one per starting pitcher), this makes the perfect game historically far rarer than other uncommon events, like a plane crash or a total eclipse of the sun.
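Just how rare can be sketched with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not precise baseball history: 21 perfect games since 1900 (from the text), roughly 113 seasons through 2012, and all seasons treated as modern-length with two starting pitchers per game.

```python
# Back-of-the-envelope rarity of a perfect game.
# Assumptions (illustrative only): 21 perfect games since 1900,
# ~113 seasons (1900-2012), every season treated as modern-length.

perfect_games = 21           # since 1900, per the text
seasons = 113                # 1900 through 2012, inclusive
games_per_season = 2430      # 30 teams x 162 games / 2 (modern era)

# Two starting pitchers per game, so two chances at perfection.
# Early seasons were shorter, so this overcounts opportunities and
# makes the estimate conservative.
opportunities = seasons * games_per_season * 2
rate = perfect_games / opportunities

print(f"roughly 1 perfect game per {opportunities // perfect_games:,} starts")
print(f"probability per start: about {rate:.6f}")
```

On these assumptions a starting pitcher's chance at perfection is on the order of a few in a hundred thousand, which is the point of the analogy: a "perfect game" of compounding misses should be vanishingly rare.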
If we take the prevention of a perfect game as the model for where we would like to get with medical errors, we can start to think of team- and system-based means of prevention. To illustrate, let's take the hypothetical case of a patient presenting to an Emergency Department (ED) with an acute myocardial infarction (heart attack). In this analogy, it takes only one person, or one system out of many, to safely secure the diagnosis and break up the perfect game. So, hitting leadoff is the ED triage nurse, who greets a woman with severe right-arm pain. The nurse is fooled, and whiffs on the diagnosis, sending the patient to the back of the department with an "arm" complaint rather than as a "rule out cardiac." The rooming nurse, the tech, and the medical history review section of the electronic health record also make outs; none of them identifies this woman as high risk for a heart attack. And so do the ER doc, the cardiac monitor, and the risk profile algorithm embedded in the electronic record. And so it goes, the perfect "game" still intact (remember, in this analogy we are trying to prevent a perfect game), until, on her second at bat, the primary nurse asks again about the patient's arm pain and learns that the patient's sister had very similar pain when she was having a heart attack. Well before events have congealed into a perfect storm of flukes and misses, this information is passed on to the ER doc and the diagnosis is safely secured. And then let the therapeutic rally begin.
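The redundancy at work in this story can be captured with a toy probability model. The miss rates below are invented purely for illustration (they are not clinical data), and the checks are assumed independent, which real systems only approximate: even if each individual check misses fairly often, the chance that every check misses, the "perfect game," shrinks multiplicatively.

```python
# Toy model of system redundancy: an error reaches the patient only if
# EVERY independent check misses it. Miss probabilities are invented
# for illustration and assumed independent; they are not clinical data.

from math import prod

miss_probability = {
    "triage nurse": 0.30,
    "rooming nurse": 0.40,
    "tech": 0.50,
    "EHR history review": 0.40,
    "ER doc": 0.20,
    "cardiac monitor": 0.30,
    "risk algorithm": 0.25,
    "second at bat (follow-up question)": 0.30,
}

# The "perfect game" stays intact only if every check misses.
p_all_miss = prod(miss_probability.values())

print(f"chance every check misses: {p_all_miss:.5f}")
print(f"chance someone breaks up the perfect game: {1 - p_all_miss:.5f}")
```

Under these made-up numbers, no single check is better than a coin flip to a four-in-five catch, yet the lineup as a whole lets the error through only about once in ten thousand presentations. That multiplicative shrinkage is the argument for repetition and redundancy.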
Thus we see how, in a situation in which multiple providers struck out, the system, through repetition and redundancy, was still able to prevent the perfect game. Perhaps, as such systems and risk profilers get refined, we can begin to think about moving beyond the perfect game model, toward a medical system that (via the sum of all its parts) bats 1.000. Wouldn't the lawyers love that?