The Problem with QALYs and the Even Bigger Problem without Them

Euconoclastic blog series

Stephen Casper, thestephencasper@gmail.com

Let’s say you’re in a remote village in which there is an outbreak of a disease that is equally dangerous to all who get it. You have a cooler with 100 doses of a vaccine and two options for giving them away.

  1. Giving the vaccines to 100 old people.
  2. Giving the vaccines to 100 children.

Ceteris paribus (a great phrase to Google), what would you do? This isn’t a trick question. I think the answer is obvious, but let’s not get ahead of ourselves with this example (and potentially subject our conclusions to hasty intuition or bias). Instead, let’s back up a little.

Almost everyone would agree, at least to some degree, with the utilitarian idea that moral good pertains to the wellbeing of conscious beings: that it’s good when happiness and pleasure are felt and bad when pain and suffering are felt. If we define good this way and aim to achieve as much of it as we can, we face the challenge of optimizing for something difficult to measure. As of now, there is sadly no SI unit of happiness and no way to look into someone’s brain and quantify how morally desirable their qualia are. Happiness is difficult to estimate, and it’s harder still to apply those estimates to policy or aid initiatives. Still, when deciding whether to do action A or action B, we can do better than choosing randomly by using metrics that serve as proxies for happiness. The number of people helped wouldn’t be a terrible correlate, but giving two people a meal definitely isn’t as good as giving one person an entire education. The number of lives saved would be better still, but giving two old people each another painful year to live would certainly not be better than giving a single child another 70 years of happy, healthy life.

Allow me to introduce the Quality-Adjusted Life Year (QALY): another terrible proxy for happiness, yet just about the best option we have. The QALY is attractive to a utilitarian: it says that the goodness of a life increases with both its length and its quality, and using QALYs to compare the value of different interventions is doable (though not extremely precise). QALYs generally tell us to prioritize cost-effective actions that target children, health, and poverty when we’re investing in development. Sounds pretty good, right? QALYs are far from a perfect metric, but how troublesome could the idea be that a happy and long life is better than a miserable and short one?
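To make the arithmetic concrete, here is a minimal sketch of how a QALY comparison works. The standard definition is life-years gained multiplied by a quality weight between 0 (dead) and 1 (full health); everything else below, including the interventions, quality weights, and costs, is a hypothetical illustration, not real data.

```python
# Minimal QALY arithmetic sketch. All numbers below are hypothetical,
# chosen only to illustrate the comparison, not drawn from real programs.

def qalys_gained(extra_years: float, quality_weight: float) -> float:
    """QALYs = life-years gained, weighted by quality (0 = dead, 1 = full health)."""
    return extra_years * quality_weight

# Hypothetical option A: extends an elderly patient's life by 2 years at 0.5 quality.
a = qalys_gained(extra_years=2, quality_weight=0.5)   # 1.0 QALY

# Hypothetical option B: extends a child's life by 70 years at 0.9 quality.
b = qalys_gained(extra_years=70, quality_weight=0.9)  # about 63 QALYs

# Cost-effectiveness compares QALYs per unit of money spent.
cost_a, cost_b = 1_000.0, 5_000.0  # hypothetical program costs
print(b / cost_b > a / cost_a)     # B buys far more QALYs per dollar here
```

Even with option B costing five times as much in this toy example, it dominates on QALYs per dollar, which is the sense in which the metric pushes toward cost-effective interventions for the young.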

Very.

Our problem is that if what matters isn’t just life, but also long life and happy life, not everyone’s life is equally valuable, or at least not equally worth saving. Is there a slippery slope from using QALYs to immoral discrimination? Are alarm bells going off in your head? There probably should be. But bear with me as we look at whether or not this criticism is tenable, and then at what implications this should have for our priorities.

Nobody, whether they are prepared to accept QALYs as a metric or not, should be comfortable with the idea of judging some lives as being worth more than others. If you’re comfortable with this, I’m uncomfortable with you. Because history is filled with dark events in which one group of people viewed another as disposable, I think that erring on the side of equality is a good choice. But it would be a mistake to formalize this. Let’s say we adopt the idea that we should value all people’s lives perfectly equally. This leads to a set of conclusions that utilitarian reasoning (and even common intuition) would find very wrong.

  1. Consider the example we began with. Seeing all lives as having equal value would cause us to care exactly as much about an old person as about a child. If we had 100 vaccines, and we only had those two options, we’d have to see both groups as equal in priority, shrug, and flip a coin.
  2. The perfect equality principle would cause us to value lives that are net-negative as much as net-positive ones. Note that by net-negative, I don’t mean shitty but with redeeming hope. I mean a life that is objectively not worth living, one as bad and hopeless as you care to imagine: constant torture and nothing else from birth to death. Such a life would be worse than not existing and should not be positively valued.
  3. Our complete equality principle would tell us that people dying or being harmed is not bad. If all that makes life worth living is being alive, then it would not be morally undesirable to die or suffer. As long as the “had life” box were checked off, the moral imperative would be satisfied. Would you be disappointed if tomorrow you suddenly developed a debilitating condition or died? Unless you would feel neutral or positive about that, there is no way to say that a short life or a life with more suffering is as desirable as a long life or a life with less suffering.

Feel free to hold onto those sentiments; we’ll need them again soon. But the notion that we should value all lives equally should be out the window: there has to be more to life’s value than just life itself. I suspect some readers may agree with the three points above but still think that it would be wrong to let utilitarianism marginalize old, disabled, or miserable people. I won’t take a strong position and say that this is automatically illogical, but I don’t see how someone can coherently hold this position while still saying that there’s more to life’s value than life itself. If one decries the seeming inhumanity of saying that some lives are less important than others, it should be fair to point out the even greater inhumanity of the formalized principle behind that intuition: that we shouldn’t care about anyone’s life expectancy or actual welfare, just that they exist. But let me walk this back a little: I really do agree with using equality and compassion as heuristics, just with an asterisk.

Let’s consider again the fact that there are, prima facie (another great phrase to Google), ingredients here for a slippery slope: if we say that some people’s lives are not as valuable as others, what’s to stop us from horrifying forms of inequality, discrimination, or eugenics? Where does it end? I think that if we properly apply utilitarianism alongside handy heuristics for equality and compassion, it ends long before then. Remember not to object to utilitarianism on its own grounds: it mandates that we do as much good as possible, and if we think that could ever involve something like genocide, social castes, or large social disparities, we probably need to stop getting lost in inane hypotheticals. In what kind of world would removing or neglecting some group of people result in a true net positive? Can you really imagine this? (Cheap utopian babble isn’t allowed.) General equality is a really good principle to protect us from radicalism and the co-option of utilitarian concepts for the pursuit of tribal ends. I conclude that persecution or neglect of a certain class of people can almost certainly not be justified by utilitarianism in any world with a resemblance to ours.

But about that asterisk I mentioned: heuristics don’t always work, and I’d like to bring up an uncomfortable hypothetical example, one that I think shouldn’t be too taboo to consider. Let’s say that you were on a boat with two drowning people near you and a single life preserver to throw. How would you choose whom to throw it to? Would you choose randomly, or would you try to see who stands to benefit more? Would you throw it to an old person over a young person? A sick person over a healthy person? A disabled person over an able person?

This is a terrible thing to consider, and I have doubts about the pragmatism of even asking this. But this is a question that I don’t think we should ignore. We need to soberly (and sometimes somberly) consider different options when neither seems very good. And here is my core point: when we have more need in the world than we have the ability to satisfy, we have a moral obligation toward effectiveness, and that means intervening where our actions can do the most good. If person A has a greater quality of life or life expectancy than person B, ceteris paribus, we have an obligation to throw person A the preserver.

What’s the takeaway? Effectiveness matters. Our utilitarian imperative tells us to take a cause-neutral approach in deciding where to give our time and money. That means using QALYs, crunching the numbers, and keeping track of our counterfactuals. We’ll find that this generally leads us toward supporting cost-effective efforts focused on children, health, poverty alleviation, and development.

The utilitarian principle isn’t one of callous discrimination, but of pragmatism and equal consideration of everyone’s happiness. When it comes to where to throw our metaphorical life preserver, we have to accept that there is not always a great answer. We can’t afford to be naïve or to ignore taboo elephants in the room. That means asking ourselves difficult questions and making difficult choices, but doing the right thing isn’t always easy.