An Introduction to Decision Modeling

Ian David Moss
Published in The Startup · Jun 5, 2019 · 10 min read

Despite their importance, we barely pay attention to most of the decisions we make. Fortunately, there’s a better way.

Photo by Franki Chamaki on Unsplash

Decision-making is life. Over time, our decisions carve an identity for ourselves and our organizations, and it is our decisions, more than anything else, that determine how we are remembered after we’re gone. Despite their importance, though, we barely pay attention to most of the decisions we make. Biology has programmed in us a powerful instinct to make decisions using our intuitions rather than our conscious selves whenever possible. There are good reasons for this; if we had to think about every little decision we made, we’d never get anything done. But for all its advantages, the worst thing about intuition is that it’s almost impossible for us to ignore — even when it’s clearly leading us astray.

Scientists have demonstrated that intuition is best suited to situations that we’ve seen hundreds or even thousands of times before — contexts where we’ve had a lot of practice and clear and accurate feedback on how well our previous decisions worked out. That’s great for decisions like how much to press the brake pedal when you see a stop sign coming up. The most important decisions in our lives, though, almost never fit this pattern. Their importance and high stakes almost by definition make them rare and unfamiliar, which is why many of us feel flummoxed in situations like these. Generally, we’ll respond in one of two ways. The more cautious among us are acutely aware of the stakes. Our anxiety levels go up, we turn to friends and colleagues for advice, and in organizational contexts, we schedule meeting after meeting in hopes of resolving the dilemma (or better yet, getting someone else to resolve it for us). Others of us confidently choose a path forward, but with a false certainty rooted in the fantasy that we understand our world better than we actually do. We avoid analysis paralysis, but greatly increase the chance of leading ourselves and others down the road to disaster.

Neither of these responses is much help in making better decisions, because neither addresses the core issue. Complex decisions require us to compare the likelihood and desirability of many possible futures on multiple, disparate, and often conflicting criteria. That's something our intuitions just aren't naturally equipped to do. So long as our decision-making strategies don't address this core problem, they are doomed to fail us more often than we'd like.

Thankfully, there is a better way. The secret to resolving complex, risky dilemmas with justified ease and confidence is to model your decisions explicitly. Our intuitions aren’t able to do this on their own, but fortunately, modern computing technology is more than up to the task. That’s why I like to think of decision modeling as a kind of technology-enhanced decision-making. Unlike with full-on artificial intelligence, we are not asking computers to make our decisions for us, which would require us to trust an algorithm we may not fully understand. Rather, we are leveraging the power of computers to do what we humans can’t do well, freeing our minds to concentrate on what we’re actually good at. At its best, modeling our decisions can help us make the very human exercise of decision-making not only more likely to lead to the outcomes we want, but more instinctively satisfying as well.

Vax to the Max: A Grantmaking Case Study

So how does it work? Let’s say you run a grant program and you’re deciding whether or not to approve a grant proposal. To keep things simple for this example (don’t worry, I’ll get to more complicated applications later), we’ll assume that there’s only one goal of your program at this particular moment: to deliver life-saving vaccines. Most of the organizations currently in your grant portfolio focus on direct service delivery, doing good work but at modest scale. But the prospective applicant in front of you — let’s call them Vax to the Max — is proposing an intriguing new strategy, one that offers tremendous upside: advocacy. By getting the government involved to provide appropriate incentives and funding, the theory goes, the project could usher in a new wave of vaccinations that no current grantee is able to promise under the existing system.

Vax to the Max’s grant proposal claims that this new strategy will result in 50,000 new vaccinations. Should you take that number at face value? The answer is probably not. For one thing, of course, the applicant has a strong incentive to provide you with an optimistic picture of its projected impact. But even assuming that estimate isn’t biased at all, there’s another problem, which is that it’s just one number. To really do modeling right, we need to think in terms of the probabilities of different outcomes. Sure, there could be 50,000 vaccinations…but one could easily imagine 25,000 or 40,000 or maybe even 60,000 instead. It’s impossible to know for sure in advance, so we have no choice but to do some guesswork.

Specifically, to get a handle on all these possibilities, we want to estimate a confidence interval for the number of new vaccinations. For this example, we'll use a 90% confidence interval — i.e., you think there's a 95% chance the true number of new vaccinations will be above the interval's lower bound and a 95% chance it will be below its upper bound. You can (and should) train yourself to get good at these kinds of estimates via a fun mental exercise called calibrated probability assessment, or calibration for short. But for a first approximation, try asking yourself this question: what is the biggest (or smallest) number I could imagine that's still technically possible?

Let’s say you’ve done that exercise and determined that you’re 90% sure the number of new vaccinations made possible by the policy changes, if enacted, is between 100 and 60,000. That’s a huge range! But this is the sort of thing that’s genuinely really hard to predict, so we want to be careful not to be overconfident.

You’ll notice in the screenshot that there’s an image of something that looks like a lopsided bell curve on the bottom right. That’s because the software I’m using (Guesstimate) calculates a Monte Carlo simulation for this estimate right there in the model. Monte Carlo simulation is a statistical technique that randomly generates thousands of scenarios from the information you feed the model. Originally developed by nuclear physicists, it’s now used to aid decision-making in everything from politics to sports and beyond. For our purposes, you can think of a Monte Carlo simulation as a sampling of the possible future lives that might unfold for you and your organization as a result of your decision. The number in large font (16K) is the average of the values across all of the simulations.
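The mechanics are easy to reproduce yourself. Here's a minimal sketch of the same kind of simulation in Python, assuming (as tools like Guesstimate typically do for positive-only ranges) that the 90% interval of 100 to 60,000 maps onto a lognormal distribution; that distribution choice is my assumption, not something stated in the model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Convert the 90% confidence interval (100 to 60,000 new vaccinations)
# into lognormal parameters. The 5th and 95th percentiles sit 1.645
# standard deviations from the mean on the log scale.
low, high = 100, 60_000
z90 = 1.645
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * z90)

# Draw thousands of simulated futures and average them.
samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
print(f"Average new vaccinations: {samples.mean():,.0f}")
```

Despite the lopsided shape of the distribution, the simulated average comes out in the same ballpark as the 16K figure in the screenshot.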

Woohoo, 16,000 new vaccinations! But hold up — there are some other things we need to take into account here. For one thing, you've never worked with this organization before, and let's just say you have less than complete confidence that its leaders can follow through on their commitments. Perhaps more importantly, this is a complex space you're all working in. Even if Vax to the Max does a brilliant job executing on its strategy, there's no guarantee that it will actually result in any policy changes. And if the changes are enacted, it might not be because of anything Vax to the Max did — perhaps another organization's work or broader cultural shifts will have been more decisive factors.

Let’s put all of this into the model. To capture the contribution Vax to the Max would make to the advocacy effort, we can estimate the likelihood of the new policies being enacted with a faithful execution of the proposed strategy and without that execution. Thus, we are defining the impact of Vax to the Max’s work as the increase in the odds of those policies coming to fruition if it follows through on its commitments — in this case, a doubling of those odds from 5% to 10%. We can further estimate the probability that Vax to the Max will follow through on its strategy as described. (We’ll assume for now that they’ll only attempt the project if you fund their proposal in full.)

Putting all of this together results in an estimate of 470 new vaccinations, on average, as a direct result of funding the proposal. That’s a lot less than 16,000, but at least it’s more than zero!
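The arithmetic behind a figure like that is simple enough to sketch directly. One caveat: the follow-through probability isn't stated above, so the 60% below is my own illustrative assumption, chosen only because it lands near the model's result.

```python
# Back-of-the-envelope version of the Vax to the Max estimate.
mean_vaccinations = 16_000   # average from the Monte Carlo simulation
p_policy_with = 0.10         # odds of policy change with faithful execution
p_policy_without = 0.05      # odds of policy change without it
p_follow_through = 0.60      # ASSUMPTION: chance the grantee executes as planned

# Expected impact = vaccinations x lift in policy odds x chance of execution.
expected_impact = (mean_vaccinations
                   * (p_policy_with - p_policy_without)
                   * p_follow_through)
print(f"Expected new vaccinations from the grant: {expected_impact:,.0f}")  # 480
```

Multiplying the headline number by each discount in turn is what shrinks 16,000 down to a few hundred.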

We’re not quite done, though, because if you don’t fund this proposal, it’s not like the money you would have spent on it goes away. You’ll still have it available to you and you could do something else with it instead. So what would that be?

Here’s where it’s a really good idea to have a sense of what your “default” option is. In this case, perhaps that means offering another round of funding to one of your current grantees that’s up for renewal. Let’s call these folks Maxine’s Vaccines. They’re not one of your star performers — you wouldn’t be thinking about dropping them from the portfolio if they were — but they do solid, reliable work that contributes in an incremental way to the goals of your program. You are one of their biggest funders, so failing to renew the grant could definitely force the organization to cut back its activities, though it’s possible its leaders could find a way to replace the funding.

Okay, so we need a variable for the vaccinations that Maxine's Vaccines would be able to deliver with the help of a renewal grant. We should also estimate the chance that they might be able to persuade another donor to fill the gap if the grant is not renewed. Finally, similar to the last example, we should also estimate what would happen if Maxine's Vaccines does not get the grant and cannot fill the gap. Would they shut down the organization or the vaccination program entirely? Maybe not. Lots of organizations, when faced with financial difficulties, will choose to scale down rather than close up shop entirely, especially when there are still committed sources of funding. So that uncertainty should be reflected in our estimates as well.
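Put concretely, the counterfactual logic might look like this, where every number is an illustrative assumption rather than a figure from the actual model:

```python
# Sketch of the Maxine's Vaccines counterfactual. All inputs are
# ASSUMPTIONS for illustration only.
vax_with_grant = 1_500   # vaccinations delivered if the grant is renewed
p_fill_gap = 0.25        # chance another donor replaces the funding
vax_scaled_down = 500    # vaccinations if they scale down instead

# Without your grant, either the gap gets filled (full program runs anyway)
# or the organization scales down.
vax_without_grant = (p_fill_gap * vax_with_grant
                     + (1 - p_fill_gap) * vax_scaled_down)

# Your marginal impact is the difference between the two worlds.
marginal_impact = vax_with_grant - vax_without_grant
print(f"Expected vaccinations attributable to renewal: {marginal_impact:,.0f}")
```

Note that what counts isn't the total Maxine's Vaccines delivers, but only the slice that wouldn't have happened without your money.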

Which grant opportunity is likely to result in the most vaccinations? It’s not immediately obvious, and if you were trying to make this call intuitively it would have to involve a lot of guesswork. Fortunately, this is the sort of situation where modeling the problem can make things a lot easier.

With the information we’ve put into the model so far, we now have an estimate of the number of new vaccinations from the two options to compare side by side — the modeling moment of truth. Maybe it’s just because I’m a huge nerd, but for me this is the most magical part of building a decision model. There’s a visceral, “that’s so fucking cool!” excitement in seeing the big reveal, because unlike with many research and analysis projects, this technique actually gives you a direct and straightforward answer to the question foremost on a decision-maker’s mind: what should I do next?

As it turns out, with the assumptions we’ve given it, the model thinks your next move should be to call up Maxine’s Vaccines to tell them you’re renewing their grant. Vax to the Max has a compelling story to offer, but the cumulative impact of the question marks means that funding them will most likely mean fewer people will be vaccinated overall.
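You can replicate that kind of head-to-head comparison yourself. In the sketch below, all inputs are my own assumptions chosen to roughly mirror the estimates above, and the distributions are simplified stand-ins for what a real model would contain — the point is just to see how the two options stack up across thousands of simulated futures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Vax to the Max: lognormal vaccinations x lift in policy odds x follow-through.
# (Parameters match the earlier 100-to-60,000 interval; the 60% follow-through
# rate is an assumption.)
vax_samples = rng.lognormal(mean=7.80, sigma=1.94, size=n)
follow_through = rng.random(n) < 0.60
policy_lift = 0.10 - 0.05
vtm_impact = vax_samples * policy_lift * follow_through

# Maxine's Vaccines: steadier marginal impact with modest spread (assumed).
maxine_impact = rng.normal(loc=750, scale=200, size=n)

maxine_win_share = np.mean(maxine_impact > vtm_impact)
print(f"Vax to the Max mean:    {vtm_impact.mean():,.0f}")
print(f"Maxine's Vaccines mean: {maxine_impact.mean():,.0f}")
print(f"Share of simulations where Maxine wins: {maxine_win_share:.0%}")
```

Under these assumptions, the reliable option wins in most simulated futures, even though the long-shot option occasionally delivers a far bigger payoff — which is exactly the kind of trade-off that's hard to weigh by gut feel alone.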

Here’s the full, live version of the model if you’d like to play with it further. Note that the model is re-run with new simulations each time you open it, so the numbers may be slightly different from the screenshots above.

Now, is this the end of the story? It depends. If you feel comfortable making the decision with the information you have available, that’s fine. Just breaking down the situation concretely like this is already a big improvement over trying to eyeball your way through it. But the real potential of this method lies in the fact that, if the stakes are high enough, you can use the model to help you come up with targeted research strategies to try to narrow your range of uncertainty for some of these variables so that you can move forward even more confidently. We’ll talk about how to do that in another installment.

So there you have it! I should note that I intentionally kept this decision model pretty basic for the sake of clarity, so if you noticed things about it that seem incomplete or not totally true-to-life, that’s probably why. For instance, we could have contemplated a multi-year time span, optimized for more than one goal, worked with objectives that are harder to quantify and measure, looked at different types of probability distributions, and more. I’ll try to cover some of these ideas in future articles, but in general a good rule of thumb is that if your model isn’t sophisticated enough to do the job, there’s probably a lot you can do to improve it that you may not have thought about. It may well be the case that you’ll get more mileage from keeping at it than just giving up and making the decision the old way.

In the meantime, hopefully this gives you a taste of what's possible with this kind of methodology, and why it can be so helpful in situations where our intuitions aren't giving us a clear answer. For complex dilemmas, decision modeling allows for much more accurate estimates of how all the different factors are likely to interact with one another, enabling you to transcend the limitations of your intuition. It also reminds us that decision-making is an exercise in navigating uncertainty, and while we'll never be able to rid ourselves of that uncertainty altogether, there are tools available to smooth the journey.

Ian David Moss works with foundations, donors, investors, and government agencies to increase their impact. Sign up for his newsletter, To Be Decided, here.