# The Uncertainty Tax: the financial cost of not knowing things

Let’s begin with the well-known phrase:

> Knowledge is power

As a youngster who liked learning many things, I once thought that this boded well for me (more on how that turned out soon). We also have a similar phrase:

> Money is power

The goal of this article is to put these together and ask: is knowledge money? That is, is there any quantitative relationship between knowledge (information) and money (or other resources)? And, conversely, what is the cost associated with a lack of knowledge?

Well, it kind of seems reasonable that knowledge might lead to money. After all, anyone who is smart, inquisitive and knowledgeable will also become rich and powerful, right? Well, perhaps not, at least in my observations and personal experiences… but there should be some causal relationship between good decision making based on available information, and good outcomes.

To be more specific about where we’re heading, we are going to see how we can use mathematics to determine the extent to which lack of knowledge might cost us money in certain situations, and how this might be useful in practice. I will further argue that when you don’t know what you don’t know, this ignorance leads to non-ideal decisions and it costs you real money.

### Why am I writing about this?

I grew up with a fascination with physics, maths and computers, and to be honest, early on I didn’t pay a lot of heed to human constructs like “money”. Pfft. In a somewhat contradictory fashion, I was always interested in the way humans interact with systems and physical resources, and thus economics in general, though I never studied it formally.

Over the years as a physicist, one recurring theme always struck me as far more powerful than it first seems — information theory. For example, much about macroscopic physics can be intuited from arguments around entropy and information without knowing the microscopic details. Information theory tells us how much data can be physically sent down a channel, and how we can encode this information to protect it from errors or eavesdropping. The quantum uncertainty principle is basically the analog version of the Nyquist theorem, and quantum information theory explores the boundaries of what is physically possible in our universe. As mere mortal actors in the physical world, we are always exposed to uncertainty and have to make decisions using limited understanding, and we learn new things through e.g. a Bayesian approach. While information theory is validly said to be a branch of mathematics or computer science, it is also so clearly integral to how we interact with our physical world that I would certainly call it “physics”.

These days, I work in private industry, distilling information from large-scale geospatial data and delivering information to infrastructure owners which they can act (or not) upon. Our clients essentially use this information to save money — to assess risk, remedy dangerous situations, improve quality-of-service, avoid unnecessary maintenance, and so on. Once again: we provide information, and they save money.

Naturally, as a mathematically inclined person, I’m led to wonder what kinds of general mathematical statements can be made about the relationship between information and financial outcomes. I’ve also decided to write this as a blog post (essentially my first) — not because I’m an expert, or have done any scholarly investigation of the state of academic knowledge on this topic, or even have the slightest business experience dealing with risk and uncertainty. But because, in my naive observation, many decisions, big and small, in business and otherwise, appear to be taken in the face of uncertainty based on “gut feeling”, “instinct” and “business sense”, and I’d like to have a conversation about that. And because by saying something, you often learn — through the experience of explaining it, and through the feedback of your audience. I’m particularly interested in learning whether any of the simple back-of-the-envelope techniques below are actually applied in practice in the real world (business and politics), and what the state-of-the-art here is on the theoretical (and computational) front.

### The price of not knowing

I will now discuss a very simplistic situation — a toy model, if you will — where we can clearly see how not knowing something will, on average, cost us an amount of money which we can calculate.

#### Choosing between a “known thing” and an “unknown thing”

Let’s say you have the choice between A and B. Perhaps A and B are two models of car that you might purchase, or the two latest flagship smartphones, or two suppliers your business is looking at signing a contract with, but it doesn’t really matter what.

For whatever reason, you have access to a lot of information about A and less information about B, while both appear to be attractive options. Perhaps A is something you’ve used before, but not B. Perhaps A has been on the market a while and is well understood and reviewed, but B is so new that the internet kiddies haven’t blogged about it yet and it doesn’t appear in Consumer Reports.

In either case, you’re imagining one of two possible futures: in the first situation you pay some money for A, and get some return on investment (ROI), and in the second you pay some money for B and you get some other ROI. You aren’t considering purchasing both, or neither. (Here, the ROI is a simple financial measure of how much utility or benefit you get out of the purchase; this item may either generate revenue or save on expenditure or provide some other benefit which can be captured financially).

Because A is well understood, we are very confident its ROI will be x. In our toy model, this will be a certainty. On the other hand, the ROI on B is unknown to the purchaser. Because this is my toy model, I’ll assert as fact that there is a 50% chance the ROI on B is x+y and a 50% chance that the ROI on B is x-y.

Thus, without any further knowledge about which possibility holds for B, we can expect that on average B will also return an ROI of x, just like A. Obviously B is a more risky choice, and there are valid reasons for taking on or avoiding risk, but for the simplicity of the toy model we’ll simply consider the average value, also called the expected value. For both A and B the expected ROI is x.

Now imagine some time passes and some more information is revealed. Perhaps we’ve already purchased one B and have studied its worth, and are considering whether to buy a second A or B. Perhaps the internet kiddies have released 1000 blogs, unboxings, and destruction tests, and Consumer Reports has done an in-depth article directly comparing A to B.

Sweet. Now we know the ROI of B, we can do the calculation again. There’s a 50% chance the reviews were positive, in which case the ROI was revealed to be x+y, and being a savvy business person we opt for B and receive an ROI of x+y. There’s also a 50% chance that Consumer Reports was not so keen, and we learn the ROI of B is only x-y. Of course, we use our extreme intelligence and decide to purchase A with a superior ROI of x.

On average, the expected ROI after learning the information is x+0.5y. We can now put a direct price on this one bit of information. Knowing more about B means that we can expect a greater return to the tune of 0.5y! Conversely, not knowing this information implicitly costs us 0.5y. This is what I refer to in the title of this blog as the “uncertainty tax” — literally, how much you pay the universe for your ignorance.
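The toy model above is simple enough to check in a few lines of code. Here is a minimal sketch, with x and y set to arbitrary illustrative numbers, showing that the informed decision beats the uninformed one by exactly 0.5y:

```python
# Toy model: the value of one bit of information about B's ROI.
# Illustrative numbers only: A returns x with certainty; B returns
# x+y or x-y with equal probability.
x, y = 100.0, 30.0

# Without information, A and B have the same expected ROI, so the best
# we can do is x regardless of which we pick.
expected_roi_uninformed = max(x, 0.5 * (x + y) + 0.5 * (x - y))  # = x

# With information, in each revealed scenario we pick the better option:
# B when its ROI turns out to be x+y, A when B's ROI is only x-y.
expected_roi_informed = 0.5 * max(x, x + y) + 0.5 * max(x, x - y)

value_of_information = expected_roi_informed - expected_roi_uninformed
print(value_of_information)  # 0.5 * y = 15.0
```

The same structure generalises: the value of information is always the expected ROI of the best choice made *after* the reveal, minus the expected ROI of the best choice made *before* it.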

#### Practical implications of the “uncertainty tax”

So, what does this mean? Possibly many things, but I would first highlight that we can profit (on average) from spending any amount of money up to 0.5y in order to study and learn about B before making our purchase decision. A consumer may simply decide to wait. A business could devote time of employees to research, or hire consultants, or else rent or trial a B, before making the decision. If 0.5y is a very large sum of money, it may even be worth creating a well-funded team of 100 people just to reduce the uncertainty! This effort and time will, of course, have to be offset against other constraints and realities of the business/individual, but given all of this it still may be best to take action to reduce the uncertainty.

However, the perverse situation that can (and does!) occur is when people don’t know what they don’t know. In being naive, in not doing the above analysis, in making a quick decision, and in choosing to “use your gut”, you limit your options. You lose the opportunity to address weaknesses in your knowledge. You might stumble into avoidable situations. And, as shown above, you can expect to lose real money!

Maybe the above situation seems familiar to some of you reading this. Perhaps you’ve been guilty of this — been hasty and lived to regret it. I’ve seen colleagues and friends despair at the “non-scientific” nature of decisions made by those above them. And I hear what you are all saying — “Isn’t being ignorant, naive or stupid really frickin’ expensive?” As a scientist, I can only imagine the difficulty of the situation our business leaders and governments face every day, where individuals have to make important and far-ranging decisions based on limited information. However, I do feel I can demand that leaders seek out appropriate information before making decisions that impact me. I also wonder whether we, as technical people, can help by providing the tools and techniques to do the job properly.

#### More realistic situations

Above I only covered a very simplistic toy model situation. In reality, one will face more complex situations involving:

• More than two choices (the decision surface may be a continuous, high dimensional manifold).
• Uncertainty involving more than just two realities of equal probability. In fact, we might not even have a very accurate model for the uncertainty.
• Situations that involve (approximate!) modelling of large, complex systems.
• More complex objectives than the expected ROI, taking into account things such as risk aversion.
• Known or unknown temporal variations in the potential opportunity, risk, ROI, etc, which might impact the optimal time to make decisions.
• Reacting to information gathered over time — perhaps we could buy a few B’s before deciding whether to make a bulk order of A’s or B’s. Perhaps when we’ve researched B enough, we would gain more by putting further effort into understanding A better instead. Perhaps we could sell our dud B at a small loss, and replace it with an A (hopefully before the rest of the market has learned the true value of B).

I won’t attempt to address any of these issues here, but even with all this complication the fact remains that greater information will, statistically speaking, lead to better outcomes.

#### Upper bound on the expected value of information

There is one more (relatively self-evident) fact worth mentioning. Given we have thought about what we are ignorant of, we can then imagine the best-case and worst-case scenarios.

This can lead us to an upper bound to the expected worth of the information. Independent of the details of all the possible situations and their likelihoods between the best and worst cases, we know that having more information can’t be worth any more to us than the difference in our positions in the best case and the worst case. This is basically an inequality on the value of information:

value of information ≤ ROI(best case) − ROI(worst case)

I can imagine that such a simple tool could be used in practice. One practical use is to know when gathering more information is not worth pursuing, e.g. when the difference between best and worst cases is negligible, or when we already know the cost of obtaining the information is greater than this difference. It’s a pruning algorithm that lets us bail early on the analysis, without feeling guilty.

(Of course, it wouldn’t be useful in all situations — if the worst case is my startup goes bankrupt, and the best case is my startup is a unicorn and I become the richest man on earth, we couldn’t learn much from this inequality).

#### Back-of-the-envelope calculations

Even in more complex situations, it may be possible to invoke some back-of-the-envelope calculations to get an order-of-magnitude estimate of the “uncertainty tax”. My thinking is that this rough estimate might be sufficient to make a decision between canvassing more information, and pulling the trigger now.

Below are some questions that interest me, and are left as exercises to the reader to illustrate my point (warning — the questions are intentionally “fluffy” and may contain traps or lack information you will have to estimate yourself, but they should all be feasible):

1. I would like to buy a quality digital camera, but I can’t quite choose between a particular DSLR and a mirrorless. Both models I’m interested in cost \$2000. I’m worried that there’s a 20% chance I’ll end up with a camera that I never use because I don’t like it, and therefore have a \$2000 device which is essentially useless to me. I value my spare time at roughly \$25/hour. On financial reasons alone, how much time could I reasonably spend on doing research before making my decision?
2. A surveying business is looking at buying another \$1million sensor platform, and the manufacturer has just introduced a new model for the same price. Theoretically, the new model should allow the business to perform surveys cheaper, saving \$100,000 each year over a 5 year lifespan, compared to the old model. However, it is currently unknown if the sensor is compatible with the current system and the engineers worry that if it isn’t, it could take up to 6 months to integrate properly, leading to costs of up to \$250,000 in lost revenue and integration costs. Should the business buy the new sensor, the old sensor, or spend some resources on evaluating compatibility?
3. The FDA is asked for approval for a new drug B to replace drug A. Drug A is an important drug and in total represents a 1 billion dollar market. While drug B is half the cost of A, its efficacy is less well known — studies suggest with 95% confidence that it actually works. What are some strategies the FDA could take for the benefit of society? On purely financial grounds, roughly how much money might we consider worthwhile for society (as a whole) to spend undertaking further trials to lift confidence in drug B to an “acceptable” level, versus putting up with the higher cost of drug A?
4. Bonus question, because why not: On the other hand, if the study actually costs that much to the manufacturer, how long would it take the makers of drug B to actually make a profit (in a competitive market, presumably they could charge a price nearer to drug A to recover this cost)? If the manufacturers of drug B wanted a positive ROI in 5 years, now how much would they be willing to spend on the trial?
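To show the flavour of reasoning I have in mind (without spoiling the traps), here is one way question 1 might be attacked, under the extra assumption, which is mine and not stated in the question, that research would fully resolve whether I’d like the camera:

```python
# Sketch for question 1, under one set of assumptions (not "the" answer):
# if research fully resolves the uncertainty, the most it can be worth
# is the expected loss it prevents.
price = 2000.0       # cost of either camera
p_regret = 0.20      # chance of ending up with a useless camera
hourly_rate = 25.0   # value of my spare time

expected_loss = p_regret * price             # the uncertainty tax: $400
max_research_hours = expected_loss / hourly_rate
print(max_research_hours)  # 16.0
```

So on these assumptions, anything up to about two working days of research breaks even. In reality research only *reduces* the regret probability rather than eliminating it, which is one of the traps the question hides.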

### Closing words

I think the take-home statement here is that it pays (literally) to know what you don’t know. It seems feasible to use statistical modelling to quantitatively describe how much money certain types of information are worth. This in turn may help decision makers determine rigorously when it is right to delay decisions and allocate resources towards decreasing uncertainties, and when it is right to be decisive. Conversely, if you neglect to take this approach, the universe will impose a tax on your ignorance — an “uncertainty tax”. By attaching a raw dollar value to this, we might hope to persuade decision makers to invest resources into rigorous information gathering in appropriate situations, rather than relying on instinct alone.

Of course, to achieve that goal we’d need tools and techniques at the disposal of the decision maker, like the inequality above. I’m very interested in learning more about this topic, and what is out there in use, in the wild, today!