In Defense of Ignorant Prediction: My Review of Tom Nichols’ ‘The Death of Expertise’

David Eil · Published in Extra Newsfeed · Apr 12, 2017 · 11 min read

Tom Nichols’ new book The Death of Expertise presents a defense (not impassioned, of course, but dispassionate, as an expert should be) of expertise. The book defines expertise, suggests how to identify it, chronicles the public’s declining deference towards it, and explores the causes of that decline. The book is needed and good. Tl;dr: If you’re the kind of person who only reads the first paragraph of articles, this book is definitely for you. Please click on the link above and buy it.

Now that it’s just us folks who read past the first paragraph, I have some more critical thoughts on the last two chapters of the book, which discuss the use of expertise for prediction and policy-making.

Nichols claims that “the purpose of science is to explain, rather than predict.” My own field is economics, the most dubious of all sciences. But at least to economists, this claim is bizarre. Or perhaps a better way to say it is that there is no separating explaining and predicting: if your explanation of a phenomenon is correct, then you must be able to predict something.

Explanation seems like innocuous, “here’s what happened” yeoman’s work. But it’s not — just as a novelist has an infinite number of details in her imagination and must choose the right subset to seed the reader’s mind, the scientist must identify which facts are the important ones in explaining whatever has occurred. Sometimes this is called “giving context.” This task is often difficult. Failures can be comical, like the (incorrect) observation that hospitals must be bad for your health because so many people die there. It’s a correct fact that people are disproportionately likely to die in hospitals. The explanation that going to a hospital is bad for your health is incorrect.

One way to judge the correctness of explanations is by predicting “out of sample events” — that is, events besides the ones you used to formulate your explanation. For instance, to test the explanation that going to a hospital is bad for your health, we might take 200 people, send 100 of them to the hospital, let the other 100 go about their business, and see if the hospital-bound 100 end up in poorer health. That is, we take a prediction based upon our explanation, and see how well it works. If the prediction turns out wrong, it means our explanation of the observed phenomenon is probably wrong.
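To make that concrete, here’s a minimal sketch in Python of the logic of such an out-of-sample test. The recovery rates are invented purely for illustration; nothing here comes from an actual study.

```python
import random

random.seed(0)

# Hypothetical illustration (numbers invented): under the "hospitals are bad
# for you" explanation, the 100 people sent to the hospital should end up in
# worse health than the 100 who weren't. Suppose the true effect of hospital
# care is to *raise* the chance of recovery from 0.60 to 0.75.
P_RECOVER_NO_HOSPITAL = 0.60
P_RECOVER_HOSPITAL = 0.75

hospital_group = [random.random() < P_RECOVER_HOSPITAL for _ in range(100)]
control_group = [random.random() < P_RECOVER_NO_HOSPITAL for _ in range(100)]

print(f"Recovered (hospital):    {sum(hospital_group)}/100")
print(f"Recovered (no hospital): {sum(control_group)}/100")

# The "hospitals are bad for your health" explanation predicts fewer
# recoveries in the hospital group. If the data show the opposite,
# that prediction - and the explanation behind it - is probably wrong.
```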

The economist Milton Friedman, in his classic essay “The Methodology of Positive Economics,” stated an extreme version of this link between explanation and prediction, which is that not only is prediction a useful way to evaluate explanations, it’s the only way to evaluate explanations:

The ultimate goal of a positive science is the development of a “theory” or “hypothesis” that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed…theory is to be judged by its predictive power for the class of phenomena which it is intended to “explain.”…The difficulty in the social sciences of getting new evidence for this class of phenomena and of judging its conformity with the implications of the hypothesis makes it tempting to suppose that other, more readily available, evidence is equally relevant to the validity of the hypothesis — to suppose that hypotheses have not only “implications” but also “assumptions” and that the conformity of these “assumptions” to “reality” is a test of the validity of the hypothesis different from or additional to the test by implications. This widely held view is fundamentally wrong and productive of much mischief.

I think most economists today reject this view — a clearly ridiculous explanation that gets the most predictions right has probably only gotten lucky. If there are other available explanations that seem to make more sense and do pretty well on predictions, those are probably better. But still, predictions are an important way to evaluate explanations. If an explanation yields terrible predictions, it’s probably not a very good explanation.

Nichols seems to basically concede this later in the chapter, in two ways. First, by endorsing replication as a test of scientific knowledge: “The gold standard of any scientific study is whether it can be replicated or at least reconstructed.” A study that successfully replicates is an extreme version of an explanation that successfully predicts. No replication is exact. For one thing, it’s conducted at a different time. It’s probably also at a different place, with a different investigator, different subjects (if a social science experiment), different weather, and so on. The original study’s explanation of the facts has implicitly claimed that these differences do not matter. That explanation predicts that if the salient circumstances described as the experimental methods are recreated, then the results will be the same (with some sampling error).

When a study fails to replicate, the original authors may claim that some part of their original experiment that they didn’t fully describe — something special about their subjects, or the time, etc. — was important for producing their results, and that’s why their prediction that the study would replicate ended up being wrong. That is, they expand their explanation to include things that are more specific to their particular version of the experiment. This reveals two things: 1) their original explanation, which did not describe the importance of these peculiar circumstances, was wrong; 2) their new explanation is so specific to the circumstances of their original experiment that it makes no predictions outside of them. It’s an explanation that doesn’t make predictions. You might ask whether such a highly specific explanation that doesn’t make predictions has much scientific value. And now you understand why science is about predicting as well as explaining.

The second way Nichols admits that prediction is a part of science is that after conceding that:

The question is not whether experts should engage in prediction. They will….Rather, the issue is when and how experts should make predictions, and what to do about it when they’re wrong

and after saying that “failed predictions do not mean very much in terms of judging expertise,” he finally admits that “calling experts to account for making worse predictions than other experts is a different matter.” That is, predictions are a fair way to judge between experts. He cautions however:

…to phrase questions as raw yes-or-no predictions, and then to note that laypeople can be right as often as experts, is fundamentally to misunderstand the role of expertise itself….The goal of expert advice and prediction is not to win a coin toss, it is to help guide decisions about possible futures. To ask in 1980 whether the Soviet Union would fall before the year 2000 is a yes-or-no question. To ask during the previous decades how best to bring about a peaceful Soviet collapse and to alter the probability of that event (and to lessen the chances of others) is a different matter entirely.

I think Nichols gets this “yes-or-no” question wrong, and in a significant way. It’s true that experts often get asked to make these “yes-or-no” predictions, and that these aren’t very useful. But experts are still quite useful on these questions — you just have to ask them how likely yes is.

As an example of a failed yes-or-no prediction, Nichols uses Nate Silver’s dismissal of Donald Trump’s chances in 2016. Nichols notes that Silver’s prediction that Trump would lose the Republican nomination was wrong, and that Silver then admitted that the assumptions underlying his prediction were wrong. But his error, according to Silver, was relying too much on conventional wisdom (or, as the title of the linked article describes it, “acting like a pundit”) and too little on what his model was telling him.

Silver fixed that problem in the general election. You might be surprised to hear that, since Silver’s 538 site predicted that Hillary Clinton would win, just like everyone else. But — and this is the important distinction — Silver gave Trump a better chance than almost anyone else. While some models gave Trump as little as a one percent chance on Election Day, Silver gave Trump a 29 percent chance. That’s real expertise.

Is it valuable expertise, though? Does it matter whether Trump has a one percent chance or a 29 percent chance, if the answer is still that Clinton will probably win? Yes, it absolutely matters. Say you can buy an asset that pays $1,000 if Clinton wins but loses $10,000 if Trump wins. Whether Trump has a one percent chance or a 29 percent chance determines whether this investment is a good one or a bad one. Or say you’re Barack Obama and you’re sitting on explosive evidence that the Trump campaign may have collaborated with the Russian government. If Trump has only a one percent chance even without the evidence coming out, you might want to keep it secret, since releasing it would create partisan discord and controversy for no real gain. On the other hand, if Trump has a 29 percent chance, releasing the evidence may be worth it to avoid the fairly high chance of the catastrophic circumstance we all now find ourselves in.
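To see the arithmetic, here’s a minimal sketch using the hypothetical asset above. The payoffs come from the example; the code and function are mine.

```python
# Expected payoff of a bet that pays $1,000 if Clinton wins and loses
# $10,000 if Trump wins, under different probabilities of a Trump win.

def expected_value(p_trump: float, win_payoff: float = 1_000,
                   loss: float = -10_000) -> float:
    """Expected payoff of the bet given a probability that Trump wins."""
    return (1 - p_trump) * win_payoff + p_trump * loss

for p in (0.01, 0.29):
    ev = expected_value(p)
    verdict = "good bet" if ev > 0 else "bad bet"
    print(f"P(Trump wins) = {p:.0%}: expected value = {ev:+,.0f} dollars ({verdict})")

# P(Trump) = 1%:  0.99 * 1,000 - 0.01 * 10,000 = +890   -> good bet
# P(Trump) = 29%: 0.71 * 1,000 - 0.29 * 10,000 = -2,190 -> bad bet
```

The same probabilities, run through the same payoffs, flip the decision. That is what the 1-percent-versus-29-percent distinction buys you.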

Or take an anecdote that Nichols relates which is meant to reveal the silliness of yes-or-no prediction:

There’s an old joke about a British civil servant who retired after a long career in the Foreign Office spanning most of the twentieth century. “Every morning,” the experienced diplomatic hand said, “I went to the Prime Minister and assured him there would be no world war today. And I am pleased to note that in a career of 40 years, I was only wrong twice.” Judged purely on the number of hits and misses, the old man had a pretty good record.

This, again, misunderstands prediction and prediction evaluation. A yes-or-no answer every day isn’t very valuable (anyone can realize that every day the chances favor “no”). But providing the risk of world war every day would be both quite difficult and quite valuable. The Prime Minister would surely like to know whether the odds of world war that day are one in a hundred or one in ten thousand. The way to evaluate these predictions is to ask whether world war is in fact more likely to occur on the days when the expert predicts a higher chance of war. (Silver goes into this in depth in his book.)
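Concretely, the standard way to evaluate probabilistic forecasts is to check calibration (do events forecast at 10% happen roughly 10% of the time?) and to compute a proper scoring rule such as the Brier score. Here’s a minimal sketch with an invented forecast record:

```python
from collections import defaultdict

# Hypothetical forecast record (invented for illustration): each entry is
# (stated probability of the event, whether the event actually happened).
forecasts = [
    (0.01, False), (0.01, False), (0.01, False), (0.01, True),
    (0.10, False), (0.10, True), (0.10, False), (0.10, False),
    (0.50, True), (0.50, False), (0.50, True), (0.50, False),
]

# Calibration check: among forecasts made at probability p, the event
# should occur roughly a fraction p of the time.
by_prob = defaultdict(list)
for p, happened in forecasts:
    by_prob[p].append(happened)

for p in sorted(by_prob):
    outcomes = by_prob[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {p:.0%}: event occurred {observed:.0%} of the time "
          f"({len(outcomes)} forecasts)")

# Brier score: mean squared error of the stated probabilities; lower is better.
brier = sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```

A forecaster who says “one in ten thousand” every day and is right is rewarded by this kind of scoring; a forecaster who says “50–50” every day is not, even though both are technically “never wrong” in the yes-or-no sense.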

Finally, what about the remedy Nichols suggests, that the Soviet expert be asked “how best to bring about a peaceful Soviet collapse?” This question also involves prediction. The expert must predict that certain actions lead to a peaceful collapse (or at least make one more likely) and that other actions do not.

The reason I go on and on about this is that uncertainty is an important part of prediction — the expert has to be able to say how ignorant their prediction is. Oftentimes a prediction that’s correct on average — but overconfident — is worse than a prediction that’s incorrect on average. For instance, suppose that if the United States invaded and occupied Syria, the true probability of a successful campaign resulting in a stable regime were 55%, and the probability of failure — either outright defeat or a failed state following Assad’s ouster — were 45%. And suppose that in this case the United States would prefer not to invade. Which is more useful, an expert who predicts a 99% chance of success or an expert who predicts a 40% chance of success? It seems like the first one — she’s right that success is more likely. But really, it’s the second one, because her prediction leads the United States to make the correct decision of not invading. An important part of any prediction is how certain the expert is of it. Overconfident experts are dangerous, sometimes more dangerous than wrong experts.
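Here’s a minimal sketch of that decision problem. The payoffs are invented (the article gives none); they’re chosen only so that, at the true 55% chance of success, staying out is the better choice.

```python
# Invented payoffs, chosen so that at the true 55% success probability
# the correct decision is not to invade.
PAYOFF_SUCCESS = 1.0    # hypothetical gain from a successful campaign
PAYOFF_FAILURE = -2.0   # hypothetical cost of defeat or a failed state
PAYOFF_NO_INVASION = 0.0

def decision(p_success: float) -> str:
    """Invade only if the expected payoff beats staying out."""
    ev_invade = p_success * PAYOFF_SUCCESS + (1 - p_success) * PAYOFF_FAILURE
    return "invade" if ev_invade > PAYOFF_NO_INVASION else "don't invade"

print("correct decision at the true 55%:", decision(0.55))  # don't invade
print("acting on the 99% expert:        ", decision(0.99))  # invade (wrong call)
print("acting on the 40% expert:        ", decision(0.40))  # don't invade (right call)
```

The 99% expert is closer to “right” in the yes-or-no sense, but the 40% expert leads to the better decision: the error that matters is the error in the stated probability, not in the headline call.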

Nichols understands this on some level. Much of the book is dedicated to describing the dangers of lay people not appreciating their own ignorance. And he laments his own failed prediction — in his own field, as an expert — that Vladimir Putin would further democracy in Russia: “I rendered a definite opinion rather than taking the more patient, but less interesting, view that it was too early to tell.” But this dichotomy between “definite opinion” and “no opinion” is a false one. “Too early to tell” is not just a demurral (i.e., “I’m going to wait to make a prediction”) — “it’s too early to tell” is its own prediction! It is a prediction of the probabilistic type I describe above, that there’s some chance Putin will promote democracy in Russia, some chance he won’t. This is, in fact, valuable information. The United States might want to pursue one policy against a Putin who has a 50% chance of governing democratically, and a different one against a Putin with a 75% chance of the same.

This knowing ignorance can even qualify as high-level expertise. For instance, one of the most famous hypotheses in finance is the “Efficient Market Hypothesis,” which says that it’s impossible to predict stock returns using publicly available knowledge. This level of ignorance actually wins Nobel Prizes. It won’t get you rich quick, but it’ll at least save you the fees and hassle of investing in a high-cost mutual fund or trading actively yourself.
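As a toy illustration of the kind of unpredictability the hypothesis describes (simulated returns, not real prices, and only a sketch of the idea):

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Simulate daily returns as independent noise with a small upward drift -
# the situation the Efficient Market Hypothesis describes once public
# information is already reflected in prices.
returns = [random.gauss(0.0005, 0.01) for _ in range(5000)]

# Does yesterday's return predict today's? Under independence it shouldn't.
corr = statistics.correlation(returns[:-1], returns[1:])
print(f"correlation between yesterday's and today's return: {corr:+.3f}")
# ~0: knowing yesterday's return tells you essentially nothing about today's.
```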

There’s another reason that this kind of uncertainty is important to acknowledge. Not only could uncertainty change the optimal policy in the short term, it may be even more important in the long term, because the level of uncertainty determines the value of experimentation. For instance, suppose that US policymakers aren’t sure whether allowing charter schools improves public education or not. Then they should realize that there’s value in allowing some states to experiment with charter schools, even if they expect those experiments to fail — in failure, policymakers learn something. And it’s possible they’ll succeed, in which case policymakers will have learned something quite valuable. However, if policymakers only hear from experts that “charters don’t work,” then they may not experiment with them at all, not realizing that the expert opinion is rather uncertain.
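A back-of-the-envelope sketch of that value of experimentation, with invented numbers: suppose policymakers think there’s only a 30% chance charters work, so without a pilot they would do nothing.

```python
# Invented numbers for illustration: a 30% chance charters improve outcomes
# (worth +10 if adopted), a 70% chance they don't (worth -5 if adopted anyway).
# Doing nothing is worth 0.
P_CHARTERS_WORK = 0.30
VALUE_IF_WORK = 10.0
VALUE_IF_NOT = -5.0

# Without an experiment: adopt everywhere or not, based on expectations alone.
ev_adopt = P_CHARTERS_WORK * VALUE_IF_WORK + (1 - P_CHARTERS_WORK) * VALUE_IF_NOT
best_without_experiment = max(ev_adopt, 0.0)   # here: 0, i.e. do nothing

# With a (costless, perfectly informative) state-level pilot: learn the truth
# first, then adopt only if charters actually work.
ev_with_experiment = P_CHARTERS_WORK * VALUE_IF_WORK + (1 - P_CHARTERS_WORK) * 0.0

print(f"best expected value without experimenting: {best_without_experiment:.1f}")
print(f"expected value after running a pilot:      {ev_with_experiment:.1f}")
print(f"value of the experiment:                   "
      f"{ev_with_experiment - best_without_experiment:.1f}")
```

Notice that if the expert flatly says “charters don’t work” (a stated probability of zero), the computed value of experimenting is zero too, and the pilot never happens. The uncertainty is exactly what makes the experiment worth running.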

So what about fields, like macroeconomics, whose predictions can’t even beat very simple prediction rules like “this year’s growth will be the same as last year’s growth”? Do macroeconomists have no expertise? I would say three things. First, as I said above, as long as that ignorance is acknowledged, it can be useful. If you know that macroeconomists are bad at making predictions, don’t give any macroeconomic advice — expert or otherwise — too much credit. Just because the experts aren’t good at predictions doesn’t mean lay people are any better. So expect uncertainty. Second, macroeconomists may still be able to tell you which policies don’t work very well. For instance, a macroeconomist will tell you that the gold standard is a terrible idea. Since a gold standard is so far out of the range of current economic policies, this knowledge isn’t so useful in predicting economic outcomes in the near future. But it is useful in convincing policymakers to avoid a bad decision. Third, yes, macroeconomists don’t really have very much expertise. They’re very smart people, dedicated scholars, and know a bunch of fancy math tricks and lots of facts about the economy. But they do not understand it the way, say, chemists understand chemical reactions. On balance, I’m still glad that professional macroeconomists have more influence over economic policy than random people off the street. But I would rather do a Trading Places-esque substitution for Fed Chair than for my heart surgeon (although doctors too are probably overconfident). It’s not that macroeconomic expertise has died. It hasn’t been born yet.
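As an illustration of that benchmark test, here’s a sketch comparing a hypothetical expert forecast against the naive “same as last year” rule. The growth numbers and the expert’s calls are invented.

```python
# Hypothetical annual GDP growth (%) and invented expert forecasts (%).
actual_growth   = [2.5, 1.8, 2.9, 1.6, 2.3, 2.0]
expert_forecast = [3.1, 2.7, 1.5, 2.8, 1.4, 2.9]

# Naive rule: predict that this year's growth equals last year's.
naive_forecast = actual_growth[:-1]
actual = actual_growth[1:]
expert = expert_forecast[1:]

def mean_abs_error(forecasts, outcomes):
    """Average absolute forecast error, in percentage points."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

print(f"expert MAE: {mean_abs_error(expert, actual):.2f} points")
print(f"naive  MAE: {mean_abs_error(naive_forecast, actual):.2f} points")
# If the expert's error isn't smaller than the naive rule's, the expert's
# point forecasts add little - which is the complaint about macro forecasting.
```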

Of course, that’s no excuse for spreading macroeconomic nonsense like the alternative fact that unemployment increased under Obama, even though manipulated government stats don’t show it. These sorts of falsehoods and conspiracy theories are the focus of the first few chapters of Nichols’ book, and our society must somehow figure out an antidote to them. I doubt we’ll have any success in this endeavor, but I’m not very sure. Then again, I’m no expert.
