Useful and Useless Predictions

WanderData
Published in BuzzRobot
4 min read · Nov 19, 2017

Recently, I wrote about the new-ish Google Flights Predictor, which was picked up by Becoming Human. For reference, here’s the specific prediction:

“Likely”

After thinking about the prediction some more, I’m impressed that Google doesn’t add some meaningless likelihood to the fare increase — like, there’s a 73% chance of an increase. This, in turn, made me think about fantasy football and (unfortunately) politics:

On October 23 of last year, all of the players in my fantasy football matchup had played their hearts out. I had an unusually good week from Matt Forte (100 rushing yards, two touchdowns!) and from the Philadelphia defense. My opponent (I'll call them the Grumps) was substantially behind going into the big Sunday night matchup between Seattle and Arizona. In that game, the Grumps still had Arizona kicker Chandler Catanzaro and running back David Johnson, plus Seattle wide receiver Doug Baldwin, left to play. At the time, I led 118 to 71. Since Yahoo! projected the Grumps' total to be 98, and I had already exceeded my projected total of 101, whatever algorithm they use put my chance of winning at something moderately high, around 64%.

Then, Seattle and Arizona settled into their weird 6–6 overtime tie. Johnson slowly accumulated carries, while Catanzaro scored only one boring field goal in the second quarter. The teams were just beating the hell out of each other, and with the game tied 3–3 into the fourth quarter, my Yahoo! odds of victory crept higher, until in the closing minutes Yahoo! claimed a 100% probability of my impending victory. Yes! I could surely sleep soundly. However, now intrigued by my 100% guaranteed Yahoo! win, I left the game on (or maybe I was doing laundry).

The game went into overtime, and when Catanzaro's chip-shot field goal bounced off the uprights, my winning percentage dropped into the mid-60s. Johnson kept racking up those carries (and points). The Grumps' point total crept higher, buoyed by a successful Catanzaro kick. However, as time wound down (again), my win percentage grew higher, and I eked out a 118–113 victory.

The next day, FiveThirtyEight estimated Hillary Clinton's chances of winning the presidency at 86.2%. The New York Times estimated she had a 93% chance of winning. I looked at those predictions and wondered (1) how much more robust those models were than the Yahoo! fantasy football winning percentage, and (2) why we care, generally, about hyper-specific, official-looking, statistical predictions.

Lacking the energy (and ability?) to examine the former, I'll spend a few minutes discussing the latter. What is a useful prediction? Well, for me, it's (1) a well-founded (2) estimate of (3) something that will happen that is (4) important to me and (5) affects me, delivered (6) within a reasonable timeframe (7) for me to make an informed decision based on that estimate.
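The seven-element rubric above can be sketched as a simple checklist. This is purely my illustration: the criterion names and the pass/fail structure are hypothetical, not a formal model from anywhere.

```python
# Hypothetical sketch of the seven-element "useful prediction" rubric
# as a checklist. Names and structure are illustrative assumptions.

RUBRIC = [
    "well-founded",
    "is an estimate",
    "of something that will happen",
    "important to me",
    "affects me",
    "within a reasonable timeframe",
    "lets me make an informed decision",
]

def failed_criteria(checks):
    """Return the rubric criteria a prediction fails.

    `checks` is a list of seven booleans, one per rubric element,
    in order. An empty result means the prediction is useful.
    """
    return [name for name, passed in zip(RUBRIC, checks) if not passed]

# Example: the Yahoo! win-probability bot arguably fails only the last
# criterion, since no substitutions were possible once games started.
yahoo = [True, True, True, True, True, True, False]
print(failed_criteria(yahoo))  # → ['lets me make an informed decision']
```

The point of framing it this way is that a prediction isn't useful or useless overall; it fails on specific, nameable criteria, which is how the examples below are evaluated.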

For example, take a long-range weather forecast. Let’s say I’m going backpacking, and I want to know what to take. However, even if the forecast is favorable, I don’t just take a t-shirt — obviously, such forecasts are unreliable (often no more accurate than a coin flip), and I would still take raingear, a shelter, and other useful items.

In my simplistic forecast rubric, the long-range forecast sort of satisfied all seven elements on paper, but in practice ended up failing the first: long-range weather forecasts are not necessarily well-founded. The Yahoo! fantasy football bot fails most of the elements; in particular, neither of us could make player substitutions once the odds started bouncing around, meaning we could make zero decisions based on the estimate. I suppose the estimate still had entertainment value, and I do enjoy following the Yahoo! bot's silly estimates.

Now, what was the daily (hourly, even) utility of the election forecast? In other words, however well-founded those estimates, what informed decisions could I make based on, for example, Trump having only a 12% chance of winning? Obviously, there were investment decisions, decisions which, if you followed the prevailing wisdom at the time, would have been wrong. If you sold prior to the election in anticipation of the widely predicted stock crash, you would have missed a cross-sector price spike in nearly all indexes. If you thought Clinton had a 93% chance of winning, and did nothing, how would you have handled the sudden rout in bonds?

What about other personal actions? Obviously there were business decisions to be made, but as we know, successful businesses try to plan for both outcomes, and in fact reacted fairly quickly to the actual result. Leave the country if your candidate didn't win? Plan your recount strategy? I'm talking about things that you could have done yourself, armed with the knowledge that Clinton (apparently) had an up to 93% chance of winning the election. To me, there wasn't a lot to do. Worse, those predictions may have affected the outcome of the election (by suppressing volunteering and/or turnout), and resulted in unnecessary surprise after the results were tallied. But they did really help drive the news cycle, and were therefore good for the media outlets that created them.

My takeaway is to apply my little rubric a little more carefully to predictions that I consume, keeping an eye in particular on (7).

Back to the Google Flights prediction: the smart people at Google understand that all I really need to know is that it's more likely than not that fares to Kona will rise within a day (which is what occurred). This prediction satisfied all the elements, which is cool.
