A Forecaster’s Dark Secret

The Value Of A Better, Or Just New, Perspective

Decision-First AI
Published in Course Studies
Mar 12, 2019 · 4 min read


Yesterday’s article about the World of Dance should have been one of those light-hearted, bridge-building, more pop than science articles. It was… until the feedback and questions started rolling in. A few centered around a very interesting question.

“You spent years forecasting. How often have you built prediction models in business that actually forecast something other than what the client realized?”

For those who didn’t read yesterday’s article, it was inspired by my statement that rather than forecasting the contestants’ scores (based on their performance, the basis of the score), I was forecasting the judges’ scores (based on their feedback and anchoring bias). Had I ever done that in the business world?

“Yes. Not often. But almost always the most valuable, accurate, and actionable forecasts I created.”

Let me give an example.

In the credit world, every company builds a roll rate forecast. Credit portfolios segment their delinquent accounts into “buckets”. The percentage of dollars or accounts that are not collected during a cycle and subsequently “age” into the next bucket is known as a roll rate. Collections teams and CCOs love these reports. They also love to base their forecasts on them.

My early career was built on a simple realization: these things are nonsense (that will offend a LOT of people). Let me explain. I noticed that if you calculated the cumulative roll rate, the percentage of accounts or dollars rolling all the way from good to charged off (lost), there was little volatility that could not be explained by other things, like seasonality and the risk criteria in place when the customer was acquired. This was true of credit cards, mortgages, and business loans. It was true at multiple institutions and on every size of loan I ever encountered.
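To make the cumulative roll rate concrete, here is a minimal sketch. The bucket names and per-stage rates below are hypothetical examples for illustration, not figures from any real portfolio:

```python
# Illustrative sketch: per-bucket roll rates for a hypothetical credit
# portfolio. Each value is the fraction of balances that "roll" (age)
# into the next delinquency bucket during one cycle.
roll_rates = {
    "current -> 30dpd": 0.02,      # 2% of good balances go delinquent
    "30dpd -> 60dpd": 0.40,
    "60dpd -> 90dpd": 0.55,
    "90dpd -> charge-off": 0.70,
}

# The cumulative roll rate -- the share of good balances that roll all
# the way from current to charge-off -- is the product of the stage rates.
cumulative = 1.0
for stage, rate in roll_rates.items():
    cumulative *= rate

print(f"Cumulative good-to-charge-off roll rate: {cumulative:.4%}")
# 0.02 * 0.40 * 0.55 * 0.70 = 0.00308, i.e. about 0.31%
```

The point of the anecdote is that this product stays stable even when the individual stage rates swing around, because management keeps shuffling collectors between buckets.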

It was caused by something very common in forecasting: human response and expectation. Because collections teams loved their roll rates, they acted to make them meaningless. In a collection shop, collectors are assigned to buckets, but this is not a lifetime commitment. Collections management earns its pay moving collectors from one bucket to the next. If last cycle saw high roll rates in one bucket, the next cycle would see more collectors assigned to bring that rate down. Where would they be taken from? Buckets that were experiencing better-than-average rates. Everything quickly regressed to the mean.


It shouldn’t take much perspective or imagination to see that this happens all over the corporate world. Wall Street speculators and investors spend endless cycles trying to combat this effect.

Ken Fisher, legendary investor and founder of Fisher Investments, bases his entire strategy on this idea (and benchmarking… but that is another article). In other words, many people who make a habit of forecasting better than others do so specifically because they are looking at something different (or differently) than everyone else.


Sports & Gambling Use It, Too

I learned the technique from Jai Alai. No, I never played. I have never seen more than a brief highlight of a match. But Steven Skiena wrote a great book about betting on it. It should not surprise you by now that he had no interest in forecasting the winner of a match.

But sports also shows us that this secret can be short-lived. Sabermetrics were all the rage in the world of baseball… and consequently, don’t really seem to be anymore. While the Oakland A’s were able to capitalize briefly, once everyone started paying attention, these models stopped being predictive. Their decline was probably accelerated by the fact that these metrics were merely lesser known; they were not perspective-altering.

Over the decades, I have built forecasts that focused on small subsets of a population — recognizing that 20% might drive 80% of the difference. I have created forecasts that predict a population’s blend or ratio of active, credit-savvy, or responsive customers, although that wasn’t what the client had asked for…

Complicated forecasts rarely maintain predictive power, but neither do simple models that the client understands. Such models too readily influence the client to take actions that limit, change, or even destroy the model’s ability to predict outcomes. Build your models with innovative insight, then be sure to provide the reporting your client expects. This will give your model longer-lasting predictive power.

If you really think the insight you have is actionable and the company would benefit from understanding it, you have two choices. You can find an indirect (or tricky) way to make it happen (you won’t get credit). Or you can educate them on what the model actually does (your model will likely stop working). Either way, at least the numbers should improve.

Thanks for reading!
