Machine learning is better than stats in business

I’m working on a deep dive into how we evaluate the performance of machine learning algorithms and how that performance compares to classical statistical approaches. Before diving too deeply, though, I want to establish a baseline for why we would even choose a machine learning approach over other approaches.

“Correlation is not causation!”

You have probably heard this little chestnut before. If not, let me explain what this often-used maxim is trying to remind us about when we analyze data.

In an experimental setting we often see treatments and outcomes that are correlated, meaning they happen together, but seeing two things happen together does not mean that one causes the other.

A funny example of a really odd correlation: it turns out that U.S. spending on space, science, and technology appears to “cause” increases in deaths by strangulation.

For more fun, see: http://www.tylervigen.com/spurious-correlations

Even though this has impressive explanatory prowess (an R² above 0.99), it’s obviously bunk. Something else is at play. I wouldn’t submit a paper based on the above data, but I might try selling coffins to untapped markets with it.
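To see how easy it is to manufacture an R² like that, here’s a minimal sketch on made-up data (the numbers below are synthetic stand-ins, not the actual spending or mortality figures): two series that merely drift upward together over time will regress onto each other almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2010)

# Two unrelated quantities that both happen to drift upward over time
# (purely synthetic stand-ins, not the real spending or mortality data).
spending = 20 + 0.8 * (years - 2000) + rng.normal(0, 0.1, len(years))
deaths = 5000 + 210 * (years - 2000) + rng.normal(0, 40, len(years))

# Ordinary least-squares fit of one series on the other.
slope, intercept = np.polyfit(spending, deaths, 1)
predicted = slope * spending + intercept

# R^2: the share of variance the fit "explains".
ss_res = np.sum((deaths - predicted) ** 2)
ss_tot = np.sum((deaths - deaths.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")  # very close to 1, yet neither series drives the other
```

The fit is nearly perfect because time is doing all the work, which is exactly the trap the spurious-correlations site is poking fun at.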

If we were running an experiment, we would try to control for everything that could possibly impact the outcome, in hopes that at the end we could say our treatment (e.g. presence of gene A, drug dosage, etc.) was responsible for changes in that outcome, and then publish a paper that says A causes B.

When A is a drug and B is “extends life for 100 years,” that’s very exciting. But usually A doesn’t cause B, and we have to try a new experiment.

But business is not often able to exert such tight control over all the things that could impact an outcome. Further, more often than not we have zero control over most of what impacts an outcome. Rather, we’d like to find what settings or situations lead to certain outcomes.

In short, in business we aren’t running scientific experiments, so correlations that work, at least for a while, are usually good enough.

When we do a regression we want to evaluate performance; essentially, we want to be able to say that our model is sound enough. And it matters that when we’re using a regression model to predict, say, lead quality (e.g. expected revenue, likelihood to convert, etc.), we aren’t running a tight experiment. If it turns out that our “treatment” (e.g. the offer presented) is highly correlated with other factors we may not be able to collect (e.g. demographics, income level, etc.), it almost doesn’t matter, because what we’re really looking for are reductions in the complexity of our decision space that help us make decisions that make us money.
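To make that concrete, here’s a hedged sketch of what that evaluation looks like (the lead features, numbers, and model below are invented for illustration): we score the lead-quality regression purely on how well it predicts held-out revenue, and we don’t lose sleep over the fact that the “offer” input is tangled up with an unobserved driver like income.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical lead data: the offer shown is correlated with income,
# which we pretend we are unable to collect.
income = rng.normal(60, 15, n)                               # unobserved driver
offer = (income + rng.normal(0, 5, n) > 65).astype(float)    # correlated "treatment"
pages_viewed = rng.poisson(4, n).astype(float)

# Revenue depends mostly on income, but offer stands in for it nicely.
revenue = 2.0 * income + 30 * offer + 5 * pages_viewed + rng.normal(0, 20, n)

# We can only model what we can actually collect.
X = np.column_stack([offer, pages_viewed])
X_train, X_test, y_train, y_test = train_test_split(X, revenue, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("held-out MAE:", mean_absolute_error(y_test, model.predict(X_test)))
# The coefficient on offer soaks up part of the unobserved income effect, so it is
# a poor causal estimate, but the held-out error is what pays the bills.
```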

If we are publishing a paper, unsound conjecture based purely on correlation means rejection. In business, a consistent trend that helps us make decisions that positively impact our bottom line is called a win.

The problem with statistics

Statistics is a worthy and wonderful field of knowledge. A mentor in grad school would often remind me that statistics is the protector of the scientific method, and when done with integrity it truly is.

Statistics was born out of the need to make inferences from very small amounts of data, because when each data point is a case of a rare form of cancer you can’t simply “go get more data.”

For all but a few use cases in a business, statistics is only tangentially useful as a means of analyzing trends. The core value of statistics, identifying likely causation, is simply more than the job requires.

This doesn’t mean we shouldn’t use statistical methods for trend analysis but it does mean that we should be constantly on the lookout for methods better designed for our needs.

Machine Learning and Business

Machine Learning is a field of computer science that has borrowed methods from statistics and in some cases co-discovered similar methods for inferring outcomes. Statistics is the process whereby we confidently establish that A causes B. In machine learning, however, the goal is simply to identify B. If we can use A to do that, great. If not, we can try C. If A and C together identify B more reliably than either alone, we will use them both. If not, we’ll try something else.

When a machine correctly identifies a handwritten digit, recognizes that an uttered sound should be written as C-A-T, or finds a reliable trend in a cloud of data that “predicts” revenue, there is no committee of reviewers standing over us to reject the inference because it is based only on correlation and not on a sound explanation of causation.

This may seem like an esoteric distinction but it’s very important. Most of us who use applied statistical measures in a business setting do so because we were taught regression analysis in our university studies, either in a stats course or a business course, or because we found out we could drop a regression line in Excel.

We may have developed an intuitive or even robust technical understanding of what a regression is doing when it makes sense of a cloud of data points, but we are only trusting in the comfort of “explainability” as a defense of our actions when we rely on things like p-values, R², and other metrics to justify our use of A and not C when “predicting” B.

Determining causation is what p-values and other such metrics are designed to help a researcher do, but in business and machine learning causation is just too high a bar for what it can net us. Many of us have profited greatly from identifying trends that don’t have an ounce of explainability when it comes to causation, and there’s no shame in that.
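Here’s a small sketch of the difference in posture, on synthetic data with arbitrary names: the statistical workflow asks whether each coefficient is significantly different from zero, while the predictive workflow simply asks whether adding a candidate input like C lowers out-of-sample error.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
A = rng.normal(size=n)
C = rng.normal(size=n)                    # candidate input, unrelated to the outcome
B = 3 * A + rng.normal(0, 1, n)           # outcome driven by A

# Statistical posture: inspect p-values and R^2 as an explanation.
ols = sm.OLS(B, sm.add_constant(np.column_stack([A, C]))).fit()
print(ols.pvalues)                        # is each coefficient "significant"?
print(ols.rsquared)

# Predictive posture: does adding C actually improve out-of-sample error?
for name, X in [("A only", A.reshape(-1, 1)),
                ("A and C", np.column_stack([A, C]))]:
    mae = -cross_val_score(LinearRegression(), X, B,
                           scoring="neg_mean_absolute_error", cv=5).mean()
    print(f"{name}: cross-validated MAE = {mae:.3f}")
```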

So if we’re only concerned with accurately identifying profitable trends in data, much as a good robot is one that can reliably avoid obstacles in its path, is there any need for the ceremony of metrics that we co-opt as measures of trend accuracy when those metrics were intended to measure how well we explain causation?

No.

Machine Learning is exactly what the name says: the ability of a machine to learn to do something without having to be explicitly programmed to do it. A robot has zero incentive to seek causation when correlation will suffice. In pursuit of the goal of accurately identifying things, machine learning, as a discipline, has come to see that a few things matter more than others:

  1. Accuracy trumps Explainability
  2. Accuracy trumps Simplicity (Sorry Occam)
  3. Generalization trumps Explanation
  4. Data trumps Sophistication — machine learning methods may seem complicated but they are simple-minded brutes compared to the sophisticated elegance of the central limit theorem
  5. Application trumps Theory

While the above list should be enough to raise the hackles of any researcher, for business it reads almost like a rallying cry.
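As a minimal illustration of points 1 and 2 (synthetic data, an arbitrary pair of models), a model that offers no tidy explanation can simply win on held-out accuracy, and held-out accuracy is the score the business keeps:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(2000, 2))
# A nonlinear, interaction-heavy relationship plus noise.
y = np.sin(X[:, 0]) * X[:, 1] + 0.3 * rng.normal(size=len(X))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("linear model (explainable)", LinearRegression()),
                    ("random forest (opaque)", RandomForestRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: held-out MAE = {mae:.3f}")
# The forest offers no tidy coefficients to interpret, but its held-out error is lower.
```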

Conclusion

In short, machine learning methods are ideal for business because they are perfectly designed to meet our need: accurately identifying trends.

So we should be unsurprised to find that machine learning metrics explain little other than how accurate a model is and which data is responsible for boosting that accuracy.
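For the curious, here is one last hedged sketch (again on invented data) of what “which data boosts accuracy” looks like in practice: permutation importance measures how much held-out performance drops when each input is shuffled, and it makes no claim whatsoever about causation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1500, 3))                               # three made-up inputs
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, len(X))   # the third is irrelevant

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each input's values are shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"input {i}: importance = {imp:.3f}")
```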