The Wisdom of Uncertainty
Buster Benson

This is probably over-simplifying. In machine learning there's this idea of the bias/variance trade-off. Bias and variance are two classes of errors that a machine learning model can make.

Variance errors are kind of like the errors Bot A is prone to: seeing significance in variety where there is no real pattern.

Bias errors are the errors Bot B makes: oversimplifying, and failing to capture meaningful nuances in the data set.

Theoretically, you can work out the trade-off that optimizes accuracy for a given data set. In reality, accuracy might not be something humans can (or should?) optimize for.

So. Hm?