The Wisdom of Uncertainty
Buster Benson

This is probably over-simplifying. In machine learning there’s this idea of the bias/variance trade-off. Bias and variance are two classes of errors that a machine learning model can make.

Variance errors are kind of like the errors Bot A is prone to: seeing significance in random variation where there is no real pattern.

Bias errors are the errors Bot B makes: oversimplifying, and failing to capture meaningful nuances in the data set.

Theoretically, you can work out the optimal trade-off between the two for a given data set and minimize total error. In reality, accuracy might not be a thing humans can (or should?) optimize for.
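
To make that concrete, here’s a minimal sketch in Python with NumPy. Everything in it is my own invention, not from the post: a sine curve stands in for reality, and polynomial fits of different degrees stand in for the two bots. A degree-1 fit plays Bot B (too rigid, high bias); a degree-12 fit plays Bot A (too twitchy, high variance).

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "reality": a sine curve. Each trial, the model only
    # sees a noisy sample of it.
    def true_signal(x):
        return np.sin(x)

    x_train = np.linspace(0, 3, 40)
    x_test = np.linspace(0, 3, 100)

    def bias_and_variance(degree, n_trials=200):
        """Fit a degree-d polynomial to many noisy samples of the signal;
        return the squared bias and the variance of its predictions,
        averaged over the test points."""
        preds = []
        for _ in range(n_trials):
            y_noisy = true_signal(x_train) + rng.normal(0, 0.3, x_train.shape)
            coeffs = np.polyfit(x_train, y_noisy, degree)
            preds.append(np.polyval(coeffs, x_test))
        preds = np.array(preds)
        mean_pred = preds.mean(axis=0)
        bias_sq = np.mean((mean_pred - true_signal(x_test)) ** 2)
        variance = np.mean(preds.var(axis=0))
        return bias_sq, variance

    # Degree 1 = Bot B (underfits: high bias, low variance).
    # Degree 12 = Bot A (overfits: low bias, high variance).
    for degree in (1, 4, 12):
        bias_sq, variance = bias_and_variance(degree)
        print(f"degree {degree:2d}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")

Run it and the line fit shows large squared bias with tiny variance, the degree-12 fit shows the opposite, and something in between lands near the sweet spot. That sweet spot is the “optimal trade-off” in the theoretical sense, but only for this one data set and this one error measure.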

So. Hm?

(See: http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff)
