Our Machines Now Have Knowledge We’ll Never Understand
David Weinberger

Models are tools that let us see past our biases. The power of machine learning lies not so much in the model itself as in how it weights the variables.

Paul Graham showed us that Bayesian spam filters let us see which words actually cause certain email messages to be classified as spam. It’s often not the words we think.
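A minimal sketch of that idea: train a naive Bayes filter on labelled messages, then rank words by their learned weights to see what the model actually keys on. The tiny corpus and the `spamminess` helper below are invented for illustration, not Graham’s actual implementation.

```python
import math
from collections import Counter

# Toy labelled corpus (illustrative only).
spam = ["win cash now", "free cash offer", "win a free prize now"]
ham = ["meeting at noon", "project status report", "lunch at noon"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)
spam_total = sum(spam_counts.values())
ham_total = sum(ham_counts.values())

def spamminess(word):
    # Laplace-smoothed log-odds that a word appears in spam vs. ham.
    p_spam = (spam_counts[word] + 1) / (spam_total + len(vocab))
    p_ham = (ham_counts[word] + 1) / (ham_total + len(vocab))
    return math.log(p_spam / p_ham)

# The highest-weighted words reveal what the filter actually learned --
# often not the words a human reader would have guessed.
ranked = sorted(vocab, key=spamminess, reverse=True)
print(ranked[:3])
```

Inspecting `ranked` is the point: the weights, not the model structure, carry the knowledge, and reading them off is how the filter lets us see past our own guesses about what makes a message spam.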

The failure of models, as in your example of the asthmatic pneumonia patients, likely reveals a knowledge filter between management and caregivers. Probably no one went to the floor nurse and asked for input on sorting patients; they went to the head of nursing, who likely hasn’t seen a patient in years.

Psychologists tell us that we learn more about a mental ability by studying people who lack it. The reason is that our model of human mental abilities is biased by our own perceived abilities.

Wanna see a lack of problem-solving skills? Put some horses, sheep, or cows — let’s say sheep — in a pen with an open gate, and throw some nice hay outside the pen, opposite the gate. Unless the sheep can see the food through the open gate, they cannot work out how to reach it, even though they were raised in that group of paddocks their entire lives. They can only choose the direct, visible path. We suffer the same bias from time to time: our biases prevent us from seeing the correct path. These models reveal our biases.
