Henry Kim
1 min read · Mar 27, 2017

Excellent post. One thing I have always pondered is whether the observation by Kahneman and Tversky that humans are bad at probability, that we systematically overestimate the odds of rare events under all manner of circumstances, is actually a feature and not a bug, so to speak. Most routine and commonplace events are not that interesting and offer little to learn from, even if they make up most of the "data."

One unintentional lesson we have been drawing from ML-type algorithms that churn routine data into "insights" is that the "routine" is normal and even "morally superior," if only in the sense that it is predictable. In a sense, this is a backlash against the old, "human" folly of overestimating rare events and taking them to be the norm, only to be surprised that they are far too uncommon. Nowadays, we seem to be in too much awe of machine insights drawn from routine (i.e., predictable subsets of) data, and we pooh-pooh the human ability to spot rare but significant patterns.

A useful combination of human and machine abilities would be to use machines to work the routine while relying on human insight for the oddities, with enough perspective on both to gauge their respective limits. That is, there will be plenty of machine "errors" that are not really errors but potential sources of insight if examined carefully, whereas humans will need to recognize that their insights are far less frequently applicable than they would like to think.
