Models aren’t always accurate. But Nate Silver’s was a pretty good fit.
Subramanian Prasanna Kumar

When you speak about data and models it’s important to be really precise with your terms.

It’s hard for me to understand what you mean by “models aren’t always accurate.” It’s true, as the statistician George Box once quipped, that “all models are wrong, but some are useful.” And we know why that’s true: models are simplifications of reality, so they’re inherently wrong. But that’s more a philosophical point than anything else.

On the other hand, accuracy is a measure of how close measured or predicted values are to known reference values. In the most recent iteration of Nate Silver’s algorithm, in 2016, his predictions were frequently inaccurate: the predicted winners often differed from the actual winners (Bernie Sanders winning in Michigan being the clearest case), and the model performed worse than it had in previous election cycles, when his predictions were largely successful. Either way, no serious analyst should settle for the vague claim that “models aren’t always accurate.” How accurate a model must be is a cost-benefit question: how often is it wrong, and what is the impact when it is?
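To make that definition of accuracy concrete, here’s a minimal sketch that scores winner calls against actual outcomes. The states and results are illustrative placeholders (only the Michigan case reflects what actually happened), not 538’s real 2016 predictions:

```python
# Hypothetical example: accuracy as the fraction of state-level
# winner calls that matched the actual outcome. These entries are
# illustrative, not 538's real 2016 forecast data.
predicted = {"Michigan": "Clinton", "Ohio": "Sanders", "Florida": "Clinton"}
actual    = {"Michigan": "Sanders", "Ohio": "Sanders", "Florida": "Clinton"}

correct = sum(predicted[s] == actual[s] for s in predicted)
accuracy = correct / len(predicted)
print(f"accuracy = {accuracy:.2f}")  # 2 of 3 calls correct -> 0.67
```

The cost-benefit point follows directly: whether 0.67 is acceptable depends entirely on what a wrong call in any single state costs you.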

Second, how are you defining “a pretty good fit”? Fit is defined by how well a curve or system explains the relationship between model inputs and outputs (in other words, it helps us understand what we’re gaining and giving up by simplifying reality into a mathematical expression). Nate Silver’s election prediction algorithm used a Monte Carlo simulation to predict who would win the election. I don’t recall 538 ever providing details about how well the algorithm performed in predicting known results, so I’m curious what values you looked at that led you to conclude the fit was pretty good.
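For readers unfamiliar with the technique: a Monte Carlo election forecast samples many hypothetical elections from per-state win probabilities and reports how often each candidate wins. The sketch below illustrates that general idea only; the state map and probabilities are invented inputs, and this is nothing like Silver’s actual model:

```python
import random

# A minimal sketch of the general Monte Carlo technique, not 538's
# actual model: each state's win probability is an assumed input,
# and we simulate many elections to estimate how often candidate A
# wins a majority of the electoral votes.
random.seed(0)

# Hypothetical states: (electoral votes, P(candidate A wins state))
states = [(16, 0.55), (29, 0.48), (18, 0.60), (20, 0.50)]
TOTAL = sum(ev for ev, _ in states)  # 83 electoral votes in this toy map

def simulate_once():
    """Draw one simulated election; return candidate A's electoral votes."""
    return sum(ev for ev, p in states if random.random() < p)

N = 10_000
a_wins = sum(simulate_once() > TOTAL / 2 for _ in range(N))
print(f"Candidate A wins {a_wins / N:.1%} of {N} simulated elections")
```

Note what this kind of model outputs: a win probability, not a goodness-of-fit statistic. That’s exactly why a claim about “fit” needs to say which observed values the simulation was checked against.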
