Often this is followed up with an exposé about how they only reported x%, so that means the other (100 − x)% can also happen. It’s a perfect scenario; they can never be wrong! We should all be so lucky. Of course, this probabilistic argument may be valid, but it can cause so…
Poppycock! As Nate points out in the blog post you reference, he can validly be accused of being wrong if the actual result is not in the “cone of uncertainty”:
By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him. While many things about the 2016 election were surprising, the fact that Trump narrowly won when polls had him narrowly trailing was an utterly routine and unremarkable occurrence. The outcome was well within the “cone of uncertainty,” so to speak.
And as you can see, the actual result was well within the sweet spot of the model:
If the actual results had landed in the tails on either side, then it would clearly be valid to say that Nate was wrong. So it is ridiculous to claim that merely by reporting a probability of x% (where 0 < x < 100), “they can never be wrong”. Nate’s model was right even though Trump won.
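The “cone of uncertainty” idea can be made concrete with a toy check. This is a minimal sketch using purely illustrative numbers (a normal distribution over the favorite’s margin), not FiveThirtyEight’s actual model: an outcome only falsifies the forecast if it falls outside the central interval the model assigned high probability to.

```python
from statistics import NormalDist

# Hypothetical forecast of the favorite's margin (in points), modeled as a
# normal distribution. mu and sigma here are illustrative assumptions only.
forecast = NormalDist(mu=3.0, sigma=3.0)

# 90% "cone of uncertainty": the central interval of the forecast distribution
lo = forecast.inv_cdf(0.05)
hi = forecast.inv_cdf(0.95)

def within_cone(actual_margin: float) -> bool:
    """The forecast is only contradicted if the outcome lands in a tail."""
    return lo <= actual_margin <= hi

print(f"90% cone: [{lo:.1f}, {hi:.1f}]")
print(within_cone(-1.0))  # a narrow loss for the favorite: inside the cone
print(within_cone(-9.0))  # a blowout the model called near-impossible: outside
```

Under these toy numbers, a narrow upset (a −1 margin) sits comfortably inside the cone, while a −9 blowout does not; only the latter would count as the model being “wrong.”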