Your arguments are weird and wrong.
H.M. Müller

H.M. Müller,

I appreciate your unique perspective on this issue, but I think you are refuting a straw man of what I intended my actual argument to be (and the brevity of my article is probably much more to blame for this than you are, so I apologize for that).

You seem to be suggesting that I am dismayed by the (apparent) non-determinism of our surroundings and that I would hold that the mere presence of chaotic elements in nature means that we should “give up” on the overall project of making educated guesses about our world (in other words, that we should give up on science!). This is not what I am arguing at all and in fact would go against my core beliefs as a scientist.

Philosophically, I find the argument that nothing is 100% knowable persuasive and even encouraging — it simply means, with a bit of hand-waving, that we have to use probability, which works as long as we can find statistically significant evidence for our hypotheses. If something might go wrong, as you rightly say, that doesn’t mean we should panic and give up all hope; instead we should use the scientific method to identify the odds that something will go wrong, and determine upper, lower, and average-case bounds for how much additional time might be spent when it does. Barring thorough statistical analysis, educated guesses can be a useful and effective way of keeping the lights on, and I am in no way trying to challenge that.
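To make that concrete, here is a toy sketch of what “identify the odds and determine the bounds” could look like. The numbers and the `simulate_task` helper are entirely invented for illustration; this is not a method I am proposing, just the shape of the reasoning:

```python
import random

def simulate_task(base_days=10, gotcha_prob=0.3, gotcha_extra=(5, 30), trials=100_000):
    """Monte Carlo sketch: a task takes base_days, but with probability
    gotcha_prob a 'gotcha' adds an unpredictable amount of extra time,
    drawn uniformly from the gotcha_extra range. All numbers are made up."""
    totals = []
    for _ in range(trials):
        total = base_days
        if random.random() < gotcha_prob:
            total += random.uniform(*gotcha_extra)
        totals.append(total)
    totals.sort()
    return {
        "mean": sum(totals) / trials,          # average-case estimate
        "p50": totals[trials // 2],            # median (lower-ish bound)
        "p95": totals[int(trials * 0.95)],     # pessimistic upper bound
    }
```

Even this crude model makes the point: the median outcome can look like the original estimate while the 95th-percentile outcome is several times longer, which is exactly the gap between “predicted” and “actual” that a single gotcha produces.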

What I am arguing is that something is specifically broken about current industry practices and attitudes around software development time estimation, something that isn’t broken in the kinds of situations you list, like planning a wedding. The developers and managers I have encountered make wildly inaccurate guesses when it counts, then turn a blind eye to that fact. Despite their own failures to deliver on time, they arrogantly claim that the estimation project as a whole is a success, that we are doing it the right way, and that software time estimation is a closed issue, a done deal, and that anyone who says otherwise is inexperienced or inept, even as projects drag on weeks, months, or even years longer than the original spec predicted. This is self-delusion on a massive, unimaginable scale.

I find this view, which today pervades the industry like a weed, both laughable and arrogant, as well as potentially alienating for developers who notice, as I and the majority of people who responded to this article have, the gaping discrepancy between predicted and actual timelines that our industry faces today. In my experience, which includes projects I am only tangentially involved in, across a variety of companies, languages, environments, and development styles, there is almost always significant unplanned development time, and the overwhelming majority of it comes from unpredictable “gotchas” like the ones I mention in the article, rather than from known, easy-to-reason-about demons.

I don’t want people to stop making estimates — I just want the industry as a whole to admit how terrible current estimates are (in effect, to feel really bad, as I do, about the current state of affairs) instead of parading around claiming that software time estimation is a solved problem if you simply follow [insert dogmatic methodology here]. The truth is that in 2017, your choice of time-management methodology won’t have nearly as noticeable an impact on your bottom line as averting (or accurately accounting for) even a single “gotcha” would have over the course of a multi-month project. Our time would be much better spent, I think, identifying the classes and early warning signs of gotchas, and developing new theory and methods for accounting for these in our existing prediction-making machinery.

Regarding your comment about the connection to computability theory: I admit from the outset that this is a bit of a stretch, but at the same time I would be shocked to find that humanity’s apparent difficulty in estimating how long development tasks take is not rooted in some fundamental limitation of the very programs we are writing.

I will definitely give this more thought, and possibly write a follow-up if I think of anything substantial.
