A Guide to Better Dev Time Estimates

Coming from a more archaic form of engineering (mechanical), I’ve found that in software engineering, requirements (and the tools to deliver them) are far more likely to change over time. This is especially true for JavaScript projects and their dependency madness.

This volatility in software projects (setting aside the inherently imprecise nature of estimates) is perhaps the greatest contributor to the difficulty of making accurate software project estimates.

Estimates play a vital role in a business’s ability to make decisions about building software; without them, businesses would have no idea of the costs and time involved in undertaking projects.

This isn’t helped at all by the fact that this responsibility is usually given to those who are possibly the worst at it: developers (see #NoEstimates). My guess is that this ineptitude is owed to at least five things:

  1. The fact that the estimates we give are tied to our previous experiences.
  2. People almost always take all the time allotted to a task.
  3. Overlooking the small tasks that add up.
  4. Ignoring communication overhead (the time spent communicating at the cost of productive work).
  5. Assuming estimates are fungible/interchangeable.

Bad estimates aren’t at all unique; in fact, this study reports that 66% of enterprise software projects have cost overruns. While inaccurate estimates are unavoidable, there are some things one can do to minimise their margins of error:

Take guessing out of the game

Never give ballpark estimates. Failure to plan is planning to fail.

Giving a ballpark estimate both forces you to base the estimate on your previous experience and leaves you with a myopic view that weighs only the most obviously tedious tasks, ignoring the small ones that end up adding up.

Ballpark estimates are usually rationalised on the grounds that they are padded with extra time to account for unforeseen hurdles. There are three big issues with this:

  1. The padding itself is also an under-researched estimate.
  2. As a natural matter of human psychology, described by Parkinson’s Law, people tend to take the full amount of time allocated to a task to complete it. This inevitably means that unforeseen hurdles are left to bleed out of the allocated time.
  3. Even if accurate estimates are made down the road, there is still a risk that the new estimates will be anchored to the original ballpark expectation.

Break the tasks down

The idea here is to break requirements down into the smallest manageable tasks possible, as agile methodology suggests, and then estimate each process for each task (yup, O(N²)). Estimates for these tasks should take into consideration the processes they go through:

The key thought here is to break tasks down before giving an estimate.

Research

Never give an estimate for an unfamiliar area. If part of the requirements requires work in an unfamiliar area, whether it’s a new architecture, framework or plugin, the requirement should be isolated and researched to uncover possible hurdles.

Development

This is where the bias in our decision-making threatens us the most. The idea of breaking tasks down in planning is pivotal to giving more accurate dev estimates.

Review & QA

It takes time for a feature to be approved and sent off to production. The estimate here depends on: the number of changes made to the feature, the complexity of the feature (how much communication the reviewers need to understand it) and the availability of the reviewers (how long it will take for review to occur).
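The per-task, per-process idea above can be sketched in a few lines of Python. The task names and hour figures here are hypothetical examples, not from the article; the point is only that the final estimate is the sum over every workflow step of every task, never a single ballpark number.

```python
# A minimal sketch of per-task, per-process estimation.
# Task names and hour figures are hypothetical examples.
WORKFLOW_STEPS = ["research", "development", "review_qa"]

tasks = {
    "add login form":    {"research": 2, "development": 6, "review_qa": 2},
    "validate password": {"research": 1, "development": 3, "review_qa": 1},
}

def estimate(tasks: dict) -> float:
    """Sum the estimate for every workflow step of every task."""
    return sum(
        hours
        for steps in tasks.values()
        for hours in steps.values()
    )

print(estimate(tasks))  # 15 hours in total
```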

Understand that estimates are non-fungible and temporary

This, put simply, means that one developer’s estimate cannot be used to predict how long it will take another developer to complete a task. Furthermore, in a space where there’s much room for requirements to change, estimates have a short shelf-life.

To be perfectly technical, a change in a requirement should cause its respective estimate to perish, and a new one to be evaluated.

As Goodhart’s Law goes: “When a measure becomes a target, it ceases to be a good measure.” What Goodhart meant was that once a measure becomes something to be desired, people will start to game and modify it. Once estimates become targets, they are treated as fungible, long shelf-life measures.

Ironically enough, treating estimates as targets lets the error permeate through the different stages of the production process, causing a snowball effect that ends in the multiplication of the initial estimate.

Use Factors of Safety

The intent of safety factors is not to add padding to an estimate, but to add tolerance to an already carefully made estimate with the goal of managing client/user expectations. This is in contrast to the expectation created from a ballpark estimate.

A small manageable task should be completed in X time, but should be expected in (FoS × X) time at best.

I best understand the choice of factors of safety from an engineering perspective, where your target safety factor is the ratio of the point of failure to the allowed target.

For example, a seat that is known to allow a maximum load of 100kg in use, but is designed to fail at a maximum load of 200kg, has a safety factor of 2.

The key theory here is that nothing above 100kg should be allowed, but unforeseen circumstances (unreliability of the material, fatigue or bad conditions) should still not be enough to cause the seat to fail at carrying 100kg.

Source: https://www.engineeringtoolbox.com/factors-safety-fos-d_1624.html

Where that table takes material reliability, loads and environmental conditions into consideration, in estimating development times it’s equally important to consider the reliability of the technology, the complexity of the work and the conditions of the production process/workflow.
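The seat example can be reduced to a one-line ratio. This is a sketch of the general engineering definition (failure load over allowed working load), using the 200kg/100kg figures from the example above:

```python
# Factor of safety: the ratio of the load at which the design fails
# to the maximum load it is allowed to carry in use.
def safety_factor(failure_load: float, working_load: float) -> float:
    return failure_load / working_load

# The seat example: fails at 200kg, allowed up to 100kg.
print(safety_factor(200, 100))  # 2.0
```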

Summary

Unavoidably, the goal of making estimates is to later create targets. To increase the likelihood of staying well within the bounds of your estimate — have two measures:

Time to build feature

This is how long you estimate it will take you to build a feature. This is your allowed time.

  • Make sure the feature is at its smallest manageable size and cannot be reasonably broken down further.
  • Never ballpark this estimate. Estimate for each step in the feature development workflow and sum them to find your estimate.
  • If you’ve built a similar feature before, base the estimate on that experience; otherwise, speak to someone who has done similar work before making the estimate.

Target time to client/user

This is the estimate that should be given to the user/client, and it carries the risk of being treated as a target.

  1. Decide on a safety factor by considering the reliability of the technology, your familiarity with the technology, the complexity of the feature and the nature of the process/workflow the feature will go through.
  2. Multiply your allowed time by this safety factor to arrive at your estimate.

Safeguards

  1. If you discover an estimation error down the line, make it known to the project managers well ahead of time.
  2. Requirements are likely to change as time goes on, which means the integrity of estimates is bound to weaken; as in the first case, this should be communicated to PMs and clients well ahead of time. A decision should be made on whether to keep the new changes in the current scope of work or move them to a later stage.

In summary, good management of expectations comes from a unified understanding of the truth about estimates: they are naturally inaccurate, non-interchangeable and temporary.

While the methods I’ve shared above are time-consuming, one can get quite good at doing this with enough practice and experience.

A good related read:
SCRUM: The Art of Doing Twice the Work in Half the Time