Doing a project without estimates

Thorbjørn Sigberg
6 min read · Jul 7, 2016


I seem to get tangled up in these #noestimates discussions a lot lately. I should probably present my views on the matter in a post, but let’s start with another story (referring to my recent post “Building a product without estimates”).

We're still in the world of the SaaS platform that we call Qondor. As I mentioned in the previous post, they try to avoid deadlines. Well, this spring they had one. They had a few reasons why it was good timing to build a Check-in module, basically a solution for check-in / access control at customers' events, either using QR code scanners or by manually searching for attendees. One of those reasons was an upcoming event where a big customer needed this feature.

Beep!

They created a web-based, platform-independent solution supporting all major browsers and even iPads, with lightning-fast scan / search (no noticeable lag even with 10,000 attendees) and full offline support. During the initial pilot there was a major power outage in the entire conference hall in the middle of the busiest check-in period. All laptops lost power and wifi for about 15 minutes. Check-in continued without a hitch, and one of the laptops was even restarted before regaining connectivity. No check-in data was lost.
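I won't pretend to know exactly how the team built the offline support, but the behaviour above is what an offline-first design gives you: persist every scan locally first, then sync in the background once connectivity returns. A minimal sketch in TypeScript, assuming the browser's localStorage and a hypothetical /api/checkins endpoint (this is an illustration, not Qondor's actual code):

```typescript
// Minimal offline-first check-in queue (illustrative sketch, not Qondor's actual code).
// Check-ins are written to localStorage first, so losing power or wifi never drops
// data; a background loop pushes the queue to a hypothetical API when back online.

interface CheckIn {
  attendeeId: string;   // decoded from the QR code on the badge
  scannedAt: string;    // ISO timestamp, recorded locally
}

const QUEUE_KEY = "pendingCheckIns";

function loadQueue(): CheckIn[] {
  return JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
}

function saveQueue(queue: CheckIn[]): void {
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Record locally first; the attendee is "checked in" even if the network is down.
export function recordCheckIn(attendeeId: string): void {
  const queue = loadQueue();
  queue.push({ attendeeId, scannedAt: new Date().toISOString() });
  saveQueue(queue);
}

// Periodically try to flush the queue to the server once connectivity is back.
export async function syncCheckIns(): Promise<void> {
  if (!navigator.onLine) return;
  const queue = loadQueue();
  if (queue.length === 0) return;
  const response = await fetch("/api/checkins", {   // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(queue),
  });
  if (response.ok) saveQueue([]);                   // only clear once the server confirms
}

setInterval(() => void syncCheckIns(), 10_000);     // retry every 10 seconds
```

Nothing is cleared until the server confirms receipt, so a laptop that loses power or gets restarted simply picks the queue back up from localStorage, which is exactly the kind of resilience described above.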

The pilot mentioned above was a live event with 3,000+ visitors over 8 days, starting May 19th 2016 — and they counted on Qondor to provide a check-in solution. That’s a classic hard deadline. Surely they had to start estimating now?

This whole #noestimates business gets much easier if you have a mature team. A temporary, hired team made up of a number of individuals would struggle somewhat with what I'm about to describe. Incidentally, they would probably do a shit job at estimating the project as well.

This whole #noestimates business gets much easier if you have a mature team.

Promising to deliver

In February 2016 the Qondor team promised the customer that the check-in module would be feature complete on May 1st, and available in the live SaaS environment in time for the May 19th event. This was eventually going to be a general addition to the platform and not customer-specific development. Thus, they did not have a detailed specification or any written agreement with the customer on exactly what would be delivered. They did, however, discuss a set of functional features required by this particular event.

The team agreed to deliver the general capability of checking in attendees using QR code scanners connected to a computer. The platform had to work offline, as the network connectivity at the conference center was known to be poor. They would also need basic help desk features to be able to assist people on-site, i.e. adding new attendees to the system and printing new badges with QR codes. The event would hire unskilled labor (students) to man the door, so the system had to be easy to understand and use, and they would need to be able to add project-specific users that didn't count against their SaaS licenses. A number of concrete reporting capabilities were also promised.
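The post doesn't go into how a browser app talks to a QR scanner, but most USB scanners of this kind act as a keyboard: they "type" the decoded badge code followed by Enter. Assuming that mode, and reusing the hypothetical recordCheckIn from the sketch above, picking up scans is little more than a keyboard listener:

```typescript
// Capture input from a keyboard-wedge QR scanner in the browser (illustrative sketch).
// The scanner "types" the badge code very quickly and ends with Enter, so we buffer
// keystrokes and treat a fast burst terminated by Enter as a scan.

import { recordCheckIn } from "./checkinQueue"; // hypothetical module from the sketch above

let buffer = "";
let lastKeyTime = 0;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const now = Date.now();
  // Humans type slowly; a scanner sends characters within a few milliseconds of each other.
  if (now - lastKeyTime > 100) buffer = "";
  lastKeyTime = now;

  if (event.key === "Enter") {
    if (buffer.length > 0) {
      recordCheckIn(buffer); // badge code decoded from the QR label
      buffer = "";
    }
  } else if (event.key.length === 1) {
    buffer += event.key;     // printable character from the scanner
  }
});
```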

Estimating

How long would all this take? How could they know that they would deliver in time without any estimates? To play this game, having a mature team has a couple of advantages. One of them is that they have an in-depth understanding of their own capability. In this example, they also had an existing technical understanding of the platform where the feature would be built. Finally, they could largely control their own time and workload, reducing the need for precision. The conversation went something like this:

Product owner: “So, can we solve check-in for that event in mid-May, adding check-in capability to our SaaS system in the process?”

Team: “We suggest treating the event as a pilot and adding a feature toggle to hide this from other customers for now, but sure.”

Product owner: “Would it take the entire team?”

Team: “Probably not. Maybe half of the guys for a couple of months?”

That’s 3 minutes’ worth of “estimating”, and they’re now pretty confident that they have the capability to deliver. They also know that as long as they don’t commit to any other deadlines in the same period, additional resources can be added to the project if they run into problems. Committing to this project may delay the delivery of other features, but since these other features don’t have a fixed deadline, that’s acceptable. It will also reduce their capability to handle rush jobs or production bugs, but since they’re not committing the entire team, that should work out okay too.

How long would all this take? How could they know that they would deliver in time without any estimates?

Also notice the development approach taken. This is a SaaS platform with many customers, remember? So every feature built should be planned in detail, making sure to solve a defined set of requirements from key customers and then be made available for everyone, right? Well, that’s sort of true.

The minimum viable product

While we agree on the goal, perhaps we don’t agree on how to get there. Instead of spending a lot of time planning and collecting detailed requirements, the team treated the entire project as an experiment. They had a good understanding of the requirements for this particular pilot event, so they decided to build exactly that. Truth be told, they didn’t even build all of that. As they didn’t know if the reporting needs were generic, the reports weren’t built into the platform, but solved with Bime. A couple of the nice-to-have requirements weren’t known to be generic either, so they were only solved indirectly. Even though they knew they’d need manual search for attendees eventually, this particular pilot only needed QR code scanning, so they only built that. And so on. Finally, all check-in-related functionality was feature toggled and hidden from all customers except the one hosting this particular event. This allowed for a controlled environment to test the new feature, in production.

Instead of spending a lot of time planning and collecting detailed requirements, the team treated the entire project as an experiment.
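The feature toggle is the most concrete technical piece of that approach: the check-in module was deployed to production for everyone, but only visible to the pilot customer. I don't know how Qondor implements its toggles, but a per-customer flag doesn't have to be fancier than this sketch (the customer ID and helper names are made up for illustration):

```typescript
// Per-customer feature toggle (illustrative sketch, not Qondor's actual implementation).
// The feature ships to production for everyone, but is only rendered for
// customers explicitly enrolled in the pilot.

type FeatureName = "check-in";

// In a real system this would live in a database or a feature-flag service;
// a hard-coded allow list is enough to show the idea.
const pilotCustomers: Record<FeatureName, Set<string>> = {
  "check-in": new Set(["pilot-event-customer"]), // hypothetical customer ID
};

export function isFeatureEnabled(feature: FeatureName, customerId: string): boolean {
  return pilotCustomers[feature].has(customerId);
}

// Usage in the UI layer: hide the module entirely for everyone else.
function renderNavigation(customerId: string): string[] {
  const items = ["Events", "Attendees", "Reports"];
  if (isFeatureEnabled("check-in", customerId)) {
    items.push("Check-in");
  }
  return items;
}
```

The point is that "testing in production" stays controlled: everyone runs the same code, but only the pilot customer can reach the new module.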

So what was gained with this approach? Instead of requirements made up by customers based on what they thought they needed, they now have actual experience from an actual event. They spent an absolute minimum amount of effort to get that, and they now have a lot more information about what is required (and more importantly, what is not) for a later, generally available version of the check-in module.

Measuring progress

As you may have gathered, they were able to build it in time. But how could they be sure? By dividing the work into full-stack, independent features and stories of roughly the same size, they could easily measure actual progress. The team were actually almost two weeks late to the “feature complete” date of May 1st, and this was apparent quite early. However, since continuous delivery to test proved that they were consistently delivering working features required by the event, they chose not to add additional developers. The daily release to test also allowed the product owner to evaluate “good enough” for every individual feature, making sure they didn’t spend more time than necessary. Completed chunks of work were acceptance tested during the project instead of at the end. Even though the final bits weren’t released to production until May 12th, that was fine, since they already knew everything was working.

The team were actually almost two weeks late to the “feature complete” date of May 1st, and this was apparent quite early.
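Because the features were deliberately cut to roughly the same size, “measuring progress” really is just counting: features done per week so far, against features left and the time remaining. A back-of-the-envelope projection, with numbers I've made up purely for illustration, looks like this:

```typescript
// Throughput-based progress check (illustrative sketch with made-up numbers).
// With features of roughly equal size, the completed count per week is a usable
// forecast; no per-feature estimates are needed.

interface ProgressSnapshot {
  featuresDone: number;
  featuresTotal: number;
  weeksElapsed: number;
}

function projectedWeeksToFinish({ featuresDone, featuresTotal, weeksElapsed }: ProgressSnapshot): number {
  const throughput = featuresDone / weeksElapsed;     // features per week so far
  return (featuresTotal - featuresDone) / throughput; // weeks left at the current pace
}

// Hypothetical mid-project check: 8 of 20 features done after 5 weeks.
const snapshot: ProgressSnapshot = { featuresDone: 8, featuresTotal: 20, weeksElapsed: 5 };
console.log(projectedWeeksToFinish(snapshot).toFixed(1), "weeks remaining at current pace");
// -> 7.5 weeks remaining: a slip past the May 1st target is visible long before the deadline.
```

No story points and no re-estimation meetings: the forecast falls straight out of working software delivered, which is also why being a couple of weeks behind was visible early rather than at the end.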

Come May 19th, everything worked perfectly. Even during peak time just before the event started, a queue never formed. People just held up their badge and were beeped as they went by, not even slowing down.

Crazy cool, or just crazy?

I’m sure this way of handling a fixed-deadline project sounds crazy reckless to some of you, but it’s also crazy fast. A traditional, waterfall-based approach to this project would have spent a lot more time planning, added a lot more features to the original plan, and spent a lot more money. The team solved exactly what needed to be solved, and learnt a lot in the process. The up-front “estimate” took a few minutes, but I’m not sure it was less accurate than the alternative. As they didn’t have a detailed plan, they could also adjust the “how” as they went along. Finally, by measuring actual working features delivered, they had a measure of true progress to tell whether they were on track.

Crazy cool, or just crazy? I’ll leave that up to you to decide.

@TSigberg


Written by Thorbjørn Sigberg

Lean-Agile coach — Process junkie, passion for product and change management.