Fixed Requirements — An Oxymoron?

Leena
Continuous Delivery
Mar 18, 2019

One issue with the software industry is the belief that we know exactly what to build. It is wrong to assume that because we have all these cool technologies with us, anything we develop will be perfect.

I’ve reminded myself, the team, and the non-software/engineering people I interact with many times: we will never know exactly what the users want. Sometimes we have a feel or an intuition for it, and it can be right, depending on our proximity to the users. The way to confirm it is through conversations with the users and by keeping a close watch on how they use our software.

A simple migration tool

A few years back, I was working with a customer to deliver a product for migrating documents from one system to another. The customer has a workflow management system targeting specific domains, and they needed a tool to migrate documents into it from other systems (like SharePoint, Documentum, etc.). The migration makes their onboarding process much faster.

The tool’s description is simple: copy documents from one system to another. Each document also has metadata, such as who created it, its versions, and when and by whom each version was created. The tool needs to preserve this metadata too, by mapping it appropriately between the systems.
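As a rough illustration (the field names below are hypothetical, not the actual product’s schema), the heart of that mapping is a translation table from one system’s metadata fields to the other’s:

    # A minimal sketch of metadata mapping between systems.
    # Field names are illustrative, not the actual product's schema.
    SHAREPOINT_TO_TARGET = {
        "Author": "created_by",
        "Created": "created_at",
        "Modified By": "version_author",
        "Modified": "version_created_at",
    }

    def map_metadata(source_metadata: dict, field_map: dict) -> dict:
        """Translate one system's metadata keys into the target system's keys."""
        return {
            target: source_metadata[source]
            for source, target in field_map.items()
            if source in source_metadata
        }

    # Each version of a document carries its own metadata,
    # so the mapping is applied per version.
    def map_versions(versions: list, field_map: dict) -> list:
        return [map_metadata(v, field_map) for v in versions]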

We built a prototype to help the sales team demo the idea to their customers. And it worked: the customers got the idea quickly enough.

However, to make the tool ready for deployment, we needed more testing and polish, such as:

  • Making the tool intuitive enough to show which documents had synced successfully and which ones had failed, along with the reasons for the failures
  • Handling customisations of these document management systems in a smooth manner
  • Handling varying numbers of documents
  • Handling varying document sizes
  • Handling intermittent internet connectivity (see the retry sketch after this list)
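On the last point, the usual remedy is to retry failed transfers with a backoff. A minimal sketch, assuming the transfer call raises on a dropped connection (transfer_document here is a stand-in, not the real code):

    import time

    def transfer_with_retry(transfer_document, document_id, max_attempts=5):
        """Retry a flaky transfer with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                return transfer_document(document_id)
            except ConnectionError:
                if attempt == max_attempts:
                    raise  # give up, but preserve the original exception
                time.sleep(2 ** attempt)  # back off: 2s, 4s, 8s, ...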

We didn’t follow a good enough process to convert the prototype into “usable” software. We also failed to convey to the stakeholders why that conversion takes real effort. The rest of this post is a retrospective on what went wrong and what we could have done differently.

Three Cs

User stories, as per Extreme Programming, have three critical elements, called the three Cs:

  • Card — A one- or two-line “definition” of what needs to be implemented
  • Conversation — The conversation among the stakeholders and the team around what to build, in which both parties share their opinions, thoughts and feelings. The stronger the collaboration and communication, the better the quality of the product.
  • Confirmation — An agreement among the stakeholders and the team about what needs to be built.

We didn’t follow a disciplined approach to all three Cs. Even though we had stories, i.e. the “cards”, our conversations and confirmations were not good enough. That resulted in less than optimal quality in the final product.

This happened for various reasons, such as not spending enough time with the end users. The stakeholders followed a different process for building software, and we weren’t persistent enough to influence them.

Domain Knowledge

Related to the above, the conversations should also help us arrive at a Ubiquitous Language and gain domain knowledge, because interactions with the expert users improve our understanding of the space.

Respecting the logs

As Dan North mentioned in his talk Ops and Operability, a log is an append-only, read-only user interface. It should allow one to answer the following:

  • What happened?
  • Who is impacted?
  • How do we fix it?

We learned this the hard way :)
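For instance, using Python’s standard logging, a single entry can carry all three answers (the IDs and field names here are made up for illustration):

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("migration")

    # One entry, three answers: what happened, who is impacted, how to fix it.
    logger.error(
        "Version upload rejected by target system "          # what happened
        "(document_id=%s, customer=%s); "                    # who is impacted
        "check the metadata mapping and re-run the sync",    # how to fix it
        "DOC-1042",
        "acme-corp",
    )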

Exceptions need to be exceptions

Somewhat related to logging, we need to respect exceptions too. Exceptions should be handled wherever appropriate, without losing any relevant information. Many know how painful it is when you have a block like this:
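    # The classic swallowed exception (illustrative; sync_document is a stand-in):
    try:
        sync_document(document)
    except Exception:
        pass  # the failure, and everything we could learn from it, is gone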

We didn’t have the above one. But there were cases that needed to be treated as exceptions and were not, e.g., non-2XX response codes when making API calls.
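For instance, with Python’s requests library, a non-2XX response does not raise by itself; you have to opt in (a sketch, assuming the systems are reached over HTTP):

    import requests

    def fetch_document(url: str) -> bytes:
        response = requests.get(url, timeout=30)
        # Without this, a 4XX/5XX response is silently treated as success.
        response.raise_for_status()
        return response.content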

What worked well

In spite of the above issues, certain things we did worked in our favour.

TDD

We had enough test coverage for the tool, which helped us in multiple ways:

  • Better extensibility. We could add integrations with other document management systems without too much difficulty
  • Easier refactoring. We had to refactor the code numerous times; at one point, we had to move from one persistence layer to another. We could do that because of our abstraction layer (sketched after this list) and decent test coverage.
  • Hardly any regression issues. We were confident when the tests passed.
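A rough sketch of the kind of abstraction that made the persistence swap possible (the names are illustrative, not our actual code):

    from abc import ABC, abstractmethod
    from typing import Optional

    class SyncStateStore(ABC):
        """Where the tool records each document's sync status."""

        @abstractmethod
        def save_status(self, document_id: str, status: str) -> None: ...

        @abstractmethod
        def get_status(self, document_id: str) -> Optional[str]: ...

    class InMemoryStore(SyncStateStore):
        def __init__(self):
            self._statuses = {}

        def save_status(self, document_id, status):
            self._statuses[document_id] = status

        def get_status(self, document_id):
            return self._statuses.get(document_id)

Swapping the persistence layer then means writing another SyncStateStore implementation (say, one backed by a database); code and tests that depend only on the interface stay untouched.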

Continuous Delivery

We never waited for fixed-length iterations to deliver what we had accomplished, which meant slicing features into small chunks.

We deployed the tool in-house, to a test instance that simulated the production setup. Continuously delivering to this environment helped actual users test it against actual data.

Communication with actual users


We had been conversing with the end users. We could see them using the product, struggling with it, and we could watch their emotions. That is when we realised we needed to improve our domain knowledge. Ideally, we should have had two levels of communication:

  • One during development, to understand and confirm what should be built, communicating through quick prototypes for different scenarios
  • Another, a feedback session with the working software, to confirm that what we envisioned with the prototype is “usable” enough

We took baby steps to implement the above changes and improve. We had constraints that made it difficult to bring in these changes, mainly due to the contracts and working agreements among the different parties. That is a candidate for another post :)

Finally!!!

Over time, we brought in some rhythm and discipline and improved the situation. Finally, we delivered the tool in decent shape, and the users were happy: they got the value. Even though it took a lot longer to reach that stage, we felt good that we delivered something useful.
Of course, we could have got there a lot earlier if we had had the right focus and persistence.
The fundamental reasons for the failure were our lack of knowledge about the users and our reluctance to accept that requirements evolve over time. We rarely build the same software twice; there will be similarities, but there will be differences too.
And this is what differentiates the software industry from other engineering streams such as construction. I recommend watching Deliberate Discovery by Dan North, which explains how to design this discovery process.

It is the responsibility of every software professional to understand the feelings of our users, and everything we do should be optimised for delivering value to them. Any time the focus shifts to something else, the quality of the product dips.

Those who have watched users work with the software they built will agree that there is no better learning experience. For those who haven’t experienced it, I highly recommend setting aside time for it, even though it can be embarrassing.

Originally published at www.multunus.com.
