An experience with Contract-Driven Development

In which a humble team splits a new feature in a couple of subsystems to be developed independently and in parallel, increasing the team’s efficiency and reducing waste.

A bit of background

(Time is gold, you can safely jump to next section)

At Agroptima we are divided into two teams: backend and frontend. In the past, probably under the influence of the multi-disciplinary “craftsmen” philosophy, we felt a bit ashamed of being divided by technology. We shyly tried other ways of organizing the teams, but in the end we accepted that the traditional front/back division works great for us.

Like many other companies, when we start a feature, we backenders and frontenders talk until we agree on an API. That lets us develop the feature in parallel later on.

Recently, however, we were given the mission to develop a few-weeks-long, purely backend feature. Our farmers need to generate an Excel report and deliver it to the Ministry of Agriculture. No API involved and almost no UI: just a simple form with a bunch of optional filters, a lot of queries to the database, a bit of computation, and an Excel document as the final result. The problem: it had to be ready for a trade show happening in a few weeks. We couldn’t afford to delay the release, so we wanted all hands working on it, and that required a lot of coordination.

We have realized that one of our weaknesses is how we split our user stories. We are lucky to have a well-curated development process that includes review and acceptance phases, but some stories take several hours or even a few days to pass through the Kanban board. Since we tend to divide stories so that some of them must be developed sequentially, we have had occasional blocks in which a developer couldn’t deliver more value until some previous work got merged.

To mitigate that problem we decided to apply contract-driven development not only to backend+frontend features but also to purely backend ones like the one described above.

How did we do it (and how are we still doing it)?

1. We divided the new feature into two subsystems (provider/consumer)

We aren’t sending rockets to the moon, so working out an architecture for a feature like this wasn’t a big deal. We quickly spotted two subsystems: one to extract data from the database and perform some computations, and another to represent those results in an Excel file.

Architecture of the new feature

We did not arrange formal meetings to discuss the architecture; a short informal conversation on Slack was enough. We could have divided the system into more parts, but that would probably have resulted in over-engineering. Two pieces were complex enough for the size of our assignment.

2. We defined a contract between the two subsystems

The Excel Writer was supposed to receive a document from the Report Generator, so if we wanted to develop them separately we needed to define the structure of that document. That was our contract.

Our colleague Sandra volunteered to define the structure of the document. We were all sure she was going to do a great job (you can’t imagine how efficient she is), so we didn’t arrange long technical discussions and meetings to define the perfect document. We just trusted her, let her do the job, and accepted the contract she brought: we were going to use a simple Python dictionary.

We could all have fought to model the document in a hundred different ways, but Sandra’s was fine. Of course there were a few changes later on, when we got our hands dirty and had more information: some nesting here, some changes to data types there. But in the end the contract worked great.
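To give an idea of what such a contract can look like, here is a minimal sketch. All the field names below are invented for illustration; the real document belongs to Agroptima and was surely richer. The point is only that the contract is a plain nested Python dictionary both subsystems agree on:

```python
# A hypothetical sketch of the agreed "document" contract: the Report
# Generator must produce a dictionary of this shape, and the Excel Writer
# must accept it. All field names here are invented for illustration.
report_document = {
    "farm": {"name": "Example Farm", "year": 2017},
    "sections": [
        {
            "title": "Treatments",
            "rows": [
                {"date": "2017-03-01", "product": "Example product", "quantity": 12.5},
            ],
        },
    ],
}

# Each side can sanity-check its half of the contract with tiny assertions:
assert isinstance(report_document["sections"], list)
assert all("rows" in section for section in report_document["sections"])
```

A dictionary is a deliberately low-tech contract: no shared classes, no schema library, just an agreed shape that either side can fake or validate trivially.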

3. We broke each subsystem into tasks that could (kind of) be developed independently

Did I tell you we aren’t good at that? Well, this feature was no exception. Most of the first issues were sequential, and some of the last ones too.

Again, we didn’t spend much time planning sprints. A couple of us, me included, volunteered to review the issues, rewrite them, or split them however we saw fit. And again, the rest of the team was okay with the results.

Although we didn’t do a great job here, whenever one of the subsystems got temporarily blocked in the review/QA phases, we had the other one to jump to and keep working on. The investment was bearing fruit.

4. We are feeding the consumer with hardcoded fake input

At the current state of things, there is a file in the code with a huge hardcoded Python dictionary representing the data to be written: our document. This dictionary replaces the real data generated by the provider and is fed directly to the Excel Writer. So no matter the user and no matter the selected filters, the output is always the same.
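A minimal sketch of that wiring, with all names invented (the real code is Agroptima’s), could look like this:

```python
# Hypothetical sketch of the current wiring (all names invented): the
# consumer is fed a hardcoded instance of the contract instead of real data.

FAKE_DOCUMENT = {
    "farm": {"name": "Example Farm", "year": 2017},
    "sections": [],
}

def write_excel(document):
    """Stand-in for the Excel Writer: consumes a contract-shaped dict."""
    return "report-%d.xls" % document["farm"]["year"]

def create_report(user, filters):
    # The call to the real Report Generator is skipped for now; no matter
    # the user or the selected filters, the same fake data goes in.
    document = FAKE_DOCUMENT
    return write_excel(document)
```

Because the fake dictionary honors the contract, the Excel Writer can be built, reviewed, and QA’d end to end before the provider exists.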

The writer is fed with fake data

Are you kidding? No, no, please keep reading.

5. We are programming, reviewing and testing each subsystem independently and in parallel

Each subsystem produces its own output. When the “Create report” button is clicked in the UI, two files are created and linked:

  • One is created by the Report Generator: the Python dictionary encoded as raw JSON. It holds the real data expected from the database and the related calculations.
  • The other is a fixed XLS file.

The system produces two documents with different purposes
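The JSON side of this is nothing fancy. A sketch, with an invented document, of how the provider’s output can be dumped for QA:

```python
import json

# Hypothetical document as produced by the Report Generator (names invented).
document = {"farm": {"name": "Example Farm"}, "totals": {"area_ha": 42.0}}

# Dumping the dictionary as raw JSON gives QA a readable, diffable artifact
# to verify the calculations against, independently of the Excel layout.
json_payload = json.dumps(document, indent=2, sort_keys=True)
print(json_payload)
```

Since the JSON mirrors the contract exactly, checking it validates the provider without ever opening a spreadsheet.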

Every new piece we add to either subsystem means new data dumped into one of those files. If the change affects the JSON file, our brilliant QA team -- whose name is Núria -- checks that all the calculations are right and that the rest of the data matches what’s expected. If instead the change is related to the Excel file, QA explores the format, the structure of the rows, the styling, and the internationalization (es/ca).

6. Once ready, we’ll remove the hardcoded parts and we’ll connect both subsystems

We are not far from finishing the feature; odds are we’ll celebrate within the next two weeks. Once the last issue is done, we’ll remove that ugly hardcoded part and feed the Excel Writer with the real generated data. We won’t create the JSON file anymore, since there’ll be no need for it.
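That final step is the payoff of the contract. In sketch form (function names invented, with trivial stand-ins for both subsystems), connecting provider to consumer is a one-line change:

```python
# Hypothetical sketch of the final wiring (all names invented): once both
# subsystems honor the contract, connecting them is a one-line change.

def generate_report(user, filters):
    """Stand-in for the Report Generator: returns a contract-shaped dict."""
    return {"farm": {"name": user}, "sections": []}

def write_excel(document):
    """Stand-in for the Excel Writer: consumes a contract-shaped dict."""
    return "report for %s.xls" % document["farm"]["name"]

def create_report(user, filters):
    # The hardcoded fake dictionary is gone: real generated data now flows
    # from the provider (Report Generator) to the consumer (Excel Writer).
    document = generate_report(user, filters)
    return write_excel(document)
```

If both sides really kept to the contract, nothing else should need to change when the fake data is swapped out.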

We’ll perform one last, broad QA session to check that everything is wired properly. Then we’ll remove the feature flag and ship. What will happen next?

Stay tuned, I’ll write a short follow-up with the final part of the story.

Final thoughts

I couldn’t resist the temptation to write a few final thoughts. Please take them as my own.

I’m really proud of my team at Agroptima. Being small, with nine members in the technical department (including frontend, backend, design, QA, and the CTO), I feel we are going far in perfecting our skills. There are lots of things to improve, but as the months pass we work more and more smoothly, like a well-oiled machine. The process flows and we feel proud.

If there’s a reason for that atmosphere, I’d say it’s that there’s no hero among us. No software design or architecture guru. We treat each other as equals; we are building a culture of respect, honesty, and trust. A culture that removes the need to deal with politics, hostility, and bullshit. A culture of autonomy and mutual support.

Any dev team is able to do great things and to work with a great process. Knowledge is a necessary asset, but if there is a key factor for success (or at least for self-realization), it is a culture that lets people move freely and go straight to the point.