How product engineers develop a feature at Inato.

Julien Fouilhé · Published in Inato · Feb 22, 2019 · 6 min read

At Inato, standards are a big thing. A lot of them were borrowed from large companies (such as Spotify’s Engineering Culture, which has become a reference for many companies on how to scale a team efficiently) and were adapted to fit our own company culture, size, and market.

Whether it is about sprint planning, developing a feature, or fixing a bug, a checklist of control points is never far away to help you confirm that everything is under control and that you didn’t forget any important step.

When we develop a feature, we have the following process:

  • Challenge and size the feature.
  • Analysis
    * Write steps for demonstrating the feature works properly.
    * Break the feature into development steps.
  • Development
    * Set a feature flag for your feature.
    * Code your feature.
  • Demo
    * Deploy your feature to the staging environment.
    * Demo your feature to the product team.
    * Deploy to production.
  • Report
    * List improvement areas.
    * Notify other teams the feature is now available in production.

Challenge & size

At Inato, sizing a task is a job we take seriously because it is often where we will notice problems and take steps to resolve them with the product team before the task is even in our roadmap.

We size tasks before they are added to a sprint using t-shirt sizes, from size XS for tasks that will take less than 30 minutes to complete to size L for tasks that will take 1–3 days to complete. Anything larger than that must be split into smaller tasks.

Here is how an engineer at Inato would express the size of a task, for example one estimated at between half a day and one day of work:

0.5 days < size M < 1 day
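
To make the scale concrete, here is a minimal sketch of it as a simple TypeScript mapping; the range for size S is an assumption, since only the XS, M, and L bounds are stated in this post.

```typescript
// Illustrative only: the t-shirt scale described above as a simple mapping.
// The S range is an assumption; only the XS, M and L bounds are given in this post.
const TASK_SIZES = {
  XS: 'less than 30 minutes',
  S: 'roughly 30 minutes to half a day (assumed)',
  M: 'half a day to 1 day',
  L: '1 to 3 days', // anything larger must be split into smaller tasks
} as const;

type TaskSize = keyof typeof TASK_SIZES; // 'XS' | 'S' | 'M' | 'L'
```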

During that step, if someone in the tech team feels like the specifications are not clear enough for a feature, they can request details and changes.

Analysis

Once the feature has been added to the current sprint and a developer has assigned it to themselves, they can start the Analysis phase.

The following steps help us code efficiently by removing a good amount of back and forth: we try to think everything through before development rather than during it.

Writing Demo steps

This is similar in spirit to test-driven development, where you write the test first. It should help us understand the feature and identify possible roadblocks.

  • Start by thinking about how you will demo the feature to the product team.
  • Write a step-by-step guide (the “validation steps”).
Example of demo steps for a random feature

Development breakdown

During this step, developers list the smallest chunks of work they can imagine. This is similar to the sizing process, and sometimes developers will just copy/paste what was done during sizing. If we want to measure the lead time for the feature, we will also note, later on, how long each step took to develop.

We take a good look at the current code and list what needs to be modified for our feature to work. We also share this with the rest of the team so that they can bring insights and remarks.

After this, we know what needs to be done in detail and we just have to follow the bullet points. This is priceless.

Example of a development breakdown for a random feature

Development

Set a feature flag for your feature

Now it’s time to get into the code. We usually hide features behind a “feature flag” that can be enabled/disabled on a specific environment. This allows us to ensure features are fully ready before releasing them into production, without worrying about dependencies or someone deploying our feature too early.
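
As a minimal sketch of the idea (the flag names and the ENABLED_FEATURES variable below are invented for this example, not our actual implementation), a flag can simply be read from per-environment configuration:

```typescript
// Minimal feature flag sketch — the flag names and the ENABLED_FEATURES
// variable are invented for this example, not our actual implementation.
type FeatureFlag = 'trial-site-export' | 'new-search-filters';

function isFeatureEnabled(flag: FeatureFlag): boolean {
  // e.g. ENABLED_FEATURES="trial-site-export" on staging, and an empty list
  // on production until the feature has passed its demo.
  const enabledFlags = (process.env.ENABLED_FEATURES ?? '').split(',');
  return enabledFlags.includes(flag);
}

// The new code path stays dark until the flag is switched on for the environment.
if (isFeatureEnabled('trial-site-export')) {
  console.log('Show the new trial-site export');
} else {
  console.log('Keep the existing behaviour');
}
```

Because the flag stays off by default, unfinished work can be merged and deployed without ever being visible to users.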

Code

Now is the moment to code the feature following the development breakdown made earlier.

For some features, we will pair-program with another developer to speed up development. Once a week, we mob program on the same feature: the whole team takes turns at the keyboard, and we share best practices and tips to write better code together.

We follow test-driven development best practices and especially enforce these during mob and pair programming.

Our testing strategy strives for this 3-level pyramid:

  • A lot of unit tests (each testing all the use cases of one specific part of the code; see the sketch after this list).
  • Some integration tests (each testing that one use case works).
  • Few end-to-end tests (testing that basic user behavior is fully functional).
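
For instance, a unit test at the base of that pyramid could look like the following sketch, written with Jest; the computeTrialMatchScore function and its rules are invented purely for illustration:

```typescript
// Hypothetical example — the function and its rules are invented for illustration.
// A unit test exercises all the cases of one small piece of code in isolation.
function computeTrialMatchScore(siteExperience: number, hasEquipment: boolean): number {
  if (!hasEquipment) return 0;
  return Math.min(siteExperience * 10, 100);
}

describe('computeTrialMatchScore', () => {
  it('returns 0 when the site lacks the required equipment', () => {
    expect(computeTrialMatchScore(5, false)).toBe(0);
  });

  it('scales with the site experience', () => {
    expect(computeTrialMatchScore(3, true)).toBe(30);
  });

  it('is capped at 100', () => {
    expect(computeTrialMatchScore(42, true)).toBe(100);
  });
});
```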

Code review

Once a logical step of the feature’s development is finished, developers create a pull request and ask the team to review their code. The other members of the team are expected to be very demanding here. Developers can choose to follow up on their colleagues’ remarks (they usually do) or decide a remark is out of scope.
It is important not to wait until the end of the feature’s development to submit a pull request for code review, because the bigger a pull request is, the less motivated other developers will be to review it. So we try to divide features into as many pull requests as we can. Since the feature is hidden behind a feature flag, we can merge these pull requests before the feature is finished.

Continuous Integration

When a pull request is submitted, our Continuous Integration runs our test suites and ensures our code coverage doesn’t go down too much. We do not aim for 100% coverage, but we like to improve it at every opportunity. In less than two months, we went from 58% to 65%.
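
One way to keep coverage from slipping, shown here as an illustration with Jest rather than our exact setup, is to enforce a minimum threshold in the test runner configuration so that CI fails when coverage drops below the current baseline:

```typescript
// jest.config.ts — hypothetical sketch; the post doesn't say which test runner
// or CI configuration we actually use.
export default {
  collectCoverage: true,
  coverageThreshold: {
    // Fail the CI run if overall coverage drops below the current baseline (~65%).
    global: {
      statements: 65,
      branches: 65,
      functions: 65,
      lines: 65,
    },
  },
};
```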

Demo

When developers have completed their task, they can deploy their feature to our internal platform where the product team will validate that the feature obeys their specifications and works as expected. If nothing is missing, then the feature goes to production immediately.

If something is missing, the developer has failed the demo and must write down the reasons. Was it a silly mistake (they happen), or part of a much larger problem, such as CSS attributes that have to be “guessed” and aren’t easily accessible, which would mean the tool we use for that purpose is not a perfect fit for us? If we fail multiple demos, we take steps to ensure it doesn’t happen again.

Failure is not something we are ashamed of: we understand it is normal and that it will happen. But we also understand that failure comes from factors that can be minimized or even completely eliminated. If we are so strict about tracking it, it is only because we strive for a development process that is as linear and painless as possible.

Report

When developers work, they run into problems that are unrelated to their code but that bug them. For instance, since we use a domain-driven approach, specifications can sometimes use a different language than the ubiquitous one, the developer experience can be missing something, or a test can fail randomly. All of these problems are listed so that we can learn how frequently they occur and how much they bother developers. We can then take steps to eliminate them, improving our developer experience.

It’s also important to notify other teams that a feature has been completed. That way, nobody misses out on something that has been done. Plus, everyone is happy to witness how fast and well the product is evolving.

Conclusion

Having standards for recurring tasks helps you focus on doing your job right because you don’t have to figure out what you’re supposed to do. It is the foundation for continuous improvement.

Drug discovery is a challenging, intellectually complex, and rewarding endeavor: we help develop effective and safe cures to diseases affecting millions of people. If you’re looking to have a massive impact, join us! https://inato.com/careers/
