Why product managers should advocate for a Continuous Deployment pipeline

“Continuous Deployment” is a buzzword these days, and it is not just technical decision makers, solution architects, and infrastructure managers who are trying to get up to speed on what's going on in this exciting area, but product and business people too. Yep, product people as well. In fact, if they want their product to be successful, the Continuous Deployment pipeline should be one of the driving engines behind the process, not the product requirements per se.

Requirements are indeed valuable when a team is executing a well-known or well-understood phase of an initiative and can leverage established practices to achieve the outcome. However, when you are in an exploratory, complex, and uncertain phase, you need hypotheses. Only getting back to your users as frequently as possible with a high-quality product, even in the early development phases, will help you prove or discard those hypotheses and progressively create a product people love to use. The idea of challenging requirements and avoiding the waterfall trap is widely propagated; however, unless these words are backed up and reinforced by the automated tools of a delivery pipeline, the human factor will most likely play a negative role.

Continuous Deployment is not just a toolset meant to keep the DevOps team busy; it is a concept that helps the entire team change its development process in order to keep the target audience always in focus.

In my 5+ years in new product development, starting from marketing research, through QA, to requirements management, I have twice worked in environments where old-school approaches were in place (almost waterfall, although for some reason “agile” was used as a code name). In both cases the product managers and the team encountered the typical challenges and repeated the old mistakes:

  • Inefficient grooming and planning sessions. The product owner, the developer, and the QA engineer often understood a user story's deliverables differently. Sometimes the QA engineer did not even dive into the requirements during grooming/planning and ended up testing “whatever is delivered” instead of “what is expected to be delivered.” The breakdown of user stories into testable deliverables was often neglected; as a result, it was common for a developer, or even worse, a couple of developers, to work on the same feature for a week without enabling preliminary exploratory testing by QA or the product manager.
  • Development of the entire feature in one branch. Sometimes development of one feature happens in several branches that are merged into a separate feature branch at some point or, in the worst-case scenario, straight into the develop branch. In some cases the develop branch serves as a junk branch.
  • The conditions above lead to problems getting the branch merged. Just imagine: after one week of development, developers might spend an extra day dealing with merge hell.
  • Time-consuming regression testing. Once the week-old feature is deployed, testing takes place. Since it is the first peek into the new feature, there might be a huge number of bugs, inconsistencies, and further changes required. In addition, there are always problems with testing on different environments and databases.
  • Once the feature is finally developed, merged, and tested, and is available for the product owner's acceptance testing, it might turn out that it does not meet the product owner's expectations.
  • Despite heavy regression testing before each release, our users still reported a couple of fat blockers every week.

To tell the truth, starting out in the environment described above can be demotivating, especially when you and your team work as team augmentation and the client team sabotages any positive breaking changes. However, the development team managed to win a few key advocates on the client side and started pushing incremental improvements. By changing the process flow and tools, within 6–8 months we could already see positive results of the effort:

  • We deployed as many times as needed, up to 20 times per day.
  • We improved automation across the entire project: integration tests, unit tests, deployment scripts, and so on.
  • Grooming sessions are very time-consuming, but they are worth having. Large features are split into smaller testable, mergeable, and deployable increments/tasks. The idea was to avoid dependencies between the tasks and, more generally, between the features. This is a big exercise not just for the developers but also for the QA engineers: they have to change their mindset from testing the entire feature to planning the work and the needed tools within the smaller increments.
  • Planning meetings became dynamic and productive: we took the prioritized items from the backlog, taking into account the estimates and the historical velocity of the team. Each feature was assigned to one or two people, with one of the developers acting as lead. We still wanted to make sure that all team members had enough knowledge about each particular piece of code/feature, so that they could do a proper code review or jump into development at any point. Moreover, team members were encouraged to develop their skills both deep and wide: front-end developers would occasionally get back-end tasks, and back-end developers were heavily involved in front-end specifics as well.
  • Short-lived feature branches that could be deployed to a virtual machine at any point.
  • Testing in the feature branch, based on the latest updates from the release branch. Each increment/task is tested. Moreover, deploying a feature branch was easy, so the product owner could quickly deploy it and give feedback, which was especially useful when the initial requirements were unclear and it had been decided to refine them once a feature prototype was in place.
  • Large features hidden behind feature toggles: partially enable the toggles and eventually make the features available to a limited segment of the target audience in order to gather feedback. Techniques such as A/B testing enable a hypothesis-driven approach, whereby we can test ideas with users before building out whole features, which is extremely useful while a feature is in an active phase of development. Early feedback might result in a new set of product requirements, updated designs, or even dropping the functionality.
  • Unit tests, with BE and FE coverage tracked separately.
  • Code review that was not just a formality: the reviewer was expected to dive deep into the code base and give an honest review.
  • Automated test suites maintained on a daily basis.
  • Tested and reviewed feature branches merged straight into the release branch. We skipped the develop branch: it was dropped at some point and we never missed it.
  • The code in the release branch is ALWAYS high-quality code ready for deployment at any point.
  • Reduced technical debt and moved away from the monolithic approach to building the platform by refactoring legacy code into standalone microservices. This took place as part of new feature development, while delivering visible value to the business.
  • Last but not least, we had a happier team. Team members stopped working weekends and late hours, and were instead very productive within working hours.
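To make the feature-toggle idea above concrete, here is a minimal sketch of a percentage-based toggle check. All names here (the `TOGGLES` registry, `is_enabled`, the feature and user identifiers) are hypothetical, and in a real product the toggle state would live in a configuration service rather than in code; the point is only to show how merged-but-unfinished code can stay dark for most users while a small cohort gives early feedback.

```python
import hashlib

# Hypothetical toggle registry: feature name -> rollout percentage (0-100).
# In practice this would be fetched from a config service or database.
TOGGLES = {
    "new-checkout-flow": 20,     # visible to roughly 20% of users
    "redesigned-dashboard": 0,   # fully dark while still in development
}

def bucket(user_id: str, feature: str) -> int:
    """Map a user deterministically into one of 100 buckets.

    Hashing the user id together with the feature name keeps each user
    in a stable bucket per feature, so their experience does not flicker
    between visits, and different features roll out to different cohorts.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(feature: str, user_id: str) -> bool:
    """Return True if the feature is rolled out to this user."""
    rollout = TOGGLES.get(feature, 0)  # unknown features default to off
    return bucket(user_id, feature) < rollout

# Usage: the new code path ships in every deploy, but stays hidden
# until the rollout percentage is raised.
def render_checkout(user_id: str) -> str:
    if is_enabled("new-checkout-flow", user_id):
        return "new checkout flow"
    return "current checkout flow"
```

Because the bucket is deterministic, raising the percentage in the registry gradually widens the audience without redeploying, and setting it back to 0 acts as an instant kill switch, which is what makes toggles safer than long-lived feature branches.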

Comparing “life before” and “life after,” we can see that Continuous Deployment is not just about applying tools; it is about identifying product development problems and changing the mindset to solve them. It is also about giving priority to users and their needs instead of serving as a poor-quality feature factory.