This is the second post in a series targeted at helping Product Managers understand the importance of Continuous Delivery.
The first post in this series explored why Continuous Delivery is critical to making a great product that users love and that helps achieve your business objectives. Continuous Delivery enables faster feedback cycles, providing more opportunities to learn, iterate, and ultimately succeed. This post takes a deep dive into the world of Continuous Delivery by following along on the fantastic voyage of a single change as it travels through a Continuous Delivery process.
I find it’s often helpful to take a deep dive into a real example to truly understand something. Therefore, much like the intrepid adventurers in the movie Fantastic Voyage, we’re going to learn about Continuous Delivery from the inside out — following a single change as it travels through the Continuous Delivery process for the product I work on — CA Agile Central.
Each change to CA Agile Central is continuously delivered when it’s ready, about 20 times per day across 16 teams. The most common type of change is a front end (aka user interface) only change, often to introduce a new feature, improve a feature, or fix a bug. Some changes also touch our backend services and APIs.
In this post, we’re going to dive into the details of a recent change to CA Agile Central — adding Work In Progress (WIP) Limits to Agile Central’s new Team Board. It’s important for teams to be able to define the process that works best for them to accomplish their goals. Team Board is a new experience for teams to customize their process and easily iterate on improving it. Released in early 2017, Team Board provides the ability to quickly set up a visual board that visualizes a team’s work and process. The strategy for Team Board was to release a minimal board and iterate with additional features based on user feedback.
For today’s journey, I’m joined by William, a Product Owner at CA Agile Central who worked with the team on Team Board.
It All Starts With An Opportunity
Adam: Hi William! Tell us about how the idea for WIP Limits and Card Age got started.
William: Sure. After initially shipping Team Board in early 2017, we worked with users and learned that the next most valuable problem to solve was providing guardrails to help them limit their work in progress, improving the quality and cycle time of their work — common Kanban concepts. The idea was originally added to our own Agile Central subscription — yes, we use our own product — as user story S129934, part of the overall Team Board investment F14293. We added initial details and acceptance criteria at that time.
Making A Plan and Making A Branch
Adam: So when did the work get started on the WIP Limits functionality?
William: Our team prioritized this idea in June 2017 and pulled it into progress on a Monday afternoon. Robert, the developer who pulled this card, created a Branch for this change and started development. A Branch is a way to keep related code changes together without impacting what’s running in production. He periodically Committed changes to his Branch locally on his laptop to capture progress, conducting some manual testing and running some of the automated tests locally. Robert also added a few new automated tests for the new functionality. All of our code has automated tests that can be run locally on a laptop or remotely on shared testing infrastructure. Early on Tuesday, Robert pushed his changes to GitHub and created a Pull Request. GitHub is where we store our code and also collaborate on changes. We’ll talk about GitHub and Pull Requests a bit later. For now, think of Pull Requests as a way for a team to collaborate on code changes, providing feedback to improve the code and fix bugs.
Adam: So what happens now that the changes are in GitHub and there’s a Pull Request?
William: This is where the journey really starts to get exciting! When a Pull Request has been created, each Commit thereafter automatically triggers automated tests which provide feedback to the developer. In this case, a few different types of tests were kicked off. These tests run on shared infrastructure so that they complete much more quickly than they would running locally on a laptop.
Some examples of automated tests are:
- Lint Tests to ensure the code is syntactically correct and that appropriate patterns are used to keep code consistent and understandable.
- i18n Tests to ensure code supports internationalization in multiple languages.
- Unit Tests to ensure very small pieces of the code are working. An example is testing to make sure that when the number of cards in progress is greater than a column’s WIP Limit, the overWipLimit function returns True.
- Integration Tests to ensure larger pieces of code work together, such as one function calling another function.
- End-to-End Tests, which spin up a full version of Agile Central and use a web browser to test the user’s experience. An example is dragging another card into a column and verifying the column turns red to indicate it’s over its WIP Limit.
Where possible, these different types of tests run in parallel, and the tests within each type also run in parallel. This decreases the cycle time of running the tests and providing feedback to the developer. For this change, about 12,000 automated tests ran, which took about 35 minutes.
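As a rough sketch of why that parallelism matters: with independent suites, total feedback time approaches the slowest suite rather than the sum of all suites. The per-suite durations below are made-up numbers for illustration, not Agile Central’s actual timings.

```javascript
// Hypothetical per-suite durations, in minutes (illustrative only).
const suiteMinutes = { lint: 2, i18n: 3, unit: 10, integration: 20, e2e: 35 };

// Run serially, you wait for the sum of all suites...
const serial = Object.values(suiteMinutes).reduce((a, b) => a + b, 0);

// ...but run in parallel, you only wait for the slowest suite.
const parallel = Math.max(...Object.values(suiteMinutes));

console.log(serial, parallel);  // 70 35
```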
Adam: Wow, that’s a lot of tests! Do they ever all pass?
William: Yes, for this change all the tests passed. But our team’s tester, Brianna, found two usability issues. Brianna used TestN to conduct manual exploratory testing. TestN allows anyone to spin up an on-demand CA Agile Central test environment, with the ability to specify which services to start and which Branches (aka code) to use for each service. It takes about 6 minutes to spin up a TestN environment, and it automatically destroys itself after 8 hours so we don’t have long-running test environments that get messy. It’s a very useful tool for all sorts of things — exploratory testing, creating help documentation and screenshots, making demo videos, etc.
Based on Brianna’s feedback, we made two changes to improve the usability of the WIP limit settings, making sure there was clear messaging if a user couldn’t save their settings because of an incorrectly formatted WIP Limit. This was a use case that we didn’t think of during our initial planning, so we’re glad we found it before we released the change.
Adam: It sounds like it was helpful to conduct the exploratory testing in addition to running the automated tests?
William: It sure was. Exploratory testing is a great way to bring the voice of the user into the testing process.
Pull Request and Code Review
Adam: So what’s next on our journey?
William: Well, there was some additional feedback from the Pull Request review and also the updates from the exploratory testing, so there was a bit more work to do. Back to coding and testing!
Adam: Wait, let’s visit the Pull Request. Tell me more.
William: Pull Requests are a GitHub feature that helps a team to conduct code reviews — that is, to review a change together, providing feedback to improve the code and fix bugs. A Pull Request provides a summary of the code changes, including the Commits, test results, and provides tools for team members to provide comments and have discussions about sections of the code. We also have Pull Request Templates for certain changes that help us to remember our working agreements. Our Templates often include a checklist to make sure we’ve followed our working agreements.
Adam: Interesting. What’s an example of a working agreement?
William: For this change, the template included “what does the change do”, “where should the reviewer start” and “ticket(s) for this change” questions. Answering these questions helps reviewers understand why the change is being made so it’s easier to review the change and provide feedback. Oh, and my favorite question — “how did this change make you feel (in GIF format)”. The answer to this question must be in the form of an animated GIF. It’s a fun way to gain empathy for how it felt to work on this change. It’s a good conversation starter.
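A Pull Request Template with those questions might look something like the sketch below. The exact wording and layout of the team’s real template are assumptions.

```markdown
## What does this change do?

Adds WIP Limit settings to Team Board columns.

## Where should the reviewer start?

<!-- Point reviewers at the most important files or commits -->

## Ticket(s) for this change

S129934

## How did this change make you feel (in GIF format)?

<!-- Paste an animated GIF here -->
```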
Adam: That sounds fun. What are some of the best GIFs you’ve seen?
William: Some recent ones:
Adam: Thanks for explaining Pull Requests. I like the GIF idea! Ok back to the change, so there were a few changes that needed to be made because of the exploratory testing and code review. What’s next?
William: Those changes follow a similar process — code, test the change locally, and push the changes to GitHub. However, now that there’s a Pull Request, each Commit triggers the full suite of automated tests. The test results are easily visible on the Pull Request via GitHub Status Checks, which can be required to pass before a Pull Request is merged and released.
Releasing The Change
Adam: Great! So is that it? Time to release?
William: Almost. We also wanted to conduct another round of manual exploratory testing as a quick double check. After the team’s tester and/or Product Owner have reviewed the changes in TestN, it’s time to release the change.
Adam: Woohoo! Let’s do it!
William: The developer merges the Pull Request into the master Branch, which triggers the automated release pipeline. This pipeline runs the tests that previously ran and also releases the changes to production. Depending on the type of change, the change is gradually rolled out to users. In the case of a front end change, we maintain multiple versions of the front end code in production, allowing users to update when their web browser checks to see if there’s a newer version. In the case of services, our production platform automatically starts rolling out the updated version of the service, making sure each new instance is healthy before moving on to the next. For most services, there are at least a few instances, so the release is automatically rolled back if the first few instances encounter issues.
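The rolling release behavior William describes for services can be sketched roughly like this. This is an illustrative sketch, not CA’s actual pipeline code; the function names and shapes are assumptions.

```javascript
// Hypothetical sketch of a rolling release: update one instance at a
// time, verify it's healthy, and roll everything back on failure.
function rollOut(instances, deployFn, isHealthyFn) {
  const updated = [];
  for (const instance of instances) {
    deployFn(instance);
    if (!isHealthyFn(instance)) {
      // An early instance is unhealthy: roll back and abort the release.
      for (const done of [...updated, instance]) {
        deployFn(done, { rollback: true });
      }
      return { released: false };
    }
    updated.push(instance);
  }
  return { released: true };
}

// Simulated release across three instances where every health check passes.
const result = rollOut(
  ['web-1', 'web-2', 'web-3'],
  () => { /* push the new version to this instance */ },
  () => true
);
console.log(result.released);  // true
```

Because only a few instances carry the new version at any moment, a bad release affects a small slice of traffic before the automatic rollback kicks in.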
Adam: Wow, no downtime?
William: No downtime. It took us a long time to get to the point where we could release without any downtime. Years ago, we released every 6 weeks, then every week, then every day. Now on demand — continuously. We’ve gotten better over time.
Measuring the Impact
Adam: Awesome. Well, thanks for sharing!
William: Wait, there’s one more thing. We need to validate the change to determine if it had the impact we were expecting. In the case of WIP Limits, we wanted to make sure users were finding the functionality and were successfully enabling it and continuing to use it. We have a few different ways to get feedback, including:
- User Metrics like which users visited which pages, what did they click on, etc.
- User Feedback which is user submitted feedback. We can filter it by user type, page, etc.
- Span Data which helps us understand response times and performance.
In the case of WIP Limits, we were able to measure how many users started using the functionality and also get feedback from the user submitted feedback. While we learned about a few other opportunities to improve WIP Limits in the future, users love it so far, so we’re leaving it as-is and moving on to other opportunities.
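As a sketch of the kind of adoption question those user metrics answer — of the teams that viewed Team Board, how many enabled WIP Limits? The event names and shapes below are assumptions, not Agile Central’s real analytics schema.

```javascript
// Hypothetical click-stream events (illustrative only).
const events = [
  { team: 'red',   action: 'viewed-team-board' },
  { team: 'red',   action: 'enabled-wip-limits' },
  { team: 'blue',  action: 'viewed-team-board' },
  { team: 'green', action: 'viewed-team-board' },
  { team: 'green', action: 'enabled-wip-limits' },
];

// Collect the distinct teams that performed a given action.
const teamsWith = (action) =>
  new Set(events.filter((e) => e.action === action).map((e) => e.team));

const viewers = teamsWith('viewed-team-board');
const adopters = teamsWith('enabled-wip-limits');

// 2 of the 3 viewing teams enabled WIP Limits.
console.log(`${adopters.size} of ${viewers.size} teams`);
```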
High Fives and Victory Beers
Adam: Thanks for sharing William. It was really helpful to dive into the details of a single change to really understand what Continuous Delivery might look like for a product.
William: Happy to share. As a Product Owner, I love the ability to quickly make changes and measure the impact with real users. It lets us quickly test our hypotheses, learn, and iterate to be more successful. One thing I’ve learned along the way is that with the increased speed of Continuous Delivery, I really need to be available to my team to help them make quick decisions along the way. Continuous Delivery is a fast-paced game.
Adam: Indeed. The next post will explore practices that help support Continuous Delivery, such as a Product Owner being available to team members, pair programming, etc.
William: Looking forward to it!