Kaizen: Experiments in Scaling Agile
(This post was written in collaboration with William Kammersell and simultaneously published on the Agile Central blog. You can also listen to a conversation with us on the Product Popcorn podcast — Episode 18)
Agile teams adapt and thrive through retrospectives and process experiments, but what about organizations? How can they experiment when the cost of change scales steeply with the number of people affected?
In my organization we practice kaizen, or continuous process improvement, as we build a product called Agile Central (formerly Rally), so we decided to apply it to answer these questions. Below we share our failures, successes, and learnings, in the hope that you are inspired to experiment too!
Why experiment with Teams of Teams?
On our Release Train we have seven front-end development teams spread across three locations in the United States, all collaborating in a monolithic codebase. We have a strong base of agile practices and ceremonies that are widely adopted and help ensure we build the right product the right way. Teams commit to a set of features each quarter and execute on them consistently.
Despite these practices, we found ourselves faced with several challenges. Specifically, we struggled to execute our product vision of revamping key aspects of Agile Central, like building new backlog pages and boards, crucial tools for any agile team.
Issues we faced included:
- Marketable releases were disjointed with no clear theme
- Teams felt disconnected between their work and customer value
- Long time from idea to market and corresponding low morale
- Lack of progress on the larger product vision
As these issues came up in retrospectives, we decided to run an experiment to address them. Our hypothesis was that if we organized teams cohesively around related groups of features, or “initiatives,” then we would see improvement in the above areas. We would also see an increase in value delivered, thanks to shorter time to market and more work flowing through the system. At the start of 2016 we began this experiment, and we called these initiative teams of teams “swarms.”
What are swarms?
Two to five teams worked together on a common initiative, acting both as individual teams, and collaborating together as a larger swarm team. Each individual team had a Product Owner, Scrum Master, several Developers, and a Tester. Each swarm had a Product Manager, Architect, UX designer, UX researcher, and Agile Coach. Swarms were given the autonomy to organize ceremonies and collaborate in ways that worked best for them. Some had stand-ups with the entire initiative team twice a week, some moved their desks so all teams were sitting near one another, some had video hangouts open to easily hear and converse across locations, and some even combined planning meetings and other agile ceremonies.
We came together as a release train at our communal planning events, then met once a week to make adjustments to the plan together and stay connected. Thus we had opportunities to continue to collaborate as a Release Train as well.
To measure the success of our hypothesis, we tracked throughput and other data, and surveyed our teams to get their Net Promoter Score and other feedback on swarms. Whether it would succeed or fail, we would learn a lot!
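For readers unfamiliar with the metric: NPS asks respondents how likely they are to recommend something on a 0–10 scale, then subtracts the percentage of detractors (0–6) from the percentage of promoters (9–10). A minimal Python sketch of that arithmetic (our actual survey tooling computed this for us; the example scores are illustrative, not our real data):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6 (7-8 are passives and count
    only toward the total); NPS = %promoters - %detractors.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical example: one promoter, seven detractors out of eight
print(nps([2, 3, 5, 6, 9, 1, 4, 0]))  # → -75
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why a negative value signals real dissatisfaction.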
The Survey Was Sent
We worked in swarms for three quarters. During that time we found our feature delivery rate remained constant, both by count and by estimate. We surveyed our release train using the survey below to see how they felt about swarms.
Our NPS of -85% was horrible! We could see that swarms were hurting our agile culture and we saw that employees disagreed with many of our hypothesis statements.
What we heard was that focusing on related features didn’t feel like focus at all. It led us to larger implementation plans that required extensive collaboration and cumbersome refactoring. To keep that many teams working on one initiative, implementation grew to as many as five related features in progress at once. That’s a lot of work in progress! Teams were frustrated as they stepped on each other’s toes and were forced to delete code or spend hours refactoring. Refactors broke the work of multiple teams at a time, creating ill will and stress that was detrimental to our culture. It felt like teams were moving ahead of customer feedback, which raised questions about how the work actually delivered customer value. Teams started to ask “What is our focus?” and “Where are we going?”
On the positive side, chaos led to new collaboration occurring between roles. With work moving so fast, the entire swarm had to learn to communicate better and more frequently. Swarm meetings evolved and joint stand-ups emerged. We started to see more developers discussing how they approached the code, defining best practices together, reviewing code across teams to share knowledge, and catching issues earlier.
The general consensus was that teams felt we did have better focus on delivering customer value, but that we lacked a clear plan to deliver that value. We struggled with collaboration across so many teams. Teams didn’t want to lose the sense of camaraderie gained by working closely with people they would not normally work with, but they wanted smaller swarms with the autonomy to prioritize work and define success. Ultimately, the size and prioritization of the work left much to be desired.
So We Did What Agilists Do Best, and We Pivoted
Given the resulting challenges and feedback, we made several optimizations to our setup to help us plan for faster, more focused delivery, while maintaining the positive improvements to collaboration and best practices:
- Source swarms from teams in the same office
- Organize work to limit the number of teams in the same section of code
- Ensure all work is delivering value as quickly as possible
- Feed focus with smaller slices of work that can be experimented on and released faster
- Shorten our team planning horizon from three months to six weeks
Results Round II:
After a quarter of working with these adjustments, we ran another survey. NPS is up to -8%! And we saw improvement in every category.
Our NPS is still negative, so we have plenty of room for improvement. Most notably, we scored low on our prioritization decisions, and we will look to improve there next. In the coming quarter, we are exploring changes in communication and earlier collaboration with development teams when determining priorities.
We continue to make new happy mistakes every day, and feel overall we are seeing continued improvements in morale and value delivered.
Now it’s time to run your own experiment and let us know how you Kaizen!
Comment here or tweet us @LieschenGQ @howtotrainapm @ca_Agile
If you are interested in how our swarms self-organized, or our planning and other ceremonies, stay tuned for our follow-up posts!