When we talk about agile transformation, we often say “we have a transformation plan”. It could be some pattern we want to implement. It could be a framework — like SAFe, LeSS or Nexus. It could be a concept or vision. It could be a goal and a way to reach it.
This article describes a transformation that happened without a plan. We ran a series of experiments, observed their results — and then took the next steps.
What was the project about?
The goal of the Secret Project was the implementation of a certain product in the organization. The product was bought from an external company — but there was a lot of work on integrating and customizing it.
We had around 8 independent development teams making changes to many different components.
The starting point
In the beginning, the teams tried to follow the Scrum framework. Each team had its own Product Owner and Scrum Master. Teams held all the Scrum events. Teams worked in independent sprints that were not synchronized — the sprints started and finished on different days. Every team had a separate Sprint Review. After this Review, they moved their changes to the common TEST environment. And there was an independent QA Team which tested all changes on the TEST environment.
Here’s a diagram of this setup. For simplicity, we didn’t draw the DEV environments, and there are only 3 development teams in the picture:
What was the problem with this setup? It was not easy to decide when the changes should be deployed to the PROD environment. When teams constantly deliver changes to the TEST environment, at some point we need to say “Let’s stop delivering changes for a while, so the QA Team can run all the needed tests — then we deploy the changes to PROD”. In other words, we needed a “freeze period” for all the necessary testing.
But wait, we wanted to have Scrum! There are no “freeze periods” in Scrum, are there? So we were definitely doing something wrong.
It was a good time for the first experiment…
Let’s have one sprint, not many sprints
The first experiment was about having one common sprint for all teams. We wanted a moment in time when all teams delivered their changes to the TEST environment so we could test them together.
There was some trouble introducing this synchronization (“Why should we adjust to them? They should adjust to our schedule!”) — but we made it. And our process started to look like this:
After each sprint, all teams delivered their changes to TEST. We had a few days to test all the changes together. And then there was a deployment to the PROD environment — marked in the picture by the red arrow.
What was good? We were able to deliver changes to PROD after every sprint.
But we noticed that separate Sprint Reviews had many drawbacks. There were stakeholders who wanted to join several reviews — so the reviews had to be scheduled one after another. Most team members joined only their own review, so teams worked in a kind of isolation. They were not aware of the changes made by other teams.
What if we held one common Sprint Review instead of many separate reviews?
Let’s have common Sprint Review!
We didn’t know if it was a good idea, but we wanted to try.
We created a list of stakeholders. We invited them all to a common meeting. We prepared a schedule with a 20-minute slot for every team. We chose a facilitator for the event. And we held one common Sprint Review for all teams:
There were many doubts about this idea — but the general impression after the first common Sprint Review was very good. It was much easier to get an overall view of the delivered changes and the future development plans.
But in the meantime, another problem came to light. It turned out that teams were quite often disrupted by bugs found while testing the changes:
What were the consequences of this situation? Plenty of them.
- Teams needed to split their focus between development work from the current sprint and fixing problems from the previous sprint
- Sprint planning became less predictable (“we planned to deliver features A, B and C — but we must fix bugs instead…”)
- When fixes were not delivered in time, the PROD deployment was delayed
- There were symptoms of conflict between members of the Scrum Teams and the QA Team (“they always deliver buggy features” vs “they always register too many bugs”)
It was time for the most complex experiment in our story…
Let’s include acceptance testing in the sprint
In short, this change consisted of three elements:
- We decided to split up the QA Team and move a QA person into each and every team
- Teams were encouraged to deliver changes to the TEST environment during the sprint (the sooner, the better)
- We changed the Definition of Done from “a Backlog Item is finished when it is ready to install on the TEST environment” to “a Backlog Item is finished when it is installed and tested on the TEST environment”
We got a lot of pushback against this change. There were many talks, discussions and arguments: “it will never fly”, “our performance will drop”, “it’s not possible to deliver and test valuable items in such a short period”.
But we managed to convince the organization to run this experiment. And our picture started to look different:
“Testing” is marked as a separate box in the picture above — but in fact it was a normal activity that happened inside the sprint.
After the first sprint in the new setup, we needed to make some corrections in the package before the PROD deployment. But in the second sprint we reached our goal — the version presented at the Sprint Review was exactly the same version that was deployed to the PROD environment after the review. Additionally, teams completed more Backlog Items compared to previous sprints!
All good? Not yet. We observed that some teams delivered mainly changes to services. This observation was the background for the next experiment.
Let’s treat some teams as service providers
Synchronization between many teams has a cost. The more independent a team is, the wider its area of self-organization can be. So we wanted to check the following hypothesis: “if a team delivers some services, it can be treated as an independent service provider”. And our picture changed once again:
Team independence requires well-defined boundaries. We needed to define clear responsibility for delivering and testing the changes.
However, the change had clear benefits. Team A could use its own environments, sprint cycle and tools. It didn’t have to participate in the common plannings and reviews. It could set up its working environment in its own way.
Hey, was it really so easy?
Here’s our evolution as a set of pictures:
Was it really so linear and easy? Of course not!
All these changes were messy. There were many different opinions, concerns, points of view, fears and judgments. The changes didn’t happen in a linear way — they overlapped each other. The result of one change had an impact on other experiments. Sometimes the results of a change were not clear. Sometimes one person said “it’s much better now”, while another said “it’s harder to work now”.
So how could we say that these experiments brought good results?
We had a goal for all these changes: to deliver a new version of the product to our customers after every sprint. How successful were we? You can see it in the picture below. A green “OK” card means that the changes from a sprint were deployed to PROD on the planned date. Red cards mean that the deployment was either delayed or cancelled.
After this series of changes, we reached a point where we delivered changes to PROD without significant problems or delays. So we really did start to deliver releasable increments after every sprint.
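The metric behind those green and red cards is simple arithmetic, and tracking it per sprint is enough to see whether the experiments are paying off. A minimal Python sketch — the outcome data below is purely illustrative, not the project’s actual history:

```python
# One boolean per sprint: True = changes deployed to PROD on the planned
# date (a green "OK" card), False = deployment delayed or cancelled (red).
# Hypothetical example data, not the real project record.
outcomes = [False, False, True, False, True, True, True, True]

on_time = sum(outcomes)                 # True counts as 1, False as 0
rate = on_time / len(outcomes)          # fraction of sprints delivered on time

print(f"On-time PROD deployments: {on_time}/{len(outcomes)} ({rate:.0%})")
```

A run of trailing `True` values is the signal the article describes: later sprints deliver on schedule, even if the overall rate is dragged down by the early ones.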
Were all experiments successful?
Of course not! Not all our changes and experiments were successful. We are still experimenting with common planning for all teams — we are not satisfied with the results yet. We are looking for a better process and tools for maintaining the roadmap and for strategic planning. We are making some changes to the team structure.
Irrespective of their results, all the experiments provided us with knowledge. And I believe this is their most important outcome. You can spend long hours and days discussing the pros and cons of different frameworks and techniques. Or you can make a change and observe its results.
You can build scaled Scrum setup through responding to the change instead of following the plan.