Agile Maturity

Yevgeni Mumblat
Published in Gett Tech
6 min read · Aug 16, 2023

A great methodology to analyze and grow agile teams

This blog post is the result of a collaboration with Lital Gur-Arie, as the work of a Project Management Officer (PMO) and an R&D manager at Gett is tightly coupled.
We do a lot together at Gett: discovery, planning, analysis, execution and release at the team level are all based on cooperation within a triangle of the Product Manager (PM), the PMO and the R&D manager, who are closely coordinated with each other. This leadership team of three personas is also the one that holds responsibility for the agile maturity of the team.
Lital now works for AppsFlyer, but this blog is based on the principles we shaped during our long-lasting cooperation at Gett.

As we all know, every team, especially a software engineering one, has its own internal dynamics, which evolve over time. A common diagram, Tuckman's stages of group development, is used to describe this process:

Team evolution cycle

Clearly, our aspiration is for the team to reach the "Performing" phase: a period in which it is highly coordinated, acts synchronously and independently, and successfully delivers quality products at a predictable pace.

So far so good.
Many superlatives and clichés.
Let’s try to break it down into real-life practice.

Most organizations in the tech industry have adopted variations of elements from common agile methodologies (Scrum, Kanban, SAFe, etc.) or combinations of those. Thus, it seems reasonable to address the aforementioned team evolution process by discussing teams’ agile maturity.

There are various models that measure agile maturity. At Gett, we gathered the relevant information and shaped our own adjusted model, which facilitates evaluation of a team’s maturity, triggers discussion, and leads to actions, in order to continuously develop and grow our teams.
After a thorough analysis, we chose to focus on the following categories, which are typically used in the industry:
Value delivery, Predictability, Continuous Improvement, Winning teams and Quality.

Achieving high performance in each of these categories would mean that our team is mature and performing according to our expectations.

Our tool is a questionnaire divided into sections corresponding to the categories mentioned above. The questions differ in the maturity level they indicate: some point to lower maturity, others to a higher one.
Let’s try to dive deeper into each of the categories, and elaborate on the different parameters we decided to measure for each of them:

Value delivery
Our purpose in measuring the perception of value delivery is obvious: to check what team members think of the team’s productivity and effectiveness in terms of generating value.
Some questions that could be suggested for this section:

  • Does the backlog contain small, clear, estimated and testable user stories for the coming PI? (for at least 80% of the epics by the PI Planning)
  • Is the epic’s business value (problem, solution approach, etc.) well defined and articulated by the PM by the grooming meeting?
  • Is the team familiar with the business value of the epic they are working on?
  • Do the team members have cross-functional/component knowledge (to be able to code in other teams’ services)?

Predictability
One significant aspect of a team’s successful delivery is its stability, as expressed by the predictability of its delivery. Some potential questions to ask in this context:

  • Are the teams stable? (same group of people working together over time)
  • Does the team have a stable velocity over time?
  • Is actual velocity used for sprint/PI planning?
  • Does the team usually deliver what they committed to in PI planning?
  • Is the team protected and not disrupted (or controlled by outsiders) during the sprint?
  • Are we tracking ‘PI Burndown’ (plan vs actual)? Are burndown charts used by the teams to track sprint progress?

Continuous Improvement
We want our teams to perpetuate a continuous cycle of analysis and improvement. This category of questions measures the maturity of the team from that perspective.
Some examples of questions that can be asked:

  • Does a retrospective take place at the end of each Sprint/PI? Does the whole team participate in the retro?
  • Does the retrospective translate into a concrete improvement proposal?
  • Are cycle times measured and actions taken to reduce them?
  • Does the team take part in learning from the industry, and do they feel that they are?

Winning teams
There are some additional parameters, some “rules of thumb” we would like to apply to a team’s work, based on our experience as an organization. Among those we can mention the following:

  • Is the size of the team between 7 and 9 people?
  • Do all team members take part in estimating the user stories?
  • Is the team familiar with the Sprint goals?
  • Is the PM available for the team?
  • Can each of the team members perform every task (knowledge matrix)?

Quality
Last but not least is quality. We would like our teams to develop features of the highest quality. Thus, in this category we focus on quality-related topics to evaluate the team’s output from the quality perspective. We can ask questions such as:

  • Does the team meet the DoD for epics?
  • Do we have clear acceptance criteria for epics and stories?
  • Are we using continuous integration?
  • Are the testing and demo of the epic done at an early stage in the sprint/PI to get early feedback?
  • Are we adding automation of new features to regression tests/ CI?

What do we do next?
Having defined those criteria, we are ready to proceed and measure what our teams think and how they evaluate themselves.
Different functions on each team (e.g. team members, Product Managers, Team Lead) take the questionnaire. The information is processed and then discussed by the team.
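As a minimal sketch of how such a questionnaire could be represented and graded (the category names come from this post, but the 1-5 answer scale, roles, question counts and scores are illustrative assumptions, not Gett's actual tooling):

```python
from statistics import mean

# The five categories described in this post.
CATEGORIES = ["Value delivery", "Predictability", "Continuous Improvement",
              "Winning teams", "Quality"]

# Hypothetical responses: each respondent records their role on the team and
# an answer per question, on an assumed 1-5 agreement scale.
responses = [
    {"role": "Team member",
     "scores": {"Value delivery": [4, 3], "Predictability": [2, 2],
                "Continuous Improvement": [4, 5], "Winning teams": [2, 3],
                "Quality": [5, 4]}},
    {"role": "TL",
     "scores": {"Value delivery": [4, 4], "Predictability": [3, 2],
                "Continuous Improvement": [3, 3], "Winning teams": [3, 2],
                "Quality": [5, 5]}},
]

def category_score(respondent: dict, category: str) -> float:
    """Average one respondent's answers within a single category."""
    return mean(respondent["scores"][category])

print(category_score(responses[0], "Quality"))  # 4.5
```

Keeping the role attached to each response is what later lets us compare how different functions on the team perceive the same category.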
Through the process we would like to achieve the following goals:

  1. Surface perception gaps between different functions.
  2. Raise awareness of different processes and encourage discussion between team members.
  3. Use the responses as a baseline for future evaluations.

It is important to note that this evaluation process probably wouldn’t be effective for a team still in its Forming, Storming or Norming phase, as such a team is probably not yet ready. We typically run this process with teams that are already stable and mature.
For each team we conduct a survey in which they answer the questions above.
We gather the answers, grade the responses, calculate the statistics, and then represent the data in a graph similar to the following:

Team’s results mapping

We can see, for instance, that the team above feels great about the quality of its deliverables, its value delivery, and its continuous improvement. On the other hand, team members are not satisfied with the team’s predictability or with how it measures up as a winning team.
Another interesting observation worth discussing is the differing perspectives on continuous improvement: it seems that the TL and PM perceive that topic differently than the rest of the team.
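The aggregation behind such a chart, and the gap check between functions, could be sketched as follows (the per-role scores and the gap threshold are illustrative assumptions):

```python
from statistics import mean

# Hypothetical graded results: role -> category -> average score (1-5 scale).
graded = {
    "Team member": {"Predictability": 2.0, "Continuous Improvement": 4.5},
    "TL":          {"Predictability": 2.5, "Continuous Improvement": 3.0},
    "PM":          {"Predictability": 2.5, "Continuous Improvement": 3.0},
}

def team_average(category: str) -> float:
    """Overall team score for one category (the value plotted on the chart)."""
    return mean(scores[category] for scores in graded.values())

def perception_gaps(category: str, threshold: float = 1.0) -> list[tuple[str, str]]:
    """Pairs of roles whose scores for a category differ by at least `threshold`."""
    roles = list(graded)
    return [(a, b) for i, a in enumerate(roles) for b in roles[i + 1:]
            if abs(graded[a][category] - graded[b][category]) >= threshold]

print(team_average("Continuous Improvement"))    # 3.5
print(perception_gaps("Continuous Improvement")) # team member vs TL, team member vs PM
```

A flagged pair is not a verdict, only a prompt: it tells the team which category deserves a conversation about why two functions see it so differently.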

We recommend choosing no more than two categories to work on with a specific team, within a certain (not too short) period, a quarter for instance.
When the team discusses the results, it defines action items to improve its maturity in the chosen categories. After the agreed-upon period, we follow up, conduct the survey again and run a retro.
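One simple shortlisting heuristic (an assumption on our part; in practice the team discusses the results and chooses) is to flag the two lowest-scoring categories as candidates:

```python
# Hypothetical per-category team averages on a 1-5 scale.
team_scores = {"Value delivery": 4.2, "Predictability": 2.4,
               "Continuous Improvement": 4.0, "Winning teams": 2.8,
               "Quality": 4.5}

# Sort categories by ascending score and keep the two weakest.
candidates = sorted(team_scores, key=team_scores.get)[:2]
print(candidates)  # ['Predictability', 'Winning teams']
```

The shortlist then seeds the retro discussion; the team still decides which categories to commit to for the coming quarter.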

Improving a team’s agile maturity is an iterative process, and we can always grow further!
