Implementing a maturity framework for our teams at Packlink

Asier Marqués
Published in Packlink Tech · 11 min read · Mar 15, 2022
Fruit that is not mature yet. Photo by Markus Spiske on Unsplash.

During the last quarter, we rolled out a new tool to measure and track the maturity level of our engineering teams at Packlink Tech.

TL;DR.

This article aims to share our motivation, references, and vision behind the maturity model and how and why we implemented an extensible framework that serves as a base for it.

It starts with the background of our recent years at Packlink and why being able to observe our teams' maturity was a concern in critical cultural processes such as onboarding new engineers and strategic decision making.

We will see how we designed and implemented the Maturity Framework to obtain two different outcomes.

The first is to establish a shared understanding across engineering and product of what we mean by maturity: the concrete expectations a team has to meet to reach a good maturity level.

The second is to make every team's maturity visible to everybody, not to compare teams but to better understand our context and use the Maturity Model as a guide to improve it. In addition, anybody can propose new expectations, in different cultural categories, for all the teams to consider.

Background.

Packlink Tech's team structure has iterated drastically over the last three years, and transforming an organizational structure implies changes in its teams.

We were in fast scale-up mode, and our engineering teams went through several transformations, driven mainly by these factors:

  • Changes in priorities while our structure was not yet mature enough.
  • Changes in the teams' missions or ownership scopes.
  • Growing headcount needs.

During those transitions, headcount needs and the teams' ownership scopes kept evolving, so we grew and split the teams, changing their composition in terms of roles and members.

Before our last engineering organizational transformation, we followed different approaches to grow the teams and onboard new engineering teammates.

The grow and split pattern.

This pattern is aligned with the one described in Heidi Helfand's book Dynamic Reteaming.

Before creating a new domain team, the team that owns the scope to be transferred is enlarged until it meets the composition constraints needed to create the new team.

Ref: Dynamic Reteaming, by Heidi Helfand.

When the original team grows enough, we can create a new team.

In addition to following this pattern, we also designed onboarding processes aligned with the needs of our context.

The growing mode onboarding approach.

We adapted the Grow and Split Pattern to ensure a good experience related to onboarding and cultural aspects.

A new person is not assigned to their final team as soon as they join. We follow a 90-day plan based on onboarding in another team, and we consider both the team and the person to be in "growing mode."

The growing mode has these expectations:

  • The person is onboarding, so they should not be part of a commitment with a deadline.
  • The person can't be part of a team that needs headcount.
  • The person shouldn't onboard in a team that doesn't have enough maturity.

Thanks to the growing mode, we can ensure that the new people have the space to learn about our ways of working and culture without extra pressure.

Desksurfing onboarding approach.

Another onboarding process that we implemented was a desksurfing rotation for new people.

During their onboarding, new workmates change their team and buddy (the person who supports a new workmate during their first days) every week or two, following a desksurfing plan, before starting in their destination team. Among the Spanish engineers, this approach was colloquially known as "Hacer un Erasmus" ("doing an Erasmus").

With this approach, we ensured that new people got to know workmates in other teams during their onboarding. That made communication easier whenever their destination team needed to collaborate with other teams.

Motivation for a maturity model.

In the previous section, we used the term maturity in several places.

When we decide how to assign a person to a team or plan a new transformation, we take multiple factors very seriously: cultural indicators, team metrics, and the team's context.
Each team's status with respect to all those indicators is what we mean by maturity.

The problem was that we didn't have a shared understanding of the concrete meaning of that term and its implications. And we didn't have a way to measure maturity that would make it easy for Engineering Managers and teams to design plans to improve it.

As part of a strategic plan, we needed to make the teams' conditions visible at any moment, especially while we were implementing a transformation.

Also, we needed to ensure a shared understanding among all the engineering teams of what we mean by a team's maturity, and to make its evolution visible, measurable, and understandable.

Is it evil to score the maturity of a team?

As with any other metric related to engineering teams, such as Lead Time, the debate arises about whether the metric is good or evil.

Managers can use any tool for good or for bad. For example, you can use these sorts of metrics to compare teams, which is the worst idea ever and a fast recipe for growing a toxic culture.

This risk is especially true when you compare teams in terms of maturity. Every team has its context, complexity, and external variables that affect its maturity.

A low maturity score is not a performance result; it should be used as a guide for teams when they iterate on their processes.

For a manager, it is a powerful tool for observing the situation of your context and an additional input for strategic and tactical decisions.

At Packlink Engineering, we follow a data-informed approach, so we don't draw conclusions based only on data inputs like this.

The framework.

In addition to a documented maturity model, we needed to design a framework to serve as its base.

The framework we needed had some requirements:

  • It should be extensible: we need to add new measurable parameters related to our context, and we needed a straightforward process to add new parameters or deprecate the ones that no longer make sense.
  • It had to be understandable by all the tech members: in our context, tech means product plus engineering; although the parameters are related to engineering, they had to be understandable by both audiences.
  • It had to be simple: an overly complex framework would make the maturity model complex too.
  • It should provide a metric system: the goal was to understand and measure our teams' maturity in a standard way.
    Because we follow a data-informed approach instead of a data-driven one, it doesn't have to be highly accurate, but it has to make sense.

The matrix of expectations.

The core of the framework is an expectation matrix.

This matrix is based on a table. Each row contains the definition and maturity levels for one expectation parameter.

The category, expectation, and weight columns.
Each expectation in the matrix belongs to a maturity category. We can have categories related to DevOps, QA, Product, Autonomy, Ownership, or any other aspect relevant to our understanding of maturity that we decide to include.

The expectation is the concrete maturity parameter that we want to measure, and the weight defines how much that parameter contributes to the metric.

The maturity levels.
We have the following maturity levels:
Not Initiated < Initial < In Path < Mature.

Every level defines a concrete expectation in terms of maturity that the team can use to improve their processes or ways of working.

The status of the parameter itself.
Each parameter has an implementation status. Only parameters with an active status are taken into account and measured.

We can propose new parameters and discuss their possible levels and our concrete shared understanding before activating them.
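
To make this concrete, here is a minimal sketch of how a row of the matrix could be modeled in code; the type and field names are ours for this illustration, not part of the actual Confluence table:

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    """The four maturity levels, ordered; the value doubles as the score."""
    NOT_INITIATED = 0
    INITIAL = 1
    IN_PATH = 2
    MATURE = 3

@dataclass
class Expectation:
    """One row of the expectation matrix."""
    category: str        # e.g. "DevOps", "QA", "Product", "Autonomy"
    name: str            # the concrete maturity parameter to measure
    weight: int          # how much this row contributes to the metric
    active: bool = True  # only active parameters are measured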

Gathering the teams' scores.

At the beginning of every quarter, the engineering managers set their teams' status for each expectation parameter row.

This allows us to see the status and trend of the teams' maturity across all the categories.

The metric calculation.

We compute a maturity percentage per defined category, and the metric is based on the level and the weight specified in the expectation matrix.

In this first iteration, each level column has a score value, going from 0 for the "Not Initiated" status to 3 for the "Mature" status.

We multiply each row's level value by its defined weight. The resulting metric is a simple rule of three:

perc = sum_of_all_row_scores * 100 / sum_of_all_highest_scores

where a row's score is its level value times its weight, and its highest possible score is the "Mature" value (3) times its weight.

It is far from a fancy or complex algorithm, but it works for us.
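
As a sketch, the whole calculation fits in a few lines of Python, reusing the hypothetical Expectation and Level types from the earlier snippet (the example indicators are invented):

```python
def maturity_percentage(rows: list[tuple[Expectation, Level]]) -> float:
    """Rule of three over the active rows of the expectation matrix."""
    active = [(exp, level) for exp, level in rows if exp.active]
    score = sum(level * exp.weight for exp, level in active)
    highest = sum(Level.MATURE * exp.weight for exp, _ in active)
    return score * 100 / highest

# Worked example with three rows of weights 1, 2 and 1:
rows = [
    (Expectation("DevOps", "CI pipeline in place", weight=1), Level.IN_PATH),  # 2 * 1 = 2
    (Expectation("QA", "Automated regression tests", weight=2), Level.MATURE), # 3 * 2 = 6
    (Expectation("Autonomy", "Owns its roadmap", weight=1), Level.INITIAL),    # 1 * 1 = 1
]
print(maturity_percentage(rows))  # (2 + 6 + 1) * 100 / (3 + 6 + 3) = 75.0
```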

But… what happens if you change weights or add new parameter rows?

All the parameters and definitions in the expectation matrix are related to our understanding of our teams' context.

The maturity parameters are context variables crucial for making decisions or drawing conclusions at a strategic level.
Context is everything in strategy.

So, for example, a team could have a very high maturity score in Q3, but if we introduce a new variable into the Q4 context, that score can change.
And it should change.

Transparency is crucial to making good decisions.

Implementing the first iteration of the maturity model.

Our current implementation is based on tools used as a standard at Packlink.

Atlassian Confluence for the expectation matrix.

We document the process and the expectation matrix in Confluence. It allows us to have conversations directly on every aspect, definition, and expectation level of the maturity model.

Google Forms for filling in the teams' scores.

Google Forms lets us register each team's score in a Google Spreadsheet that we can use to analyze the status of each team in every maturity category in a concrete quarter.
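
For example, once the responses land in the spreadsheet, a short script can aggregate them per team and category. This is only a sketch; the file and column names are invented, and the real form fields may differ:

```python
import pandas as pd

# Hypothetical export of the Google Forms responses.
df = pd.read_csv("maturity_scores.csv")
# Expected columns: quarter, team, category, expectation, level, weight
# where level is the score value from 0 ("Not Initiated") to 3 ("Mature").

df["score"] = df["level"] * df["weight"]
df["highest"] = 3 * df["weight"]  # 3 is the "Mature" level

summary = df.groupby(["quarter", "team", "category"])[["score", "highest"]].sum()
summary["maturity_pct"] = summary["score"] * 100 / summary["highest"]
print(summary["maturity_pct"])
```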

Google Datastudio to visualize the overall status of the teams.

We use Datastudio to visualize the maturity status of every category.

We can see the overall maturity score, filtering by team, quarter, and product stream (a cluster of teams).

Overall view (the data is invented).
Teams view (the data and team names are invented).

Maturity indicator proposal process.

One of our requirements for the framework was extensibility, which means that indicators can be added, removed, or evolved.

We defined a process to propose new maturity indicators; anybody in the engineering and product teams can propose, comment on, and vote on new indicators.

This is a powerful way to engage people with the process and socialize it.

Lessons learned.

Although the framework has been "in production" for only a quarter and a half, we have learned a lot during the implementation process.

The following are some of the main aspects we learned.

The meanings of words and terms have a powerful impact.

In the beginning, sentences like "this team is not mature enough" or "this team is the most mature right now" were frequent in our conversations.

We felt that this was not right, mainly because, without enough context, people can associate a low level of maturity with something negative that implies poor performance, which was not the case.
We consider comparing teams by their maturity levels toxic and a cultural antipattern.
Each team has its own context that needs to be understood. Some factors that imply a low level of maturity can be out of the team members' control.

It is crucial to ensure a shared understanding of the meaning of a concrete term, especially when the term can be misunderstood.

When we started to specify the different levels of maturity in the framework, we had a lot of conversations about the right name for the initial level.
The levels were:

Not Yet < Initial < In Path < Mature

Initially, we discarded terms like "unmatured" because of their negative connotations.
We had an interesting conversation to find a term that expresses a status needing improvement as soon as possible without implying extreme negativity.

Finally, we decided to replace the Not Yet term with a less negative one: Not Initiated.

Data-informed vs. data-driven.

We wanted to make the teams' maturity and its evolution visible in order to make better decisions and better understand the engineering organization's conditions. But it should not be the only input for making decisions or drawing conclusions.

We prefer the data-informed approach to the data-driven one, especially for understanding situations that involve people and teams.

Iterative mindset.

One of the main reasons to implement an extensible framework is to extend the maturity model with new indicators. Following a lean approach, we needed to deliver a minimum version fast to draw conclusions as soon as possible.

We set the minimum useful indicators to visualize the core cultural and operational aspects that affect our decisions, especially team structure and the onboarding of new engineers.

We detected many pain points in the engineering teams and were able to design a better structure that solved many of them, raising the maturity of the entire tech department.

Maturity expectations are not static; they need to evolve and change.

Especially in a continuous improvement culture, expectations keep changing, and the bar is raised continuously.

A maturity status at a particular time will no longer be mature enough once we have internalized new practices and learned from experience.

With this framework, it was clear to us that the metric is not the goal. The goal is to give our teams a clear guide to understand the expectations shared across the engineering and product areas and to own their decisions about reaching a concrete maturity level, without obsessing over a metric that can change when the context changes.

Conclusions.

With the Maturity Model, we can ensure a shared understanding of what we mean by a team's level of maturity, and the teams can make concrete plans to improve their maturity using it as a guide.

References.
During the design of the maturity framework, the following resources were especially interesting:

  • Dynamic Reteaming, by Heidi Helfand.


Asier Marqués
Packlink Tech

For the last few years, Asier has held senior engineering leadership, management, and director roles at different tech companies.