Lessons on Using Team Health Checks

Natasha Ah-Fat
6 min read · Sep 7, 2021

I’ve been writing a book on Enterprise Agility, and in one of the chapters I talk about ‘structures’ you can put in place to support agile adoption. It got me thinking about how team health checks are a powerful retrospective tool.

They can help a team to recognise how it’s performing, and decide what changes it can make to optimise its performance. It’s important to encourage teams to reflect constantly and find ways to improve, so that they hold themselves accountable and understand, and can leverage, the power and control they have within their team.

However, I wanted to share a simple experience I once had when introducing health checks, and why it pays to be cautious when making comparisons. I’ll highlight the pros and cons of comparing, and why it’s important to understand the detail behind health checks.

What are Team Health Checks?

Team health checks are usually based on a set of criteria, each measured on a sliding scale. Teams then regularly self-monitor how well or badly they’re performing against each criterion, and work towards improving or maintaining good performance.

The team usually has control over deciding the criteria, although sometimes they might be set across a number of teams in the same area. These might include behaviours, values or metrics: for example, psychological safety, fun, learning, quality indicators, value, lead time, throughput and so on.

The scale could be numerical (1–5), or based on images (red, amber, green traffic lights, or even emoticons), but most importantly, the team self-assesses. This encourages team accountability, promotes a sense of empowerment, and nurtures a growth mindset through optimisation and continuous improvement.
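
To make this concrete, here’s a minimal sketch of what a numerical (1–5) health check might look like as data, and the kind of trend a team could review between check-ins. Everything here is illustrative: the criteria, the HealthCheck record and the trend helper are hypothetical names, not part of any particular health check tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HealthCheck:
    """One self-assessment: each criterion scored on a 1-5 scale."""
    when: date
    scores: dict[str, int]  # criterion -> score (1 = poor, 5 = great)

def trend(previous: HealthCheck, latest: HealthCheck) -> dict[str, int]:
    """Per-criterion change since the last check-in (positive = improving)."""
    return {c: latest.scores[c] - previous.scores[c] for c in latest.scores}

# Two consecutive check-ins for one team, with made-up scores.
march = HealthCheck(date(2021, 3, 1), {"fun": 2, "learning": 4, "quality": 3})
june = HealthCheck(date(2021, 6, 1), {"fun": 4, "learning": 4, "quality": 2})
print(trend(march, june))  # {'fun': 2, 'learning': 0, 'quality': -1}
```

The point is that the team owns both the criteria and the scores; the data only needs to be comparable with the team’s own previous check-ins, not with anyone else’s.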

Why it Pays to be Cautious

Team health checks are not about who is doing better or worse compared to everyone else, but more about how a team feels about their own performance, and how they’re optimising in their own sphere of power and influence. This is mostly because each team has its own dynamics, environment, boundaries, and conditions to deal with, and besides, self-assessments are mostly subjective.

However, sometimes the problems teams face are outside of their control or experience, and so teams will seek external support to help them progress. This usually happens across broader levels, where dependencies and relationships can become more complex, and may impact several teams.

Many managers or leaders will use synthesis to look at patterns of change or similarities between teams, and to think more holistically about the leading constraint across broader areas. We use this idea in lean practice: according to the theory of constraints, a system can evolve no faster than its slowest integration point. The faster the whole system can be integrated, the faster whole-system knowledge grows.

So why does it pay to be cautious when looking at comparisons?

Several years ago, I worked on a digital programme that had started using team health checks. Initially we had great participation and interest from the teams; that is, until we started investigating lead times.

Lead Times That Led to Trying Times

Our intention in looking at lead times wasn’t to compare them directly between teams and spur any kind of competition; rather, having recently moved to agile and leaner ways of working, we wanted the data to provide tangible statistics on performance improvements for the whole programme.

This was incredibly useful and showed our investment in the transformation was leading to more value being delivered to customers, faster — and it was improving as time went on.

Unfortunately, the health checks then became demotivating, and came to be seen as a chore by the teams whose lead times were significantly higher than other teams’. Over time, these teams became less responsive to carrying out health checks and more frustrated with the idea of optimising.

After a large-scale retrospective, we realised our intent for the comparisons hadn’t been communicated well: we were actually only interested in the overall programme’s lead times that year versus previous years, not in comparisons between the teams.

Nor had we initially explained that the lead time analysis showed every team had improved its lead time since changing its ways of working, and that this was more valuable to us. In other words, we cared less about the lead times themselves and more about the improvement made to each one, and that was what we wanted to celebrate.
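
To illustrate with made-up numbers: ranking teams on raw lead time looks very different from measuring each team against its own baseline, which is what we actually cared about. A minimal sketch, with entirely hypothetical figures:

```python
# Hypothetical lead times in days: (before, after) the new ways of working.
lead_times = {
    "team_a": (30, 18),
    "team_b": (90, 54),
    "team_c": (12, 9),
}

for team, (before, after) in lead_times.items():
    improvement = (before - after) / before * 100
    print(f"{team}: {after} days lead time, improved {improvement:.0f}%")

# team_b still has by far the longest absolute lead time (54 days), yet its
# 40% improvement matches team_a's; ranking on raw lead time alone would
# hide the fact that every team improved.
```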

After some honest conversations, and by clarifying our intent and our findings, the teams became less worried and less likely to ‘game’ or inflate their health checks. They understood the value of investigating their own team’s performance and asking themselves how to improve.

Health Checks Can Lead to New Discoveries and Allies

We also uncovered that a big reason for the growing frustration was that the teams felt they couldn’t optimise any further within their sphere of influence and power, and had hit a brick wall. They simply didn’t have the time or experience to deal with the issues.

I revisited the value stream of the programme and its delivery processes and people. I wanted to understand the interconnections surrounding their problems and whether I could pinpoint the leading constraint. I discovered that all the teams with longer lead times had processes imposed on them by the same external team.

This external team owned both a resource request process and a resource allocation process, and these caused significant blockers that kept repeating in our teams:

1. The external team couldn’t provide enough highly skilled, high-demand resources when we needed them.

2. They were prioritising their requests on a first come, first served basis.

3. The nature of their work was often complex, constrained by lead times, and required only several hours or days of effort. They therefore spread their effort part-time across many teams in different programmes, and juggled internal team initiatives when they had ‘down time’.

So not only could we not get the skills we needed, but we also couldn’t control when we would get access to them.

The solution was simple. I walked through our hypothesis on what was causing our leading programme constraint, so that the external team could see the problem more holistically. They were used to working independently, in silos, so the collective impact on our programme had not been clear to them. We also discovered that they didn’t feel empowered to make structural changes to how they operated.

We explained the impact the multiple issues were having on getting value to our customers, and got the external team excited about the way we wanted to work with them and how they could contribute to solving the problem, since they were part of the same ‘system’.

In the end, they understood the idea of limiting their work in process so that they could become more productive. We were able to negotiate full-time resources from the external team that were dedicated solely to the digital programme — this was agreed as an experiment, on a rolling quarterly basis.
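
The intuition behind limiting work in process can be sketched with Little’s Law, which relates average lead time to average WIP and throughput. The figures below are entirely hypothetical, and in practice limiting WIP often raises throughput too, by cutting context switching, so this is a conservative illustration:

```python
def avg_lead_time(wip: float, throughput: float) -> float:
    """Little's Law: average lead time = average WIP / average throughput."""
    return wip / throughput

# Spread thinly across many programmes: many requests in flight at once.
print(avg_lead_time(wip=12, throughput=2))  # 6.0 weeks per request

# Dedicated to one programme with a WIP limit, throughput held constant.
print(avg_lead_time(wip=4, throughput=2))   # 2.0 weeks per request
```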

This enabled us to gain full control over prioritising our programme workload — which also meant that teams in the digital programme were collaborating more closely to understand each other’s workload and prioritisation, and they could more easily negotiate control over resources. A secondary benefit went to the external team. They started to rethink and restructure the way they operated more widely, which benefited other programmes.

A simple but impactful example was that people they had allocated to our programme could no longer work on their internal team initiatives. Instead, any ‘down time’ was put to better use, such as personal development.

Since the experiment was deemed successful, they began rolling out this approach to other programmes, products and projects. Eventually, this meant that people could be allocated full-time to their internal team initiatives, channelling more commitment and progress into their own goals.

Parting Thoughts

This story shows us why it’s important to be very explicit in communicating the WHY: in our case, “what was the purpose of measuring lead times?”. When people understand the purpose of doing something, they’re more likely to contribute towards it, feeling valued and proud to be part of something bigger than themselves.

Another learning thread comes from thinking in systems: changes in one part of a system may lead to chaotic impacts elsewhere. By inspecting your interconnections and bringing sub-systems together to solve a problem, new practices can emerge, and those impacts can be turned into something positive rather than chaotic.

Finally, if a team compares its own performance between self-assessments, it will feel empowered to make performance improvements and will grow in confidence about its level of control and authority.

It’s important to build an open culture with a friendly, inclusive environment; one where questions about output, such as “what have you done?”, are nowhere near as compelling as questions about improvement, like “what has changed, what did you learn, or what support do you need?”.

So it can be useful to look at comparisons, as long as we remain cautious. Focus on the patterns of change or similarities between or within teams, keep asking the right questions, and maintain curiosity.

Natasha Ah-Fat

Tash is a Transformation Lead and Agile Coach. She’s passionate and curious by nature, loves working in complex domains and values learning and helping others.