Your Team Sucks!? Try This Indicator to Identify Implicit Defects of Your Workflow

Your Agile Coach
Published in Agile Insider · 6 min read · Jul 12, 2024

Data tells you more than you know

Your Team Sucks!? Try this to evaluate your collaboration level

An Idea Coming From Cross-Functional Conflicts

Several months ago, my team was working on an internal IT system integration project, which aimed to reduce operating costs for different business units. We needed to collaborate with another technical team because most functionalities were extracted from the current product, and we depended on them to craft new features. Sometimes they had to develop new APIs for us.

Our responsibility was primarily to integrate existing APIs into the new platform, and we needed the other team to check related APIs for us or to develop new ones based on product requirements.

However, we noticed a strange phenomenon.

The other technical team often skipped checking APIs and delayed developing new ones, which postponed our progress. Our engineers felt frustrated working with them, and conflicts began to brew because the delays had significantly increased our release time.

Therefore, I started thinking about how to verify whether their internal collaboration was streamlined, and which metric I should apply to understand the problem.

How to Detect Defects of External Business Units?

While I was thinking about how to evaluate the performance of an external team without knowing its real workflow, an idea came to my mind: lead time. I once wrote an article comparing lead time and story points. To put it simply, lead time is the duration between when an item is selected into a workflow and when it is delivered to customers. But how could I use it?

Although I didn't work in the other team at the time, I could access ticket information across all the teams. So I decided to make a bold attempt: analyze the other technical team's lead time data over the second half of 2023. In fact, we could treat their workflow as a black box and focus only on the duration between INPUT and OUTPUT.

evaluation model
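As a sketch of the black-box idea above: lead time needs only two timestamps per ticket, when the item entered the workflow (INPUT) and when it was delivered (OUTPUT). The ticket IDs, field names, and dates below are invented for illustration:

```python
from datetime import date

# Hypothetical ticket records. Only the black-box boundary timestamps matter:
# "selected" (INPUT) and "delivered" (OUTPUT); internal stages stay invisible.
tickets = [
    {"id": "API-101", "selected": date(2023, 7, 3), "delivered": date(2023, 7, 21)},
    {"id": "API-102", "selected": date(2023, 7, 10), "delivered": date(2023, 8, 18)},
]

def lead_time_days(ticket):
    """Lead time = delivery date minus selection date, in days."""
    return (ticket["delivered"] - ticket["selected"]).days

for t in tickets:
    print(t["id"], lead_time_days(t), "days")
```

Because this needs nothing but two dates per ticket, it works even when you have no visibility into how the other team actually organizes its work.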

I took some time to aggregate the data, as shown below.

Lead Time Statistics

Basically, I grouped items by the month their deadlines fell in. Then I aggregated the lead time data for each month and calculated the average lead time, standard deviation, and median. Finally, I plotted the data on a month vs. days chart to demonstrate the other technical team's performance.
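A minimal sketch of that monthly aggregation, using Python's standard library; the month buckets and lead-time samples (in days) are invented for illustration:

```python
from statistics import mean, pstdev, median

# Lead times (in days) grouped by the month each item's deadline fell in.
# The numbers are made up, but shaped like the pattern described in the text:
# average and spread both climbing month over month.
lead_times_by_month = {
    "2023-07": [5, 7, 6, 9],
    "2023-08": [8, 14, 10, 25],
    "2023-09": [12, 30, 9, 41],
}

# pstdev is the population standard deviation; use stdev if you treat each
# month as a sample instead.
for month, days in sorted(lead_times_by_month.items()):
    print(f"{month}: avg={mean(days):.1f}  stdev={pstdev(days):.1f}  "
          f"median={median(days):.1f}")
```

Plotting the three series per month reproduces the kind of chart described below: the average tracks overall delivery capability, while the standard deviation tracks predictability.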

It was amazing when I visualized the data, because it revealed lots of hidden information, including the team's delivery capability, delivery predictability, and real development defects.

For example, the average lead time (the blue line) had increased by up to 4 times. This index usually represents overall delivery performance, so it was obvious that the other technical team's delivery capability was poor. Even worse, the standard deviation had risen by 8 times, which meant that once an item was selected into the workflow, we could rarely predict when it would be delivered. This index points to the predictability of the workflow. So I was confident in claiming that customers would be dissatisfied with such unpredictable delivery, not to mention us.

From this, I drew two conclusions. First, the other technical team's workflow had hidden defects that resulted in postponed delivery. Second, we had better eliminate our dependency on them as much as we could to protect our productivity.

What I Did After Getting the Result

Based on those statistics, I decided to take on the API-checking work myself, helping my team clarify what kinds of functionalities we should extract from the product, in order to reduce our dependency on the other team. The conflicts diminished.

Besides that, the data also improved our internal transparency, since members understood the hidden defects in the other team and we adapted ourselves to the situation. As a result, trust between the members and me improved.

I am accountable for removing roadblocks that impede our progress.

Why Did I Do So?

In my humble opinion, there are three primary reasons why I decided to detect hidden drawbacks of a workflow with lead time data.

First of all, it gives me a predictor for spotting potential issues in cross-functional collaboration. From the data above, we learned that their delivery capability was below normal expectations, so we could adapt ourselves to the situation sooner and avoid unnecessary conflicts across teams.

Besides that, if we can get the data up front (unlike in this case), we can brainstorm alternatives to limit the damage to our team, whether in collaboration, dependencies, or communication, which in turn saves us lots of resources in handling the problem. If I had not done this, we would probably have kept putting up with such a working model, right?

Finally, it gives us an opportunity to inspect the real defects of a workflow and improve it further. For example, if I discover similar contexts, I can analyze the corresponding stages in the workflow and limit WIP to improve the flow metrics, or observe which part of the workflow hurts delivery capability.

Pros of The Indicator

For me, lead time brings three advantages to the table. It is essentially an after-the-fact metric that reflects what actually happened, which makes it more suitable for tracking progress. The Agile Manifesto says, "Working software is the primary measure of progress." When we sample lead time, it already represents workable increments that have been delivered.

On top of that, it also helps us continuously improve the bottlenecks of a workflow. If the lead time metrics look weird, we can step into each stage to understand potential problems and improve them.

Of course, it can also be a good indicator of whether the team we cooperate with has the mature delivery capability to get things done, so that we can adapt ourselves to the real situation.

Coach’s Murmur

In this article, I've explained my logic for applying lead time to detect potential defects in a workflow. In particular, I used it to check another team's delivery capability and made corresponding adaptations to our collaboration model.

I hope this experience inspires you to inspect your relationships with other teams when it comes to working together, and to take a scientific approach to action where possible.

If you still have any questions, feel free to click the link below to book a web call with me.

Let Me Help You

I now provide a free, 1-on-1 online consulting service. If you have any agile-related problems or project management issues, please reserve a web call with me. I will answer your questions as well as I can.

👉 Book now: https://calendly.com/uragilecoach/consulting
🎁 Anyone who reserves a web call will receive a secret gift that helps you grow your project management skills.

If you find value in what I share, you can:
1. 👏 the article
2. subscribe for my latest content
3. follow me on other platforms for more
- IG: @ur_agile_coach
- Podcast: Agile Rocket
- Youtube: Your Agile Coach
- LinkedIn: Tsung-Hsiang Wu
- Twitter: @ur_agile_coach
