Mastering flow metrics for Epics and Features

Nick Brown · ASOS Tech Blog · Aug 25, 2023


Flow metrics are a great tool for teams to leverage for an objective view of their efforts towards continuous improvement. But why limit them to just teams?
This post reveals how, at ASOS, we are introducing the same concepts but for Epic and Feature level backlogs…

Flow at all levels

Flow metrics are one of the key tools in the toolbox that we as coaches use with teams. They are used as an objective lens for understanding the flow of work and measuring the impact of efforts towards continuous improvement, as well as understanding predictability.

One of the challenges we face is how we can improve agility at all levels of the tech organisation. Experience tells us that it does not really matter if you have high-performing agile teams if they are surrounded by other levels of backlogs that do not focus on flow:

Source: Jon Smart (via Klaus Leopold — Rethinking Agile)

As coaches, we are firm believers that all levels of the tech (and wider) organisation need to focus on flow if we are to truly get better outcomes through our ways of working.

To help increase this focus on flow, we have recently started experimenting with flow metrics at the Epic/Feature level. This is mainly because the real value for the organisation comes at this level, rather than at an individual story/product backlog item level. We use both Epic AND Feature level as we have an element of flexibility in work item hierarchy/levels (as well as having teams using Jira AND Azure DevOps), yet the same concepts should be applicable. This leaves our work item hierarchy looking something like the below:

Note: most of our teams use Azure DevOps — hence the hierarchy viewed this way

Using flow metrics at this level comprises the typical measures of Throughput, Cycle Time, Work In Progress (WIP) and Work Item Age; however, we provide more direct guidance around the questions to ask and the conversations to be having with this information…


Throughput

Throughput is the number of Epics/Features finished per unit of time. This chart shows the count completed per week, as well as plotting the trend over time. The viewer of the chart is able to hover over a particular week to get the detail on particular items. It is visualised as a line chart to show the Throughput values over time:
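As a rough sketch of how the data behind such a chart could be derived (the dates and the `weekly_throughput` helper here are illustrative, not our actual Power BI tooling):

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical completion dates for finished Epics/Features
completed = [
    date(2023, 7, 3), date(2023, 7, 5),
    date(2023, 7, 12), date(2023, 7, 14), date(2023, 7, 26),
]

def weekly_throughput(done_dates):
    """Count items completed per week, keyed by that week's Monday."""
    def week_start(d):
        return d - timedelta(days=d.weekday())
    return Counter(week_start(d) for d in done_dates)

for week, count in sorted(weekly_throughput(completed).items()):
    print(week, count)
```

Plotting those weekly counts (plus a trend line) gives the line chart described above.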

In terms of how to use this chart, some useful prompts are:

What work have we finished recently and what are the outcomes we are seeing from this?

Throughput is more of an output metric, as it is simply a count of completed items. What we should be focusing on is the outcome(s) these items are leading to. When we hover on a given week and see items that are more ‘customer’ focused, we should then be discussing the outcomes we are seeing, such as changes in leading indicators like unique visits, bounce rate or average basket value.

For example, if the Epic around Spotify partnerships (w/ ASOS Premier accounts) finished recently:

We may well be looking at seeing if this is leading to increases in ASOS Premier sign-ups and/or the click-through rate on email campaigns/our main site:

The click-through rate for email/site traffic could be a leading indicator for the outcomes of that Epic

If an item is more technical excellence/tech debt focused then we may be discussing if we are seeing improvements in our engineering and operational excellence scores of teams.

What direction is the trend? How consistent are the values?

Whilst Throughput is more output-oriented, it could also be interpreted as a leading indicator for value. If your Throughput is trending up/increasing, then it could suggest that more value is being delivered/likely to be delivered. The opposite would be if it is trending downward.

We also might want to look at the consistency of the values. Generally, Throughput for most teams is ‘predictable’ (more on this in a future post!), however it may be that there are spikes (lots of Epics/Features moving to ‘Done’) or periods of zeros (where no Epics/Features moved to ‘Done’) that an area needs to consider:

Yes, this is a real platform/domain!

Do any of these items provide opportunities for learning/should be the focus of a retrospective?

Hovering on a particular week may prompt conversation about particular challenges had with an item. If we know this then we may choose to do an Epic/Feature-based retrospective. This sometimes happens for items that involved multiple platforms. Running a retrospective on the particular Epic allows for learning and improvements that can then be implemented in our overall tech portfolio, bringing wider improvements in flow at our highest level of work.

Cycle Time

Cycle Time is the amount of elapsed time between when an Epic/Feature started and when it finished. Each item is represented by a dot and plotted against its Cycle Time (in calendar days). In addition to this, the 85th and 50th percentile cycle times for items in that selected range are provided. It is visualised as a scatter plot to easily identify patterns in the data:
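A minimal sketch of the calculation behind this chart, assuming hypothetical start/finish dates and a simple nearest-rank percentile (our actual charts come from a Power BI template, so this is illustrative only):

```python
import math
from datetime import date

# Hypothetical (start, finish) dates for completed Epics/Features
items = [
    (date(2023, 1, 2), date(2023, 2, 10)),
    (date(2023, 1, 9), date(2023, 3, 1)),
    (date(2023, 2, 1), date(2023, 2, 20)),
    (date(2023, 2, 6), date(2023, 5, 30)),
]

def cycle_time_days(start, finish):
    # Elapsed calendar days, counting both the start and finish day
    return (finish - start).days + 1

def percentile(values, pct):
    """Nearest-rank percentile: smallest value covering pct% of the data."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

cycle_times = [cycle_time_days(s, f) for s, f in items]
p50 = percentile(cycle_times, 50)  # median cycle time
p85 = percentile(cycle_times, 85)  # 85% of items finished within this
```

Each item's cycle time becomes one dot on the scatter plot, with `p50` and `p85` drawn as horizontal reference lines.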

In terms of how to use this chart, some useful prompts are:

What are the outliers and how can we learn from these?

Here we look at those Epics/Features that are towards the very top of our chart, meaning they took the longest:

These are useful items to deep dive into/run a retrospective on. Finding out why this happened and identifying ways to prevent it from recurring encourages continuous improvement at a higher level and ultimately aids our predictability.

What is our 85th percentile? How big is the gap between that and our 50th percentile?

Speaking of predictability, generally, we advise platforms to try to keep Features to no greater than two months and Epics to no greater than four months. Viewing your 85th percentile allows you to compare the actual duration of your Epics/Features with the aspiration of the tech organisation. Similarly, we can see where there is a big gap in those percentile values. Aligned with the work of Karl Scotland, too large a gap in those values suggests there may be too much variability in your cycle times.

What are the patterns from the data?

This is the main reason for visualising these items in a scatter plot. It becomes very easy to spot when we are closing off work in batches and have lots of large gaps/white space where nothing is getting done (i.e. no value being delivered):

We can also see maybe where we are closing Epics/Features frequently but have increased our variability/reduced our predictability with regards to Epic/Feature cycle time:

Work In Progress (WIP)

WIP is the number of Epics/Features started but not finished. The chart shows the number of Epics/Features that were ‘in progress’ on a particular day. A trend line shows the general direction WIP is heading. It is visualised as a stepped line chart to better demonstrate changes in WIP values:
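The daily WIP count underneath such a chart can be sketched as below; the item dates and the convention that an item no longer counts on its finish day are illustrative assumptions:

```python
from datetime import date, timedelta

# Hypothetical items as (started, finished) pairs; None = still in progress
items = [
    (date(2023, 8, 1), date(2023, 8, 10)),
    (date(2023, 8, 3), None),
    (date(2023, 8, 7), date(2023, 8, 8)),
]

def wip_on(day, items):
    """Items started on or before `day` and not yet finished on that day."""
    return sum(
        1 for started, finished in items
        if started <= day and (finished is None or finished > day)
    )

# Daily WIP series over a window, ready for a stepped line chart
window = [date(2023, 8, 1) + timedelta(days=i) for i in range(10)]
series = [(d, wip_on(d, items)) for d in window]
```

Plotting `series` as a stepped line (with a trend line over it) reproduces the chart described above.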

In terms of how to use this chart, some useful prompts are:

What direction is it trending?

We want WIP to be level/trending downward, meaning that an area is not working on too many things. An upward trend points to a potential lack of prioritisation, as more work is starting and then remaining ‘in progress’.

Are we limiting WIP? Should we change our WIP limits (or introduce them)?

If we are seeing an upward trend it may well be that we are not actually limiting WIP. Therefore we should be thinking about that and discussing if WIP limits are needed as a means of introducing focus for our area. If we are using them, advanced visuals may show us how often we ‘breach’ our WIP limits:

A red dot represents when a column breached its WIP limit

Hovering on a dot will detail which specific column breached its WIP on the given day and by how much.
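The breach check itself boils down to comparing each column's count against its limit. A sketch with hypothetical column names and limits (our real visuals do this in Power BI):

```python
# Hypothetical per-column WIP counts for one day, and the agreed limits
wip_limits = {"Analysis": 3, "In Progress": 5, "Validation": 2}
counts_today = {"Analysis": 4, "In Progress": 5, "Validation": 3}

def breaches(counts, limits):
    """Return the columns over their WIP limit, and by how much."""
    return {
        col: counts.get(col, 0) - limit
        for col, limit in limits.items()
        if counts.get(col, 0) > limit
    }

print(breaches(counts_today, wip_limits))  # → {'Analysis': 1, 'Validation': 1}
```

Any non-empty result for a given day becomes a red dot on the chart.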

What was the cause of any spikes or drops?

Focusing on this chart and where there are sudden spikes/drops can aid improvement efforts. For example, if there was a big drop on a given date (i.e. lots of items moved out of being in progress), why was that? Had we lost sight of work and just did a ‘bulk’ closing of items? How do we prevent that from happening again?

The same goes for spikes in the chart, meaning lots of Epics/Features moved in progress. It certainly is an odd thing to see happen at Epic/Feature level, but trust me, it does happen! In the same way that some teams hold planning at the beginning of a sprint and then (mistakenly) move everything in progress at the start of the sprint, an area may do the same after a semester planning event. This is something we want to avoid.

Work Item Age

Work Item Age shows the amount of elapsed time between when an Epic/Feature started and the current time. These items are plotted against their respective status in their workflow on the board. For the selected range, the historical cycle time (85th and 50th percentile) is also plotted. Hovering on a status reveals more detail on what the specific items are and the completed vs. remaining count of their child items. It is visualised as a dot plot to easily see comparison/distribution:
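A sketch of the age calculation and the 85th-percentile comparison, using invented items, dates and a hypothetical historical percentile value:

```python
from datetime import date

today = date(2023, 8, 25)
p85_cycle_time = 120  # hypothetical historical 85th percentile, in days

# Hypothetical in-progress items with their status and child item counts
in_progress = [
    {"name": "Epic A", "status": "In Progress",
     "started": date(2023, 3, 1), "remaining": 4},
    {"name": "Epic B", "status": "Validation",
     "started": date(2023, 7, 10), "remaining": 0},
]

def age_days(item, as_of):
    """Elapsed calendar days since the item was started."""
    return (as_of - item["started"]).days

# Oldest first; anything over the 85th percentile is flagged as a priority
for item in sorted(in_progress, key=lambda i: age_days(i, today), reverse=True):
    over = age_days(item, today) > p85_cycle_time
    print(item["name"], age_days(item, today),
          "over 85th percentile!" if over else "")
```

Each item's age becomes a dot plotted against its workflow status, with the historical percentiles as reference lines.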

In terms of how to use this chart, some useful prompts are:

What are some of our oldest items? How does this compare to our historical cycle time?

This is the main purpose of this chart: it allows us to see which Epics/Features have been in progress the longest. These really should be the primary focus, as they represent the most risk for our area, having been in flight the longest without feedback. In particular, those items above our 85th percentile line are a priority, as they have now been in progress longer than 85% of the Epics/Features we completed in the past:

The items not blurred are our oldest and would be the first focus point

Including the completed vs. remaining child item counts provides additional context, so we can also understand how much effort we have put in so far (completed) and what is left (remaining). The combination of these two numbers might also indicate where you should be trying to break items down: if a lot of work has been undertaken already AND a lot remains, chances are the item hasn’t been sliced very well.

Are there any items that can be closed (Remaining = 0)?

These are items we should be looking at as, with no child items remaining, it looks like these are finished.

The items not blurred are likely items that can move to ‘Done’

Considering this, they really represent ‘quick wins’ that can get an area flowing again: getting stuff ‘done’ (thus getting feedback) and in turn reducing WIP (thus increasing focus). In particular, we’ve found visualising these items has helped our Platform Leads focus on finishing Epics/Features.

Why are some items in progress (Remaining = 0 and Completed = 0)?

For these items, we should be questioning why they are actually in progress.

Items not blurred are likely to be items that should not be in progress

With no child items, these may have been inadvertently marked as ‘in progress’ (one of the few times to advocate for moving items backwards!). It may, in rare instances, be a backlog ‘linking’ issue where someone has linked child items to a different Epic/Feature by mistake. In any case, these items should be moved backwards or removed as it’s clear they aren’t actually being worked on.
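Both of these checks boil down to simple filters on the child item counts. A sketch with hypothetical data:

```python
# Hypothetical child item counts for in-progress Epics/Features
epics = [
    {"name": "Epic A", "completed": 12, "remaining": 0},
    {"name": "Epic B", "completed": 5, "remaining": 3},
    {"name": "Epic C", "completed": 0, "remaining": 0},
]

# Quick wins: work was done and nothing remains, so likely ready for 'Done'
closable = [e["name"] for e in epics
            if e["remaining"] == 0 and e["completed"] > 0]

# Suspicious: 'in progress' with no child items at all
questionable = [e["name"] for e in epics
                if e["remaining"] == 0 and e["completed"] == 0]

print(closable)      # → ['Epic A']
print(questionable)  # → ['Epic C']
```

The `closable` list drives the ‘quick wins’ conversation, while `questionable` items should be moved backwards or removed.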

What items should we focus on finishing?

Ultimately, this is the main question this chart should be enabling the conversation around. It could be the oldest items, it could be those with nothing remaining, or it could be neither of those and something that has become an urgent priority (although ignoring the previous two ‘types’ is not advised!). Similarly, you should also be using it to proactively manage those items that are getting close to your 85th percentile. If they are close to this value, what you need to do in order to finish these items should likely be the main point of discussion.


Hopefully, this post has given some insights into how you can leverage flow metrics at Epic/Feature level. In terms of how frequently you should look at these, I’d recommend weekly at a minimum. Doing it too infrequently means it is likely your teams will be unclear on priorities and/or will lose sight of getting work ‘done’. If you’re curious how we do this, these charts are generated for teams using either Azure DevOps or Jira, using a Power BI template (available in this repo).

Comment below if you find this useful and/or have your own approaches to managing flow of work items at higher levels in your organisation…

About Me

I’m Nick, one of our Agile Coaches at ASOS. I help guide individuals, teams and platforms to improve their ways of working in a framework-agnostic manner. Outside of work, my newborn son keeps me on my toes agility-wise; here’s a recent photo of him and my wife Nisha (to my left) meeting the team…