How we measure impact in Product Operations
Knowing whether our work is making a difference
As Product Managers, when we release a new feature, we typically measure success using built-in product analytics. We usually have a baseline number and measure improvement against it. For example: to what extent did our conversion rate increase after the new onboarding tool was released?
However, in Product Ops the problems we work on, and the forms our solutions take, are highly varied, so it’s not easy to identify a one-size-fits-all way of measuring impact.
What types of work do Product Operations carry out?
Before jumping into how we measure impact at FreeAgent, it’s worth recapping the varied types of outcomes we pursue in Product Operations. The diagram below briefly summarises this for the FreeAgent Product Ops team:
So how do we consistently and accurately measure the impact of this work?
There are various measurement challenges we need to solve:
- Outputs are highly varied in form. For example, some projects focus on improving processes (which could live in Google Sheets, Notion, Looker, etc.), some on identifying new product opportunities and some on shaping strategy. It’s difficult to measure all of these in the same way.
- Solutions often aren’t quantitatively or automatically trackable. In Product Ops, our solutions are varied in form and are often not digital — and even when they are, usage often isn’t tracked quantitatively or automatically. For example, if we improve the way Product Managers report on OKRs, with the aim of helping them tell a better story of their goals, how do we measure success?
- There is often a lag between delivery of our work and impact. For example, if we do a market research project evaluating whether we can help our users with their Jobs-to-Be-Done using AI, and create some product proposals based on our research, it could be a year before one of those proposals reaches the roadmap. How do we measure the impact of that in a given time period, for example a 2-month cycle?
All of this means we can’t measure everything in an automated and quantitative way. It’s going to require a bit of flexibility and a bit of manual input. So where do we start?
Definition of impact
Firstly, we need to define impact. What do we consider a successful outcome for our projects?
At FreeAgent, we’ve defined impact as:
A positive change, action or decision that happens as a result of our work.
To be more specific, we consider our work to have had impact if it results in any of the following 6 impact categories (the table includes an example for each):
We also acknowledge that not all projects will have impact as per the definition, often for good reason. For example, we may have done a valid exploratory market analysis project, only to find that we’re already ahead of our competitors in a certain area, and no further action is needed.
In addition, sometimes we play the long game, so we may not have seen impact by the end of the measurement period; it may come later.
In order to track this, and to generate a quantitative measurement, we have 4 impact classifications. At the end of each cycle, we tag each project with one of them:
We now know what we want to measure — so how do we track it?
At the beginning of every project, we decide what we want success to look like, and how we’ll measure it.
At the end of each project, we document the impact it had, using the impact categories and classifications above.
See the table below for 7 example projects completed in a 2-month cycle, documenting the impact of each project and how it was measured.
Our North Star metric is the number of projects that have measurable impact in a given time period (at FreeAgent we use 2-month product cycles). For the cycle above, that was 4 impact projects, a 57% impact rate.
Of course, some of those 4 impact projects will have had more impact than others (which we evaluate qualitatively each cycle), but this method gives us a single metric to help assess, firstly, whether we’re picking the right projects and, secondly, how well we deliver material value on them.
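As a minimal sketch of the North Star calculation (the classification labels below are illustrative, not FreeAgent’s actual taxonomy, and the project tags are hypothetical), counting impact projects and deriving the impact rate for a 7-project cycle could look like this:

```python
from collections import Counter

# Each completed project in a 2-month cycle gets tagged with one
# impact classification at cycle end (labels here are illustrative).
cycle_tags = [
    "measurable impact",
    "measurable impact",
    "measurable impact",
    "measurable impact",
    "impact expected later",
    "no impact, for good reason",
    "no impact",
]

counts = Counter(cycle_tags)
impact_projects = counts["measurable impact"]
impact_rate = impact_projects / len(cycle_tags)

print(f"{impact_projects} impact projects, {impact_rate:.0%} impact rate")
# → "4 impact projects, 57% impact rate"
```

With 4 of 7 projects tagged as having measurable impact, the rate rounds to 57%, matching the example cycle above.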
Our KPI scorecard
At the end of each cycle, we summarise all this in a single-page KPI scorecard which looks like this:
In addition to impact, we also care a lot about user feedback, so we send short project evaluation forms to stakeholders after each project.
Most of the feedback is qualitative, but the form includes some quantitative questions too. Their results can be seen in the ‘Feedback’ row of the scorecard above.
Lagging indicators
It’s important to note that many of the above are leading metrics; i.e., we consider them predictors of more value being delivered further down the line. Lagging metrics take longer to measure, but we want to remain aware of them. Some examples of lagging indicators we monitor:
- Customer adoption / happiness: Did our roadmap recommendations deliver customer value once released?
- Team effectiveness and happiness: Did our new ways of working save people time and make their jobs easier long term?
- Business value: Did our strategy recommendations help the company better deliver on business goals?
What we don’t count as measures of success
In FreeAgent Product Ops, we don’t use velocity as a measure of success. This is because Product Ops projects vary widely in nature and often widen in scope and opportunity as we learn more, so we would rather focus on producing high-quality, high-impact outputs than on speed.
That said, we do monitor some Team Fitness metrics, such as the number of projects in progress and the average start-to-finish duration of projects. These help us understand how efficiently we’re working and identify where there may be removable blockers to work on.
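The Team Fitness metrics mentioned above are straightforward to compute from a list of project records. As a sketch (the project names, dates and record shape are hypothetical), counting in-progress projects and averaging start-to-finish duration might look like:

```python
from datetime import date
from statistics import mean

# Illustrative project records; names and dates are made up.
# An "end" of None marks a project still in progress.
projects = [
    {"name": "OKR reporting revamp", "start": date(2024, 1, 8), "end": date(2024, 2, 2)},
    {"name": "Market research", "start": date(2024, 1, 15), "end": date(2024, 2, 23)},
    {"name": "Tooling audit", "start": date(2024, 2, 1), "end": None},
]

# Team Fitness metrics: work-in-progress count and average duration.
in_progress = sum(1 for p in projects if p["end"] is None)
durations = [(p["end"] - p["start"]).days for p in projects if p["end"] is not None]
avg_duration_days = mean(durations)

print(f"In progress: {in_progress}, avg duration: {avg_duration_days:.0f} days")
```

In practice these numbers could live in the same tracker used for impact tagging, so the scorecard row is just a query away.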
What next?
We’ve been using this impact tracking system recently and, whilst it’s by no means an exact science, it’s working well for us. However, we expect it will evolve and we’re always interested in learning and improving this process. If you have thoughts on other ways of measuring Product Ops impact, it would be great to hear about them in the comments (or message me on LinkedIn).
I hope that was useful!