Risk-driven product management (1/3): how I balance validation and agility

Xavery Lisinski
5 min read · Dec 16, 2021


This is part 1 of a three-part series on the product management approach I cultivate at Elements.cloud, which I call ‘risk-driven’. This article introduces the concept and explains what I mean by ‘risk’.

What is risk-driven product management?

As product managers, we have to deal with feature requests raised by customers, internal stakeholders (sales, customer success, marketing), engineers, and, finally, our own ideas about which problems to solve for our target market.

In an ideal world, we would take each idea through a rigorous validation process. We would do market research, run multiple quantitative and qualitative tests to validate that the problem is worth solving and that the proposed solution is the best it can be, and build an ideal Minimum Viable Product.

The trouble is, the product management backlog (the list of all requests and ideas that have not yet been approved and added to the development backlog) can be extremely long. At the time of writing, our product management backlog holds nearly 900 ideas, and it keeps growing at a steady pace of around 10 per week. Our small team of product managers simply does not have the time or capacity to run each idea through the ideal validation process (not without angering customers and internal stakeholders, and decreasing our agility).

That is why I designed a risk-driven approach to our product management activities, which I can best summarise with the following visualisation:

In risk-driven product management, you qualify the risk of an idea based on how well evidenced and understood the customer problem, job to be done and user journey are, and then use that risk to determine how much further validation is needed before you feel confident in building a solution.
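To make the qualification step concrete, here is a minimal sketch of what such a mapping could look like. The evidence dimensions, scoring scale and thresholds are my illustrative assumptions, not the author's actual scoring model:

```python
# Hypothetical sketch: the dimensions, 0-2 scoring scale and
# thresholds below are assumptions for illustration only.
EVIDENCE_DIMENSIONS = ("customer_problem", "job_to_be_done", "user_journey")

def risk_level(evidence: dict) -> str:
    """Map evidence scores (0 = pure guess, 2 = well evidenced)
    across the three dimensions to an overall risk level."""
    score = sum(evidence.get(d, 0) for d in EVIDENCE_DIMENSIONS)
    if score >= 5:
        return "low"     # little further validation needed
    if score >= 3:
        return "medium"  # targeted tests before building
    return "high"        # full discovery and validation first

print(risk_level({"customer_problem": 2, "job_to_be_done": 2,
                  "user_journey": 1}))  # low
```

The point is only that each idea enters the process with an explicit risk label, which then dictates how much validation effort it gets.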

How to understand risk?

First of all, let’s imagine the widely used cost vs value matrix, like the one below (I prefer to put value on the X-axis):

For each idea, we try to assess how valuable it may be (for the customer, in a way that can generate value for the business) and how costly it will be to build (time, effort, financial cost, opportunity cost). This gives us the idea’s value vs cost placement.

If you do this for many ideas, you can end up with a mental or physical picture like the one below. It seems simple, right? We can quickly identify the ideas worth implementing and those not worth the effort.

However, the picture above is misleading. What is missing is the visualisation of risk. By risk, I mean uncertainty about the value and cost hypotheses. Firstly, any solution is almost always more complex to implement than we think (for one, we don’t always know all the dependencies or anticipate the technical debt we may discover).

Secondly, and more importantly, we tend to fall in love with our ideas, or to assume that just because we would want a solution like that, everyone else will as well. As a result, the actual value of our idea to the target customers and the business may turn out to be much smaller. If we say that ‘feature X will bring lots of value’, what evidence do we have to back this up? More often than not, it is gut feeling, imagination or wishful thinking.

That is why the accurate picture for any new idea, once we account for the level of risk, looks more like this:

You can easily see that even when an idea is classified as a ‘big bet’, if we are mistaken about its actual cost (usually higher) or value (usually lower), then the idea may end up being classified as absolutely anything else.

And if we compare it to a different idea, one that may at first seem like a loser but carries similarly wide error bars, we can see that both of them have the potential to be either a fantastic success or an utter failure.

Because an idea with such extensive error bars cannot be appropriately classified, and, more importantly, cannot be effectively compared with other ideas, the entire exercise of assessing value vs cost is completely pointless…

…unless you work on decreasing those error bars.

The more desirability, feasibility and viability testing you do, the smaller the uncertainty becomes, and the more accurate your estimates become.
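The comparison problem, and how validation resolves it, can be sketched by treating value as a range rather than a point estimate. The numbers below are invented for illustration:

```python
def interval(estimate: float, error: float) -> tuple:
    """An estimate with +/- error bars, as a (low, high) range."""
    return (estimate - error, estimate + error)

def overlaps(a: tuple, b: tuple) -> bool:
    """True if two ranges overlap, i.e. the two ideas
    cannot yet be confidently ranked against each other."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical numbers: a 'big bet' and an apparent 'loser',
# both with wide error bars before any validation.
big_bet = interval(8, 4)  # true value anywhere from 4 to 12
loser = interval(3, 4)    # true value anywhere from -1 to 7
print(overlaps(big_bet, loser))  # True: cannot rank them yet

# After desirability/feasibility/viability testing, the bars shrink:
big_bet = interval(8, 1)  # 7 to 9
loser = interval(3, 1)    # 2 to 4
print(overlaps(big_bet, loser))  # False: now comparable
```

The design point is that validation spend buys narrower intervals, and only non-overlapping intervals let you prioritise one idea over another with confidence.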

Comparing risk as well as value and cost

With risk-driven product management, neither is an apparent ‘loser’ dismissed out of hand, nor an apparent ‘quick win’ automatically scheduled for development, until the error bars have been minimised.
The size of those error bars dictates how much time and effort we have to put into de-risking those ideas.

In future articles, I will dive deeper into what distinguishes the different levels of risk (at least for us) and which methods and frameworks of analysis we use to de-risk ideas. I am sharing this framework to stimulate discussion among passionate product managers and to validate our approach, so please leave comments and feedback.


Xavery Lisinski

VP of Product | Change Intelligence & Hierarchical Analysis Whiz | Igniting Innovation & Catalyzing Growth