The critical role of discovery in product development

Matthew Godfrey
Published in UX Collective · Jun 12, 2020 · 10 min read


Back in March (pre-Covid), a group of us attended Jeff Patton and Jeff Gothelf’s Certified Scrum Product Owner (CSPO) course. Whilst a number of the concepts shared were familiar, it led us to question how these are currently being applied within the context of our own organisation.

In particular: what is the role of discovery within a product team, how do such teams strike a balance between discovery and delivery, what tools do they have to help prioritise their discovery, and what is our general approach to testing ideas and assumptions?

What is Product Discovery?

Firstly, let’s understand exactly what we mean by Product Discovery, which can be defined as:

“Product Discovery describes the iterative process of reducing uncertainty around a problem or idea to make sure that the right product gets built for the right audience. Based on a Product Discovery, a Product Team has higher confidence in their path forward. It is also the foundation for a successful implementation and launch phase later on.” — Tim Herbig

Another way to think of discovery is as an ongoing, continuous stream of activities carried out by a product team (or a subset of it) to determine, to a degree of confidence, what to build before they decide how to build it.

It might sound obvious, but every development project or significant piece of work should start with some understanding and analysis of the problem(s) teams are looking to solve. Are they genuine problems that your customers routinely experience? Are they painful enough that customers would value a solution? Does solving this problem advance your current strategy?

It’s all too easy to jump straight to the first, favourite idea we have, for a problem we believe exists. The act of discovery, however, enables us to challenge those assumptions and actively encourage teams to spend more time and effort, explicitly operating in the Problem Space.

Redgate’s take on the Double Diamond framework.

Discovery serves to increase teams’ confidence in pursuing both the right problem and the right solution to that problem; increasing their odds of shipping capabilities and features that customers will value and use.

Discovery, therefore, is how any modern product team mitigates the risks of acting on and building out ideas that:

  1. Customers won’t value or use (desirability)
  2. Technically we’re unable to implement (feasibility)
  3. Don’t drive or enable our commercial objectives (viability)

Discovery eats assumptions and conjecture for breakfast!

How do we balance discovery and delivery?

At this point, we were asking ourselves: “don’t we already do this…or something like it?”. The answer was ‘yes’, at least to some degree. However, where we have often struggled (and I suspect the same is true of many organisations) is in establishing a healthy balance between discovery and delivery.

Inevitably, some organisations (and, at a more granular level, some teams) will do this better than others. The best examples seem to be those where the team’s discovery efforts are planned, visible and budgeted for alongside delivery activities. In these teams, discovery is a first-class citizen.

To manage this balance and ensure time and space is afforded to discovery, Jeff Patton, Marty Cagan and others in the Agile/product space, have been big proponents of an approach to product development referred to as Dual-Track Development or Dual-Track Agile.

Jeff Patton’s illustration of the Dual-Track model.

The premise is that all product development effort fits into one of two streams (Discovery and Delivery) and that these streams should run concurrently and continuously. In a pre-Agile world, this kind of knowledge work would have typically been done upfront and ahead of any development effort; effectively resulting in lengthy specifications and design being handed over the fence.

Bad, right? We all know the story of a costly project that finally shipped to the customer, late and over budget, only for the team to realise it didn’t solve a problem customers cared about, or that the design was so fundamentally flawed the result was a product that was simply unusable. Ouch!

Waterfall development, for the majority of software organisations, became a thing of the past and Agile/Lean promised a new way forward, where teams would work to deliver software in smaller, iterative loops. Regular feedback from their customers (closing the loop) would allow them to learn fast and change course as needed.

However, at the other extreme, a pure focus on delivery, even in an Agile setting, is unlikely to be the best approach to determining what we should build. There are many scenarios where delivery is the right approach, but where there is a high degree of risk and uncertainty, writing code is an expensive way to learn.

As Jeff Patton says:

“The most expensive way to test your idea is to build production-quality software”.

So, this is where discovery comes into its own: ensuring some amount of research and design is done upfront to test your riskiest assumptions in the cheapest way possible, giving teams the confidence to know where to spend their development efforts.

Who is involved in Discovery?

Dual-Track presents a model where a subset of the team focuses on discovery, whilst others double down on writing and releasing production-quality software. Those working in the discovery track inevitably run slightly ahead of those in delivery to:

  • Understand and define the problems to be solved
  • Form testable hypotheses and declare their assumptions
  • Explore ideas for potential solutions to these problems
  • Run experiments to test their ideas and prove/disprove their hypotheses
  • Create designs and assets ready for production

Tim Herbig’s discovery team model.

In this illustration, Tim Herbig shows a model for discovery collaboration, with the concepts of ‘Permanent Collaborators’, ‘Temporary Collaborators’ and ‘Supporters’. This would see roles like Product Managers, Product Designers and Technical Leads (in our case) spending a greater percentage of their time operating in discovery cycles, with more infrequent involvement (or temporary collaboration) from the wider team and other immediate stakeholders.

An important note here is that whilst the whole team is not directly involved in discovery (orgs still need capacity to ship working software), the wider team must be aligned with and have an understanding of what they are looking to learn next; as well as the bigger themes (or bets) further along the roadmap. Ultimately, decisions made here (upstream) will shape and inform future delivery efforts (downstream).

As such, the most effective teams are not only those who can deliver across both streams but are those who see discovery as a core part of the product development process. Teams have to plan and budget for discovery, just as they do with development activities, so it’s important that they can visualise the flow of this effort and understand the outcomes; namely:

  • What are we trying to learn?
  • What did we learn?
  • What are we going to do/not do as a result?

The best examples of a more integrated discovery process are those where teams build discovery into how they prioritise, plan, visualise and review in-flight discovery work. Essentially weaving discovery into the natural rhythm of the team and surfacing discovery as part of their existing team ceremonies e.g. daily stand-ups, sprint reviews, show & tells.

Prioritising Discovery effort

So, having understood the role of discovery, the bigger question is how do we decide where and when to spend our discovery effort? Research, exploration and design, like any aspect of development, is a finite resource; so knowing when to put something into the discovery stream, as opposed to delivery, is an important and essential part of prioritisation.

As per Tim Herbig’s model, discovery and the prioritisation of discovery activities should be a regular and routine part of development conversations. These conversations should bring perspectives from Product, Design and Engineering (AKA Product Trios) to establish a series of hypotheses (strategic and tactical) they are looking to test, in the pursuit of a particular outcome.

Each hypothesis will be loaded with assumptions; some about problems, some about solutions. The job of the group is to establish their perception of risk and impact by gauging their confidence levels, both in terms of their understanding of customers’ problems and the efficacy of their ideas.

How sure are they that they understand the customers’ problem(s), such that they could reasonably start to explore ideas for how they might solve them? What is the potential risk and impact of putting an idea into delivery, without first testing it with a group of customers?

Jeff Gothelf’s Hypothesis Prioritisation Canvas.

This leads to another important question: Does every piece of work/project need to go through the discovery process? The answer is: ‘No…not always’.

Remember, this is all about mitigating the risk of making poor or ill-informed decisions. Jeff Gothelf recently published his Hypothesis Prioritisation Canvas, which is another tool in the same category as Assumption Mapping, for helping teams to classify work and determine the best approach for a given opportunity.
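To make this triage concrete, here is a minimal, hypothetical sketch of the kind of classification such a canvas encourages. The field names, thresholds and labels below are illustrative assumptions of mine, not part of Gothelf’s canvas:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    risk: float    # 0.0 (low) to 1.0 (high): how uncertain are we?
    value: float   # 0.0 (low) to 1.0 (high): perceived value if we're right

def triage(h: Hypothesis, threshold: float = 0.5) -> str:
    """Classify a hypothesis into a recommended next step (illustrative)."""
    if h.risk >= threshold and h.value >= threshold:
        return "discover"      # risky but valuable: spend discovery effort here
    if h.risk < threshold and h.value >= threshold:
        return "deliver"       # valuable and well understood: just build it
    return "deprioritise"      # low perceived value: park it for now

backlog = [
    Hypothesis("New onboarding flow", risk=0.8, value=0.9),
    Hypothesis("Fix broken export button", risk=0.1, value=0.7),
    Hypothesis("Dark mode for admin panel", risk=0.6, value=0.2),
]
for h in backlog:
    print(h.name, "->", triage(h))
```

In practice, the scores would come from the Product Trio’s conversation rather than a formula; the point is that high-risk, high-value items earn discovery effort, while low-risk, high-value items can go straight to delivery.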

It follows that opportunities that come to teams with a higher degree of risk and uncertainty are great candidates for spending some of the team’s discovery effort. In this scenario, there are likely to be a number of unknown-unknowns and a lack of any first-hand evidence, leading to low levels of confidence in either the problem to be solved or how they might solve it (the solution).

Conversely, for those opportunities where there is a greater degree of certainty around the problem to be solved, and where the solution is simple, clear and obvious, there is less need to spend discovery effort. Instead, the team might move quickly into production (development), deploying an existing, off-the-shelf solution or making relatively minor improvements to some existing aspect of functionality.

As you can see, there is a practice here around analysis and prioritisation, necessary to determine when it’s right and appropriate to apply one approach over another; as well as a spectrum of understanding and confidence necessary to determine the scope of a team’s discovery pipeline.

Problem vs. Solution Validation

It’s worth at this point thinking about your average project. As per the Double Diamond, there are two clear phases of the design process: the Problem Space and Solution Space. When teams set out to achieve a particular outcome, they will have assumptions about the customer, who they are, what problems they currently experience and the value they might place on solutions to said problems.

When declaring their assumptions, teams should look to delineate those relating to the Problem Space from those in the Solution Space. At the beginning of a new venture or project, teams will likely have more unknown-unknowns in relation to the customer and their problems, so effort in ‘early discovery’ should be focused on Problem Validation.

As teams get further into a project and have spoken to or observed their target customers, bringing them closer to a definition of the problem(s) they are looking to solve, they will likely start to introduce more solution-based assumptions, i.e. the features and capabilities they believe might address that problem. Here, there are more unknown-unknowns about the form of the solution, so effort in ‘late discovery’ should be focused on Solution Validation.

Jeff Patton’s illustration of scaling problem and solution validation.

Over time, the scope and fidelity of your tests, along with the shape of your discovery efforts, will change. Remember, discovery is the act of helping teams decide what to build (“building to learn”), and just as we scale and prioritise our discovery effort, we also scale and prioritise the backlog of tests (or experiments) we run, seeking the right answers at the right time using the tools and techniques that are most appropriate.

Testing ideas and assumptions

Once the team has declared its riskiest assumptions (either about the problem or the solution) and identified where it needs to spend its discovery effort (remember, you don’t need to test everything), it is ready to put these to the test. However, it’s important that the shape and fidelity of a test (or experiment) is commensurate with the level of risk and reward.

The most accurate test will always be working software, yet we know this is also the most expensive, and therefore riskiest, way of validating an idea. But if the team has a body of prior evidence that is conclusive enough to suggest they should just do the thing, then the team can decide whether that evidence offsets the risk and determine the size of their investment (a bet).

Giff Constable’s Truth Curve illustrates this nicely. Every idea the team comes up with can be framed as a bet: the team’s best guess, given the evidence to hand, about the likelihood of an idea succeeding (relative to the desired outcome). The model illustrates how teams might scale the fidelity of their tests as the evidence they gather increases their levels of confidence.

When teams lack first-hand evidence from the customer (they are trading in assumptions) and have low confidence in an idea, their tests should use the quickest and cheapest methods required to learn (e.g. customer interviews and sketches). As their confidence increases, they can start to invest in higher-fidelity methods that enable richer and more conclusive feedback. At some point, the team is confident enough to bet the house, go all in and build out a scalable solution.
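As an illustrative aside, this idea of matching test fidelity to confidence can be sketched as a simple lookup. The confidence thresholds and test methods below are hypothetical examples in the spirit of the Truth Curve, not values prescribed by Constable or Patton:

```python
# Hypothetical mapping from a team's confidence in an idea (0.0-1.0) to the
# cheapest test likely to teach them something new: low confidence warrants
# cheap, fast methods; only high confidence justifies building for real.
def next_best_test(confidence: float) -> str:
    if confidence < 0.2:
        return "customer interviews and sketches"
    if confidence < 0.5:
        return "clickable prototype or usability test"
    if confidence < 0.8:
        return "landing page, smoke test or Wizard-of-Oz MVP"
    return "build production-quality software"

print(next_best_test(0.1))   # customer interviews and sketches
print(next_best_test(0.9))   # build production-quality software
```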

Jeff Patton’s revised illustration of the Truth Curve.

In conclusion, discovery is done best when it is an ongoing and continuous activity, conducted as part of the team’s regular development practices. Teams cycle through a series of prioritisation and planning discussions — just as they would with development work — declaring their riskiest assumptions, the questions they need to answer first and the shape of their next best test.

Credits to Jeff Gothelf, Jeff Patton, Tim Herbig and Giff Constable for their work in this space and the re-use of illustrations based on Jeff Patton’s and Jeff Gothelf’s CSPO course materials.

Cover image created by rawpixel.com courtesy of www.freepik.com.

The UX Collective donates US$1 for each article published in our platform. This story contributed to UX Para Minas Pretas (UX For Black Women), a Brazilian organization focused on promoting equity of Black women in the tech industry through initiatives of action, empowerment, and knowledge sharing. Silence against systemic racism is not an option. Build the design community you believe in.
