A couple of diamonds and Teresa Torres

Matheus Winter Dyck
Published in Babbel Design · 6 min read · Aug 8, 2022

Mashing up two product discovery frameworks and what I learned doing it for the first time. Part 1

[Illustration: a flag symbolising success in discovery]

When people talk about product and design frameworks, I get slightly irritated. Double Diamond, Triple Diamond, Design Thinking, etc. There are only so many buzzwords I can handle.

While frameworks are massively helpful for guiding a process, these discussions are often held as if frameworks were an end in themselves. The fact that actual user problems are only solved when these frameworks are put into practice seems to be ignored.

You could come up with the smartest way of discovering solutions, but unless you get your hands dirty and test it through application, it’s pretty much useless.
Ok. Rant over.

Over the last four years I've worked at both a small, fast-moving startup and a large company with 800+ employees. During this time, I've encountered various ways of going about product discovery, none of which seemed to work perfectly for me and my team. At the startup, resources for proper discovery were scarce. At the big company, there was a tendency to deliver and then forget about it.

So for my last project at Babbel, we decided to try out an approach that my stellar colleague, Anna Stutter Garcia, and I created. This would be a mashup of two popular frameworks — the diamond process and continuous discovery. What that hybrid framework looks like is the subject of this article.

In Part 2 I will share mistakes we made while applying the framework and how we’ll do better next time.

Start with the Opportunity

I am a strong believer that product discovery needs to always start with a deep understanding of the opportunity, both from a business and a user point of view. Only when you really understand an opportunity in all its facets can you design a solution that will have a positive impact on the business and the user.

In the triple diamond design process that I used in the past, or its simplified version, the double diamond, the work of understanding the problem/opportunity space is represented by the first diamond: you first diverge and try to understand the entire problem space through generative research. Then you converge and, through evaluative research, define one clear and precise problem statement for the solution discovery.

Zendesk’s triple diamond product development process

Linear progression? Ehm… no, thanks

After the problem is defined, the visualization of the next diamonds suggests a linear progression all the way to implementation: you first diverge by exploring multiple solutions and approaches, converge to choose and validate one solution, and then move right into development.

Such a linear approach is problematic for three (and probably more) reasons:

  1. Validation becomes confirmation
    In my experience, the validation step in this framework often looked like simply doing a quick-and-dirty usability test. So validation ended up being a step of confirming our own assumptions rather than truly learning whether they hold in all aspects. The framework’s illustrations seem to suggest exactly such a process.
    The Nielsen Norman Group has a super insightful article about the difference between validation and testing.
  2. Bad testing increases the risk of failure
    A simplified validation of ideas does not account for the numerous underlying assumptions we hold about a solution’s feasibility, viability, and desirability. The more unchecked assumptions we carry into development, the higher the uncertainty of success. That means a team might spend several sprints working on a feature without the evidence and confidence that it will serve both the user’s and the company’s goals. Multiply this by the number of features a team develops and the number of teams a company has, and there you have the huge opportunity cost of bad testing!
  3. The iterative nature of product discovery is not well represented.
    Just because a design is added to the engineers’ Jira ticket or released doesn’t mean that the feature is creating value and the project is over. In most cases, you’ll obtain insights after the release that will inform future iterations. A linear process does not accommodate that.

Add some experimentation to it

The role of assumptions

So while the diamond approach is helpful, we adapted the process slightly in the second diamond, based on Teresa Torres’s approach of continuous discovery, to accommodate the points mentioned above:

After diverging, coming up with a lot of ideas (around 20–30), and choosing one solution that addresses the opportunity/problem, we started by checking the assumptions that need to be true for the solution to be successful.

Through exercises such as the pre-mortem and the analysis of the proposed user journey, we identified where we were making assumptions about the feasibility, viability or desirability of a solution (the three keys to a successful feature/product).

Example
- Desirability assumption: If we show Babbel users a way to track their feelings after the learning sessions, they will engage with it in a meaningful way.
- Feasibility assumption: If users tell us how they feel about a learning session, we are able to customize their next learning session to their preferences.
- Viability assumption: If we show users customized learning sessions based on their preferences, we will increase their engagement and hence their lifetime value.

Once we had identified the assumptions, it was time to find and map the most critical ones.
Not all assumptions are built alike: for some, you might have past research to rely on. For others, you’ll have experts in the company who can help clarify them. But often, for the most crucial assumptions, those make-or-break, leap-of-faith kind of assumptions that determine whether the project will succeed, there isn’t enough evidence. And so the focus then shifts to finding evidence to either refute or validate those assumptions.
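To make that mapping concrete, here’s a minimal sketch of how you could score and rank assumptions in code. The 1–5 scales, the `risk` heuristic, and the example statements are my own invented illustration, not a formal part of either framework:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    category: str    # "desirability" | "feasibility" | "viability"
    importance: int  # 1-5: how badly the solution fails if this is wrong
    evidence: int    # 1-5: how much existing research/expertise backs it

    @property
    def risk(self) -> int:
        # Leap-of-faith assumptions: very important, barely any evidence
        return self.importance - self.evidence

assumptions = [
    Assumption("Users will track their feelings after a session",
               "desirability", importance=5, evidence=1),
    Assumption("We can customize the next session from that input",
               "feasibility", importance=4, evidence=3),
    Assumption("Customized sessions increase lifetime value",
               "viability", importance=5, evidence=2),
]

# Test the riskiest assumptions first
for a in sorted(assumptions, key=lambda a: a.risk, reverse=True):
    print(f"[{a.category}] risk={a.risk}: {a.statement}")
```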

Testing assumptions

The process of testing assumptions, also called de-risking a solution, works by isolating individual assumptions and devising quick experiments to test them one by one.

Strategyzer has a very helpful list of cheap experiments in their book “Testing Business Ideas” that we liked to consult when thinking of quick ways to validate or falsify assumptions. A good summary of the process of assumption testing can also be found in this article.

As we progressed through several experiments of increasing fidelity (e.g. an in-app survey with 1,400 users, a pen-and-paper prototype, a technical proof of concept), we gained insights and confidence in our solution. And once we had enough confidence that our solution was likely to be desirable, viable, and feasible, we moved forward to a high-fidelity test. That’s normally an A/B test, and its goal is to give us real, quantitative insights that determine whether all users will see this feature.
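As an illustration of what the quantitative readout of such a test can look like, here’s a minimal two-proportion z-test sketch. All numbers are invented, and the “engagement” metric is just a stand-in for whatever your A/B test actually measures:

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did the variant's engagement rate differ?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: control vs. feeling-tracking variant
z, p = two_proportion_ztest(conv_a=420, n_a=7000, conv_b=510, n_b=7000)
print(f"z = {z:.2f}, p = {p:.4f}")  # roll out if p is below your threshold
```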

Side note: “enough” confidence does not mean absolute certainty. I have come to believe that it’s better to err on the side of action than of caution. So if in doubt, I will argue for getting a solution out to a large group of users rather than running more and more experiments and getting lost in discussions just to gain more non-representative evidence.

After the decision to conduct an A/B experiment was made, the design and development phase began. And this is still truly a part of the discovery process. How so? Design and development are just one more step toward finding out whether the devised solution will truly address the opportunity and solve the problem.

And that’s why I don’t consider the handover of designs to product managers or engineers the end of the discovery either. It’s only the end of one cycle. The next cycle of iterations begins as we gain insights about how users respond to the A/B test.

Depending on the results of an A/B test, we either stick with the solution and improve it, pursue a different solution for the same problem, or move on to the next opportunity space.

A tool that has been massively helpful in guiding us through these discussions and decisions is the opportunity solution tree. I’ve used it at different companies at this point, and it brings a lot of clarity and support to product teams.
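If you haven’t seen one: an opportunity solution tree links a desired outcome to the opportunities that could drive it, candidate solutions for each opportunity, and the assumption tests for each solution. As a purely illustrative sketch (all node names invented), its structure could be written down like this:

```python
# Hypothetical opportunity solution tree: each key is a node,
# each value holds its children; leaves are assumption tests.
tree = {
    "Outcome: increase learner engagement": {
        "Opportunity: learners lose motivation after sessions": {
            "Solution: track feelings after each session": [
                "Test: in-app survey",
                "Test: paper prototype",
                "Test: technical proof of concept",
                "Test: A/B experiment",
            ],
        },
    },
}

def show(node, indent=0):
    """Walk the tree top-down, the way a team discussion would."""
    if isinstance(node, dict):
        for label, children in node.items():
            print("  " * indent + label)
            show(children, indent + 1)
    else:  # leaf list of assumption tests
        for label in node:
            print("  " * indent + label)

show(tree)
```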

Go over here to read Part 2!

Huge shoutout to my partner in crime and collaborator on this framework Anna Stutter Garcia ✨ !
