A Recipe to Build, Measure and Learn

How to apply the Build-Measure-Learn framework for product success.


I’ve spent the last year and a bit working on a consumer-facing mobile app as a product designer. When I joined, the product was ready to scale and optimise, so we needed a framework to make sure we did exactly that. I worked on the platform team of the product, alongside a BA, a PO and a PM. As a lover of all things process and efficiency, I wanted a bird’s-eye view of our ways of working, and so this recipe came about.

Ultimately there are infinite ways to adopt a build, measure and learn framework, and working in an agile environment means we need to adapt. Think of this recipe as a loose set of principles to guide your team to product success: cooking rather than baking. You’re a chef, and the end goal is to deliver a product that people will enjoy and want to come back to.

How do we define build, measure and learn?

🔨 Build: Define the work — the what, why and how
📈 Measure: Monitor the work that’s released
💡 Learn: Make decisions on the work being monitored

This article uses the following terms and abbreviations:

OKR: Objectives and Key Results
RACI: Responsible, Accountable, Consulted, Informed. See the RACI matrix.
JFDI: Just F*n Do It. The inevitable work, such as updates to branding.
Definition of Approved: we meet the product vision
Definition of Ready: the work is dev ready
Definition of Done: the work is dev done
Definition of Releasable: the work is ready to be released

High-level overview

We operate a dual-track discovery/delivery process. Here is the tl;dr of how we do that:

  1. Add an idea/feature/optimisation to the discovery backlog in Jira
  2. PO/PM to prioritise work brought into the discovery sprint
  3. Designer and BA to work together to investigate and refine epics
  4. Once the Definition of Approved is met, the ticket can be moved to the delivery backlog
  5. Final refinements and Definition of Ready
  6. Prioritise and bring into delivery sprint
  7. Definition of Done
  8. Definition of Releasable
  9. Monitor the work
  10. Report back

In-depth process

Here is the breakdown of what we do at each step of the process.

1. Add an idea/feature/optimisation to the discovery backlog in Jira

An idea can come from anywhere (PMs, developers, users, stakeholders, data, etc.) and should be based on solid qual and quant data. In the first instance, we aim to understand what the idea is and why we should investigate it. Ideas are added to the discovery backlog in Jira as epics:

Epics allow us to break down features into user stories to deliver increments of user value in each sprint.

When we add an epic to the backlog, we use the following template:

Hypothesis
We believe that [idea]
For [user]
Will drive [metric]
Because [expected outcome]

Job story
As a [user]
When I [situation]
I want to [motivation]
So that I [expected outcome]

Designs
Link prototype (Overflow, Invision, Figma etc)

Success measures
What are the success metrics? e.g. increase retention by 5%, increase conversion by 10%.

Checklist
✅ Stakeholder sign off
✅ Refined with the team

Definition of approved
✅ Purpose understood
✅ Measurable
✅ Project vision aligned

To bring the work into a discovery sprint, the “Hypothesis” section must be complete, as this is what enables the PM to prioritise discovery backlog items.
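
To make that gate concrete, here’s a minimal sketch of the epic template as code. The class and field names are illustrative, not our actual Jira fields; the one rule it encodes is that every hypothesis field must be filled in before prioritisation.

```python
from dataclasses import dataclass

@dataclass
class Epic:
    """Mirrors the epic template above; names are illustrative."""
    idea: str = ""              # We believe that [idea]
    user: str = ""              # For [user]
    metric: str = ""            # Will drive [metric]
    expected_outcome: str = ""  # Because [expected outcome]
    job_story: str = ""
    design_link: str = ""

    def hypothesis_complete(self) -> bool:
        # All four hypothesis fields must be filled in before the PM
        # can prioritise this epic into a discovery sprint.
        return all([self.idea, self.user, self.metric, self.expected_outcome])

epic = Epic(
    idea="a streamlined onboarding flow",
    user="first-time users",
    metric="day-1 retention",
    expected_outcome="fewer drop-offs on the first screen",
)
print(epic.hypothesis_complete())  # True: eligible for prioritisation
```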

2. PO/PM to prioritise work brought into the discovery sprint

We use the following methods to prioritise work:

  • Product road maps
  • Split incoming epics and stories across the following categories: 20% support / 20% optimisation / 60% feature work (see the sketch after this list)
  • JFDI work
  • Prioritisation matrix
  • OKRs to identify future features
  • Stakeholder requests
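
Here’s the sketch the split above refers to: a toy helper that applies the 20/20/60 rule of thumb to a sprint’s capacity in story points. The function name and numbers are mine for illustration, not part of our tooling.

```python
def split_capacity(points: int) -> dict:
    """Apply the 20/20/60 rule of thumb to sprint capacity in story points."""
    weights = {"support": 0.2, "optimisation": 0.2, "feature": 0.6}
    split = {name: round(points * weight) for name, weight in weights.items()}
    # Push any rounding drift onto feature work so the total still adds up.
    split["feature"] += points - sum(split.values())
    return split

print(split_capacity(42))  # {'support': 8, 'optimisation': 8, 'feature': 26}
```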

When the PM decides what is brought into the discovery sprint, the ticket then enters a kanban workflow on the design/discovery board.

3. Designer and BA to work together to investigate and refine epics

A combination of the following methods are used to refine epics:

  • Lean user research
  • Data analysis
  • Workshops
  • Refinement with the squads
  • Stakeholder input
  • Rapid prototyping
  • …and many more

Design and discovery methods are endless and depend on what you’re discovering, so I won’t list them all here. We refer to a playbook when choosing a method.

4. Once the “Definition of Approved” is met, the ticket can be moved to the delivery backlog

The following requirements must be met before progressing the work:

Purpose understood
The business and customer need is understood, quantified and proven

Measurable
The work is measurable, and we know how it will be measured

Aligns with product vision
The work is clearly aligned with the product vision

When the “Definition of Approved” is met, the epic is moved to the delivery backlog in Jira. If we can’t check off the requirements, we decide not to progress with the work.

5. Final refinements and “Definition of Ready”

Once the epic has moved to the delivery backlog, the following checklist is completed to ensure it meets the “Definition of Ready” for a future delivery sprint:

✅ The ticket has a user story, acceptance criteria and has been refined
✅ Risks, assumptions and dependencies are documented at story level
✅ Designs are completed, linked at story level and usability tested
✅ Designs have dev input and are signed off by the PO/PM
✅ The whole team has estimated the story
✅ Subtasks are assigned and linked to the story
✅ The team understands the purpose of the work

6. Prioritise and bring into delivery sprint

Prior to sprint planning, the discovery team meets to prioritise what is coming into the next delivery sprint. This ensures the next day’s sprint planning session with the delivery team runs efficiently.

We use methods such as the prioritisation matrix, and as with discovery planning, we follow a 20% support / 20% optimisation / 60% feature work breakdown for delivery sprints too.

7. Definition of Done

Once the work is in a delivery sprint, the following criteria must be met before release:

✅ All acceptance criteria have been met
✅ Code has an appropriate level of test coverage
✅ Any technical debt introduced is logged to be refactored later
✅ User story has QA sign-off
✅ Event tags in Firebase Analytics have been tested (see the sketch after this list)
✅ Sufficient logging of new functionality is present (back end)
✅ Approved by design
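
For the Firebase Analytics item, one way to sanity-check an event tag from outside the app is Google’s Measurement Protocol validation endpoint. This sketch assumes a GA4/Firebase setup, which may differ from yours, and every identifier in it is a placeholder.

```python
import json
import urllib.request

# GA4 / Firebase Measurement Protocol *validation* endpoint: it returns
# validation messages instead of recording the event.
url = (
    "https://www.google-analytics.com/debug/mp/collect"
    "?firebase_app_id=YOUR_APP_ID&api_secret=YOUR_API_SECRET"
)
payload = {
    "app_instance_id": "YOUR_APP_INSTANCE_ID",
    "events": [{"name": "onboarding_complete", "params": {"step": 3}}],
}
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # any validation messages for the event
```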

8. Definition of Releasable

We keep this step separate from the previous one because it involves different people, and there is more we need to do to ensure the work is releasable:

✅ Dashboards have been added (analytics visualisation)
✅ Load testing
✅ Change request approved
✅ Internal paperwork approved
✅ App store updates (screenshots, description, what’s new)
✅ Regression testing
✅ App upgrade testing

9. Monitor the work

During the delivery sprint, one or more of the following tools are used to monitor the work about to be released:

  • Facebook Analytics
  • Sumo Logic
  • Power BI
  • Firebase

Once the work is released, the epic remains in the “Monitoring” column on the design/discovery board.

10. Report back

How do we report back to stakeholders?

  • A bi-weekly newsletter is emailed to stakeholders with key metrics
  • Team review with all squads and stakeholders at the end of each sprint
  • Demos of work implemented in that sprint
  • Sprint performance (number of story points and burndown charts; see the sketch after this list)
  • If we’re reporting on a large piece of work, we might create a slide deck and present this back to stakeholders with findings
  • We decide who is accountable for the reporting depending on who has owned the work from the beginning. The RACI matrix could also be useful to decide this
  • Fortnightly reporting catch-ups with the discovery team where we use a spreadsheet to monitor the work
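
For the sprint performance bullet, the data behind a burndown chart is simply the points remaining at the end of each day. A toy sketch with made-up numbers:

```python
def burndown(total_points: int, completed_per_day: list) -> list:
    """Points remaining at the end of each sprint day."""
    remaining, line = total_points, []
    for done in completed_per_day:
        remaining -= done
        line.append(remaining)
    return line

# A ten-day sprint: 40 points committed, points closed per day.
print(burndown(40, [0, 3, 5, 2, 8, 0, 6, 5, 4, 7]))
# [40, 37, 32, 30, 22, 22, 16, 11, 7, 0]
```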

When do we report back?

  • When we have statistical significance*. There really is no timeline, as it’s highly dependent on the feature or optimisation being monitored.

*If results are inconclusive, that still tells us something, as we’ve disproven the hypothesis. Success doesn’t always have to mean we’ve achieved the success metric. The key is learning something about our users.
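
As a sketch of what “statistical significance” can mean for a conversion-style metric, here’s a two-proportion z-test using statsmodels. The library choice and the counts are mine for illustration, not necessarily the team’s actual tooling.

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up counts: conversions and exposures for control vs. variant.
conversions = [130, 165]
exposures = [2400, 2350]

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
if p_value < 0.05:
    print(f"Significant at the 95% level (p={p_value:.3f}): report back")
else:
    print(f"Inconclusive so far (p={p_value:.3f}): keep monitoring")
```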

Conclusion

Our framework is always evolving and improving over time. There are also frameworks within frameworks that we follow, such as the AARRR funnel and the Google HEART framework. I of course did not come up with our build, measure and learn framework alone; I work with a legendary team who are the reason behind our product’s success.
