Launching and measuring success for new product bets

Tanguy Crusson
Atlassian Product Craft Blog
Jun 2, 2022

The thing I appreciate most about working at Atlassian is how much we value moving with urgency: the opportunity we have is massive in every direction we look, and we have no time to waste. But I remember very vividly a discussion with Mike Cannon-Brookes, who is a big driver of that sense of urgency in the company. His advice was: “be patient”. That took me aback for a moment. It felt very counter-intuitive: isn’t Mike the person always saying we don’t move fast enough?

Move with urgency. Be patient… Which one is it?

But then I got it: yes, as product leaders we need to move fast, but at the same time we have to be methodical about not rushing things, because rushing can squander our opportunity. I’ve been working on early stage bets at Atlassian ever since I started here 8+ years ago, and I’ve developed a bit of a step-by-step recipe for testing them before going big. There are two things my teams have been using that I’ve come to believe are key, and that I’d like to share in this blog. My approach for early stage bets has been, every time:

  1. Prove that your product gives value to a small number of users
  2. Then, and only then, prove that your product gives value to a large number of users
  3. Finally, focus on distribution

In the product I’m working on now, this was: prove the product is useful to 10 teams, then 100, then 1000 companies — before focusing on distribution.

Prove value to a small number of users

In the first phase of an early stage bet you should always plan to work with only a small number of users. This phase is about unpacking a problem and testing solutions while iterating fast, and that process works best when you pick a small number of users/customers who feel the problem the most, and get the solution right for them. It’s easier and much more focused than “throw it out there and see what sticks”: users who feel the pain the most will be happy to work with you, and their feedback removes the need to pile on untested assumptions.

As your confidence in the solution grows, it can be tempting to just open the floodgates and let everyone use the product. It’s a bad idea though. Because we’re Atlassian we have the reverse problem to most startups, who need to work hard to get users: many people will try whatever we ship just because it’s from Atlassian. But, as for those startups, if it’s too early and first impressions aren’t good, they won’t stick around and we’ll have to work hard to get them to try again.

If you’re a product manager working on new bets in an established company, this illustration captures the idea brilliantly:

[Illustration credit: Reforge, via Preet Anand]

The idea is to limit the number of users who get access to your product. But how do you demonstrate progress to your leadership if you’re not reporting on evaluations/MAU/etc.? You need to find the metrics that best represent what you’re trying to prove. In our case we started with the following for the private preview of the product:

10 active teams have been using the product for more than 3 months and plan to continue using it when we enter beta

Then for alpha we used the following milestone:

Product market fit score of 40% or greater with 100 active teams

For beta we’ve used:

Product market fit score of 40% or greater with 1000 active customers

There are two aspects to these milestones: quantitative and qualitative.

Quantitative

The number of people should match what you believe is necessary to prove value at your current stage, and no more. Initially you can handpick who you invite after doing some qualification, then progressively invite users in bulk when you want to test things like onboarding and conversion. Even as you start scaling, make sure you choose how many people you add, keeping the safety funnel in mind.

In my case, the goal every time was to get one order of magnitude higher: 10 → 100 → 1000 (active teams).
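To make this concrete, here is a minimal sketch of what checking the private preview milestone could look like. The data model is entirely hypothetical (one record per team, with first/last activity dates and their stated intent to keep using the product); your own usage data will look different:

```python
from datetime import date

# Hypothetical record per team: when they first and last used the
# product, and whether they told us they plan to keep using it.
teams = [
    {"name": "team-a", "first_active": date(2022, 1, 10),
     "last_active": date(2022, 5, 30), "plans_to_continue": True},
    # ... more team records from your usage data
]

def meets_preview_milestone(teams, today, target=10, min_tenure_days=90):
    """True when at least `target` teams have used the product for 3+
    months, are still recently active, and plan to continue using it."""
    qualifying = [
        t for t in teams
        if (t["last_active"] - t["first_active"]).days >= min_tenure_days
        and (today - t["last_active"]).days <= 14  # "active": seen in the last 2 weeks
        and t["plans_to_continue"]
    ]
    return len(qualifying) >= target

print(meets_preview_milestone(teams, today=date(2022, 6, 2)))  # False: only 1 qualifying team
```

The definition of “active” (here, seen within the last two weeks) is a judgment call you should make explicitly and revisit as the product matures.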

Qualitative

For the quality score I explored many different approaches that didn’t quite work for early stage bets, until I came across the product-market fit score. It’s a brilliant metric because the process focuses mostly on the qualitative aspects, yet gives you a directional number you can share with leadership. I discovered it a few years back and have used it ever since.

It’s based on a survey you share with your users that asks them a few questions, most importantly this one:

How would you feel if you could no longer use [product]?

A) Very disappointed B) Somewhat disappointed C) Not disappointed

Your product-market fit score is A/(A+B+C), and you want it to track north of 40% (in our case: 50–60%). This number is good to share and to focus on improving, and the way you improve it is by digging into the survey responses, and the follow-up conversations you have with users based on them.
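Computing the score from raw survey answers is trivial; the sketch below assumes answers exported as "A"/"B"/"C" strings, which is just a placeholder for however your survey tool labels them:

```python
from collections import Counter

# Survey answers to "How would you feel if you could no longer use [product]?"
# A = very disappointed, B = somewhat disappointed, C = not disappointed.
responses = ["A", "A", "B", "C", "A", "B", "A", "C", "A", "B"]

counts = Counter(responses)
pmf_score = counts["A"] / sum(counts.values())  # A / (A + B + C)

print(f"PMF score: {pmf_score:.0%}")  # 5 of 10 -> 50%; aim for 40%+
```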

The reason this score is brilliant is that it’s very lightweight, and you get value from even a small number of responses. Asking this question can trigger an emotional response: we’ve had people reach out asking us to please, please, please not take the product away from them. That’s exactly what we wanted to validate: not whether they were merely happy with the solution, but whether its absence would be a real problem for them, a clear sign of problem/solution fit.

Prove value to a large number of users

At some point you become ready to share with more users; in our case it was when we reached a 50% PMF score with more than 100 active teams. How do your metrics change? First, you can keep the two metrics from the first phase, and they act as your primary metrics, the ones that reflect the health of the product. But it now makes sense to add secondary metrics, the ones you can try to move day to day with changes to the product, to help you break down how you approach bringing in more users. That’s where pirate metrics (AARRR) come in:

Credit: the amazing Sten Pittet @ tability.io

The first time I looked at this, my belief was that we should optimize for retention first and minimize churn. Tempting as it was to just invite more users, if you have a “leaky bucket” and people churn a lot, I’ve seen the impact on net growth be massive. This happened to us back on Hipchat and I still wear this battle scar 💔
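A quick back-of-the-envelope simulation shows why the leaky bucket matters so much. The signup and churn numbers below are purely illustrative:

```python
# Two products each add 100 teams a month; one loses 5% of its base
# every month, the other loses 20%.
def simulate(monthly_signups, monthly_churn, months=24):
    active = 0.0
    for _ in range(months):
        active = active * (1 - monthly_churn) + monthly_signups
    return round(active)

print(simulate(100, 0.05))  # ~1416 teams, still climbing toward 100/0.05 = 2000
print(simulate(100, 0.20))  # ~498 teams, already stuck near 100/0.20 = 500
```

Same acquisition effort, four times less growth: the leaky bucket caps you at signups divided by churn, no matter how many users you pour in.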

I recently saw this blog from Sten Pittet, an (awesome) ex-Atlassian, who does a fantastic job of explaining why that is the case:

Here’s the TLDR:

It’s tempting to build your business by following the steps of the AARRR funnel, but it’s best to approach your customer journey in this order:

1. Retention: do people find value in your product?

2. Activation: can people set it up themselves?

3. Revenue: is your pricing effective?

4. Referral: can you leverage existing customers to find the next ones?

5. Acquisition: do you know how to land new leads effectively?

Defining and tracking these metrics is not simple, but remember they are just a tool to help you get to success, not an end in themselves. I recommend treating these metrics like your product: start small, iterate, and change them based on what you learn.

Start with retention: automate and test the metrics until you believe you have a good handle on how you measure it, then focus on the changes that help you increase it (for example by prioritizing reliability > usability > new features). Once that is settled, move your focus to activation metrics and just keep a watch on retention. Rinse and repeat. Once you’re at the last bucket, you’re ready to open the floodgates and put all your focus on distribution.
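As an example of what “automate the metrics” can look like at its simplest, here is a sketch that computes cohort retention from hypothetical (team, month) activity events; a real pipeline would read these from your analytics store:

```python
from collections import defaultdict

# Hypothetical activity log: one (team, month) entry per month a team was active.
events = [("team-a", "2022-01"), ("team-a", "2022-02"),
          ("team-b", "2022-01"), ("team-c", "2022-02"),
          ("team-c", "2022-03")]

first_month = {}                  # team -> the month it first appeared (its cohort)
active_months = defaultdict(set)  # month -> set of teams active that month
for team, month in sorted(events):
    first_month.setdefault(team, month)
    active_months[month].add(team)

def retention(cohort_month, later_month):
    """Share of the cohort (teams first seen in cohort_month)
    still active in later_month."""
    cohort = {t for t, m in first_month.items() if m == cohort_month}
    if not cohort:
        return 0.0
    return len(cohort & active_months[later_month]) / len(cohort)

print(retention("2022-01", "2022-02"))  # 0.5: team-a retained, team-b churned
```

Once numbers like these are trustworthy and automated, you can tie product changes to movements in them, then repeat the exercise for activation and the rest of the funnel.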

Of course this is the ideal world; in the real world things are a bit messier. But keeping the fundamentals in mind can help you and your teams stay focused on the big picture.

What are your strategies for measuring success and communicating progress when working on early stage bets? 👇