Metrics / Execution

Nader Balata · Published in PM Job Kit · Nov 21, 2020


Analytical skills are core to a PM’s role, and PMs need to be fluent in numbers. Good product managers develop hero and secondary metrics to make decisions, identify issues, and grow their products. As the saying goes: “What you don’t measure, you can’t improve.”

In a set of PM interviews, you’re likely going to have a Metrics / Execution interview. Your interviewer will ask about some of your key product metrics, what metrics you would track for a new feature, and how you might use data to solve product issues such as declining adoption.

Be ready to speak about your past and current products’ key metrics. For more information, please see Your Metrics.

First, let’s talk about Pirate Metrics.

The Pirate Metrics methodology for product metrics

There are many tools and frameworks for establishing your product goals and metrics, but I think Dave McClure’s Pirate Metrics is the best. Pirate Metrics, also known as AARRR, stands for:

Acquisition — This is where you acquired your user from, such as a Facebook ad, a YouTube video, or even organic search. This stage is your first contact with customers.

Activation — This is your user’s first-time experience, and it will have an impact on whether they return. In this stage, it’s critical to speed up time-to-value, e.g. the user completing sign-up, discovering friends, or even playing a game level.

Retention and Engagement — This is the frequency with which your users return to use your product and how many actions/events they perform in your app. If you’re YouTube, you might get a user to open the app, but if they don’t watch a video, it means they didn’t find immediate value.

Referral — Your best users will be your most active users and those who recommend your app to their friends. As an example, if you’re a PM for Mint and a user manages their budgets weekly and then invites a friend via the Tell a Friend link to try out Mint, that’s a referral. Referrals aren’t always direct from user to user; they also come in other forms, like App Store ratings and even word of mouth.

Revenue — For products that are monetizable, this is the stage where a user pays for your product or service. A Dropbox PM would optimize the Dropbox app to drive users to upgrade to Dropbox Plus (incidentally there are four different upgrade paths in the Dropbox app).

I like this framework as it captures your user’s full journey and most of all helps you target the areas you need to focus on when building and growing your product.
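
To make the funnel concrete, here is a minimal sketch that expresses each AARRR stage as a count of users and prints stage-to-stage conversion rates. All the numbers and stage definitions below are made up for illustration; your own events and thresholds will differ.

    # Minimal AARRR funnel sketch; all counts are hypothetical.
    funnel = {
        "Acquisition": 100_000,  # users who arrived from ads, video, or organic search
        "Activation": 42_000,    # users who completed their first-time experience
        "Retention": 18_000,     # users who came back within 7 days
        "Referral": 2_500,       # users who invited a friend or left a rating
        "Revenue": 1_200,        # users who paid or upgraded
    }

    stages = list(funnel)
    for prev, curr in zip(stages, stages[1:]):
        rate = funnel[curr] / funnel[prev]
        print(f"{prev} -> {curr}: {rate:.1%}")

Reading the funnel this way makes it easy to see which stage needs attention next.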

Pirate Metrics for an MVP

When launching an MVP, Beta, or a product experiment, the most important thing is to gain Engagement. Engagement, in this case, means high-value events or actions associated with the product. Put another way, ask yourself, “Where is the user getting the most value?”

Example: Imagine you just shipped a new automated Get Help feature. The ideal flow would be:

  1. User taps Get Help button
  2. A list of possible solutions is displayed
  3. User selects the best choice which takes them to a page with more details
  4. At the end of the page, a prompt asks the user if this was helpful
  5. They tap Yes

In this scenario, the app did solve the user’s problem and delivered value. Going back to the question: “where is the user getting the most value” — it would be realized in step 5.

The most important metric here would be an Engagement metric, something like: Users who tapped Yes (Step 5) / Users who tapped the Get Help button (Step 1), expressed as a percentage. A minimal calculation sketch follows the list below.

Engagement in this case can also be measured as:

  • Total Events — Get Help button
  • Total Events — Choice option
  • Total Events — Yes/No button
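
Here is that calculation as a minimal sketch, assuming a hypothetical event log where the events are named get_help_tapped and helpful_yes (the event names, user IDs, and data are invented for the example):

    # Hypothetical event log for the Get Help flow; event names are illustrative.
    events = [
        {"user": "u1", "event": "get_help_tapped"},
        {"user": "u1", "event": "solution_selected"},
        {"user": "u1", "event": "helpful_yes"},
        {"user": "u2", "event": "get_help_tapped"},
        {"user": "u3", "event": "get_help_tapped"},
        {"user": "u3", "event": "helpful_no"},
    ]

    started = {e["user"] for e in events if e["event"] == "get_help_tapped"}  # Step 1
    helped = {e["user"] for e in events if e["event"] == "helpful_yes"}       # Step 5

    # Engagement: % of users who tapped Get Help and ended on Yes
    engagement = len(helped & started) / len(started)
    print(f"Get Help engagement: {engagement:.0%}")  # 33% for this toy sample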

Diving even deeper into MVP metrics, you can think of them in this way:

Engagement — Short-term value metric

  • Solves the user’s need
  • User is getting immediate value
  • They know how to use the product
  • Leading indicator of Retention

Retention — Medium-term value metric (Product-Market fit)

  • Signifies product-market fit
  • Leading indicator of Adoption

Adoption — Long-term metric (Growth)

Revenue — Monetization / Profitability

Metrics

There are typically three types of Metrics / Execution questions:

  • Tradeoffs — evaluating two potential solutions and determining the best one
  • Debugging — determining why a metric changed
  • Goals and Metrics — setting a goal or metric for a given business problem

Tradeoffs

Tradeoff questions are typically questions where there are two possible options, or where you need to evaluate a new feature launch. Examples:

  • How do you decide whether to launch new social features for Google Maps?
  • You’re the PM for Facebook Stories. Reactions are up 20% but Comments are down 10%. What would you do?

Assuming the goal is to increase Engagement, my approach would be to conduct two A/B tests to validate it.

A/B Test 1

Control Group — this is the group seeing the current experience or the first of two new experiences (let’s call that Option A)

  • 1-month test
  • 1% of Users shown Current experience or Option A

Test Group — this is the group seeing your new experience or the second of two new experiences (let’s call that Option B)

  • 1-month test
  • 1% of Users shown New experience or Option B

Assume the Test group shows higher Engagement. Then re-confirm with another, wider test.

A/B Test 2

Control Group — this is the group seeing the current experience or the first of two new experiences (let’s call that Option A)

  • 3-month test
  • 5% of Users shown Current experience or Option A

Test Group — this is the group seeing your new experience or the second of two new experiences (let’s call that Option B)

  • 3-month test
  • 5% of Users shown New experience or Option B

Assume the Test group shows higher Engagement again. Then roll out the new experience or Option B.
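
Before acting on either test, it’s worth confirming the lift is statistically meaningful. Here is a minimal sketch of one common check, a two-proportion z-test, using purely illustrative counts; your data science team may well use a different method or tooling.

    import math

    # Illustrative counts only: engaged users out of exposed users in each group.
    control_engaged, control_n = 1_150, 10_000  # current experience / Option A
    test_engaged, test_n = 1_280, 10_000        # new experience / Option B

    p1, p2 = control_engaged / control_n, test_engaged / test_n
    p_pool = (control_engaged + test_engaged) / (control_n + test_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / test_n))
    z = (p2 - p1) / se

    print(f"Control: {p1:.1%}  Test: {p2:.1%}  z = {z:.2f}")
    # |z| > 1.96 roughly corresponds to significance at the 5% level (two-sided).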

Notes and Tradeoffs

  • It’s important to factor in the company’s mission and whether your decision ultimately serves it. It’s possible that the preferred option detracts from the mission or has long-term negative effects.
  • You should validate your test design and results with your data science team.

Debug (Metric change)

For questions related to a metric change such as:

  • LinkedIn Connections are down by 10%, how would you figure out why?
  • You’re the PM for YouTube Ads and ad creation dropped 10%, how would you find the problem?

My approach would be to conduct a Debugging exercise to pinpoint the problem. First, it’s important to get the context.

  1. Understand the Metric and Timeframe
  • Be clear on what the action really means. For example, if LinkedIn Connections are down, does that mean requests are down or accepts are down?
  • Over what timeframe did the metric change? A sudden change would suggest a glitch of some sort; a gradual change would suggest a larger problem such as a shift in user behavior.
  2. Why would this be a problem for [company]?
  • Why is this a problem for the mission?
  • Why is this a problem for the users?
  • Why is this a problem for the company strategy?

Now that we have context, we can begin the analysis. I like to approach the analysis in the following steps:

  1. Quick Cuts — identify most common causes
  2. External Factors — identify external factors that could be causing change
  3. Internal Factors — identify internal factors that could be causing change
  4. User Flow Analysis — review the exact user flow

Quick Cuts

  1. Geographical
  • Is this region-specific or happening globally? If it’s regional, we might have to check for recent news with respect to geopolitics, regulations, epidemics, or regional competition.
  2. Platform
  • Is this issue limited to iOS, Android, or Desktop?
  • Did Chrome make a recent change?
  • Does Safari have new privacy controls?
  • Have you released a new app update recently?
  3. Time of the year
  • Seasonal — Christmas or Thanksgiving drop-off?
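
In practice, each quick cut is the same metric sliced by a dimension (region, platform, date) and compared over time. A rough sketch with made-up LinkedIn-style data; the field names and numbers are hypothetical:

    from collections import defaultdict

    # Hypothetical daily Connection counts tagged with region and platform.
    rows = [
        {"date": "2020-11-01", "region": "US", "platform": "iOS", "connections": 900},
        {"date": "2020-11-01", "region": "US", "platform": "Android", "connections": 870},
        {"date": "2020-11-01", "region": "EU", "platform": "iOS", "connections": 650},
        {"date": "2020-11-02", "region": "US", "platform": "iOS", "connections": 880},
        {"date": "2020-11-02", "region": "US", "platform": "Android", "connections": 540},  # the dip
        {"date": "2020-11-02", "region": "EU", "platform": "iOS", "connections": 660},
    ]

    def cut(rows, dimension):
        """Sum the metric per (dimension value, date) so slices can be compared day over day."""
        totals = defaultdict(lambda: defaultdict(int))
        for r in rows:
            totals[r[dimension]][r["date"]] += r["connections"]
        return {k: dict(v) for k, v in totals.items()}

    for dim in ("region", "platform"):
        print(dim, cut(rows, dim))
    # A drop confined to one slice (here, Android on Nov 2) points at a platform issue.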

If it’s none of these, then I would do an External Factors Analysis…

External Factors:

  1. User Segment
  • Is the change limited to a specific user segment or demographic? There could be a new product/feature/activity that’s attracting this segment for networking/connecting.
  2. Industry & Competition
  • Is this metric change due to rising competition? For example, did a competitor ship a new feature or recently launch a marketing campaign that is driving users away?

If it’s none of these, then I would do an Internal Factors Analysis…

Internal Factors:

  1. System-wide issues
  • Is your product’s overall usage down or otherwise affected? If so, there is a much bigger problem, and we would need to look at the overall engagement of the entire platform.
  2. Data accuracy of metrics
  • Are events not firing?
  • Are the right events being tracked?
  • Are the reports set up correctly? E.g. in a Flow Analysis, is Step 2 right?
  • Is there an issue with the analytics tool itself?
  • Has the metric definition changed?
  3. Scope
  • Is this affecting other services that use the same feature?
  4. Product changes
  • UX changes
  • Algorithm update
  5. Product quality
  • Has performance degraded?
  • Is the app experiencing slower speeds?
  • Is the app crashing more often?

If it’s none of these, then I would do a User Flow Analysis…

User Flow Analysis — (Example: Instagram Story Creation is down)

Flow 1:

  1. User taps Instagram app, user jumps to Instagram
  2. User taps Story + button at top
  3. (Instagram checks for Camera and Microphone access)
  4. User creates Story
  5. User taps Send To

Flow 2:

  1. User gets Notification to share a new Instagram Story
  2. User taps Notification, user jumps to Instagram
  3. User taps Story + button at top
  4. (Instagram checks for Camera and Microphone access)
  5. User creates Story
  6. User taps Send To

The key here is to isolate where in the user flow the issue lies. Some examples (a step-by-step comparison sketch follows these):

  • If users drop off at Step 3 of Flow 1 (they tapped the Story + button but never reached the permissions check), then the diagnosis would be that something is wrong with the Story + button
  • If the issue was in Step 1 of Flow 2, then the diagnosis would be that users aren’t getting Notifications, which is reducing Story creation.
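
One way to find the drop-off step is to compare step-to-step conversion for the flow week over week. A minimal sketch, with hypothetical counts of users reaching each step of Flow 1:

    # Hypothetical counts of users reaching each step of Flow 1, last week vs. this week.
    steps = ["Open app", "Tap Story +", "Permissions check", "Create Story", "Tap Send To"]
    last_week = [100_000, 30_000, 29_000, 24_000, 22_000]
    this_week = [101_000, 30_500, 21_000, 17_500, 16_000]

    for i in range(1, len(steps)):
        prev_rate = last_week[i] / last_week[i - 1]
        curr_rate = this_week[i] / this_week[i - 1]
        print(f"{steps[i - 1]} -> {steps[i]}: {prev_rate:.0%} -> {curr_rate:.0%} "
              f"({curr_rate - prev_rate:+.0%})")
    # The transition whose conversion collapsed (here, Tap Story + -> Permissions check)
    # is where to start debugging.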

For debugging, going through the process of Quick Cuts, External Factors, Internal Factors and User Flow Analysis will most likely expose the source of the issue.

Define Goals and Metrics

In Execution / Analytics interviews, you’ll be assessed on how you identify and prioritize opportunities and execute against them to build products. The Goals and Metrics portion will focus on how you analyze a set of constraints and problems to come up with the right set of metrics to measure success.

I recommend the following process:

  1. Clarify the context
  2. List Metrics
  3. Summary
  4. Tradeoffs

Example: How would you set a goal for Facebook Comments?

Clarify

  1. Any specific aspect or category of Facebook, such as Watch, Newsfeed, Messenger, etc.?
  2. Relation to Facebook’s mission
  3. Significance of Comments

Comments — Why are they important:

  1. Enables Commenter to express themselves
  2. Enables Commenter to make a connection
  3. Provides additional context to post
  4. Rewards the Poster with engagement
  5. Drives virality
  6. Higher value than passive interactions such as Likes and Reactions

Comments Journey

  1. Someone shares something on Facebook
  2. Someone views it
  3. Someone makes a Comment
  4. Poster is rewarded / Post is higher value to network / Facebook is more valuable

List Metrics

As I think about the possible objectives, the first thing I want to do is list out some common product goals:

Engagement — Short-term value metric

  • Solves the user’s need
  • User is getting immediate value
  • They know how to use the product
  • Leading indicator of Retention

Retention — Medium-term value metric (Product-Market fit)

  • Signifies product-market fit
  • Leading indicator of Adoption

Adoption — Long-term metric (Growth)

Revenue — Monetization / Profitability

Next, I want to prioritize some metrics:

Adoption:

Total DAP (Daily Active People) of Commenters — the cohort of anyone who makes a Comment, measured on a daily basis

Engagement:

% of Comments / Posts

Retention:

% of DAP of Commenters on Day 1 who were also DAP of Commenters on Day 2
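
To make these candidates concrete, here is a minimal sketch computing all three from a hypothetical activity log (the user IDs, dates, and log shape are invented for the example; Engagement here is the Comments-to-Posts ratio):

    from datetime import date

    # Hypothetical activity log; each row is one post or one comment.
    activity = [
        {"user": "a", "action": "post", "day": date(2020, 11, 1)},
        {"user": "b", "action": "comment", "day": date(2020, 11, 1)},
        {"user": "c", "action": "comment", "day": date(2020, 11, 1)},
        {"user": "b", "action": "comment", "day": date(2020, 11, 2)},
        {"user": "d", "action": "post", "day": date(2020, 11, 2)},
    ]

    def commenters_on(d):
        return {r["user"] for r in activity if r["action"] == "comment" and r["day"] == d}

    day1, day2 = date(2020, 11, 1), date(2020, 11, 2)

    adoption = len(commenters_on(day1))                    # DAP of Commenters on Day 1
    posts = sum(1 for r in activity if r["action"] == "post")
    comments = sum(1 for r in activity if r["action"] == "comment")
    engagement = comments / posts                          # Comments / Posts
    retention = len(commenters_on(day1) & commenters_on(day2)) / len(commenters_on(day1))

    print(f"Adoption: {adoption}  Engagement: {engagement:.2f}  Retention: {retention:.0%}")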

Out of all these metrics I’ll choose Engagement. Here’s why — what makes a post successful / viral / valuable?

  • Views
  • Shares
  • Likes
  • Reactions
  • Comments

Engagement matters because it is the best indicator of value.

Retention matters too, but it is more of a long-term measurement.

Adoption is a longer-term concern and can be addressed later.

Revenue can also be addressed later.

Summarize

North Star Metric: % of Posts with a Comment

  • Hitting this target delivers the highest value for a Facebook Post

Guardrail Metrics

Gameability

  • It’s possible to send Notifications to drive Comments up, but too many Notifications can degrade the overall Facebook experience. It’s critical to protect against over-notifying users.

Quality

  • Too many Comments can prevent would-be Posters from posting, and thus drive lower value engagement

Cannibalization

  • More Comments on posts could impact other Facebook properties such as Watch or Gaming

De-emphasis of other metrics:

  • Guard against not carefully evaluating or assessing Adoption and Retention while focusing on the North Star
