Creating an effective product analytics plan

Mehul Mathur
8 min read · Aug 3, 2021


Photo by Luke Chesser on Unsplash

In the Product Management discipline, the phrase "data-driven decision making" is used liberally, with every PM claiming to be a true evangelist of the practice. And while companies keep reiterating their North Star metric for teams to focus on, PMs often struggle to break it down into the metrics that really make it tick, and then to optimize those.

Prevalent tools like Google Analytics and Mixpanel do provide innovative ways to capture and analyse user behaviour, but their default or standard tracking metrics tell only a fraction of the story.

We may often find ourselves over-relying on the vanity metrics these tools track by default, supplemented with only basic customized event tracking. Even the most popular frameworks, like Pirate Metrics or the HEART framework, need context-setting to provide actionable insights.

I have found that these often define the Whats of the situation, but to answer whether our product is adding value to users' lives we need to dig deeper, and hence we tend to hypothesize the Whys behind the Whats.

This juncture is critical because the Why deduction can be riddled with the PM's own biases. One may find oneself jumping directly into solutioning if the metrics seem problematic, and that only if we even have a baseline or benchmark to begin with, or target figures set by internal stakeholders. Due to a lack of awareness of industry standards for certain metrics, we often accept our own figures as the guiding benchmark.

One of my mentors shared an HBR article that hits the spot in making a case for customer-centric performance indicators (CPIs), which are often the underlying levers of the key performance indicators (KPIs) by which we measure our company and product growth.

Metrics must reflect not only what we have achieved but also what we have helped our customers achieve. If you are running a content website, maybe time spent on the page or its bounce rate isn't enough. Maybe you need to know whether, after spending five minutes reading, your user found the content interesting, was able to understand or retain what they read, or went on to explore the topic further.

If your business case warrants interacting with customers on a regular basis, as with a B2B application, you could not be luckier when it comes to pinpointing customers' pains and needs, and gauging which stage you are at in your journey towards product/market fit. For mass consumer products this becomes far more difficult, because your scope for customer interaction is limited and the only way to measure customer success is through well-represented product metrics: CPIs + KPIs, not just KPIs.

But we must exercise caution, because each data point you collect adds cost to the system and adds to its complexity. Hence, to surface deeper insights, we need a lean, effective tracking plan that accurately captures these CPIs and KPIs. Below is the approach I follow to simplify and structure my brainstorming in an iterative manner:

Product Analytics planning approach

Asking the right questions, or building hypotheses

I have found that asking myself basic questions around product success or possible pain points really helps in finding the right metrics to focus upon. Often these questions cannot be answered by the default tracking features of prevalent analytics tools, and we may have to leverage their advanced custom tracking features.

The first step towards planning advanced tracking requirements is asking which behavioural aspects you consider vital and truly representative of your customers' success or their pain points. These behavioural aspects are your typical big-picture, simply worded questions that you ask yourself or your internal stakeholders for direction.

Some examples:

Which type of content makes us appear trustworthy?

Are our product images interesting enough to stimulate conversions? And which type of images are more effective?

Are our sellers finding value in the seller application?

Deriving explainer metrics for each question / hypothesis

Each of these questions can be answered by a combination of Focus, L1, and L2 metrics, as Mixpanel famously likes to call them.

Which type of content makes us appear trustworthy?

This may be answered by the following metrics or methods:

  • Content consumption levels: reading or engaging
  • Traffic growth rate
  • Repeat visits
  • Shareability
  • Conversions
  • Growth of other content types driven by this one
  • Social listening
    And more.

Are our product images interesting enough to stimulate conversions? And which type of images are more effective?

This can be answered by the following metrics or methods:

  • Time spent viewing each image type
  • Testing correlation (or causation, via controlled experiments) between image engagement and conversions, as sketched after this list
  • Following immediate conversions
  • Steps before conversion
    And more.
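
As an illustration of the correlation check above, here is a minimal sketch that compares conversion rates for sessions that engaged with a given image type against those that did not. The SessionRecord shape and its fields (imageType, engaged, converted) are hypothetical, not the export format of any particular analytics tool.

```typescript
// Hypothetical sketch: compare conversion rates for sessions that engaged
// with a given image type vs. those that did not.
interface SessionRecord {
  sessionId: string;
  imageType: "lifestyle" | "studio" | "user_generated";
  engaged: boolean;   // did the user zoom, swipe, or dwell on the image?
  converted: boolean; // did the session end in a purchase?
}

function conversionRate(sessions: SessionRecord[]): number {
  if (sessions.length === 0) return 0;
  return sessions.filter(s => s.converted).length / sessions.length;
}

// Naive lift of "engaged with image" over "did not engage", per image type.
function engagementLiftByImageType(sessions: SessionRecord[]): Record<string, number> {
  const lift: Record<string, number> = {};
  const types = Array.from(new Set(sessions.map(s => s.imageType)));
  for (const type of types) {
    const ofType = sessions.filter(s => s.imageType === type);
    const engagedRate = conversionRate(ofType.filter(s => s.engaged));
    const notEngagedRate = conversionRate(ofType.filter(s => !s.engaged));
    lift[type] = notEngagedRate > 0 ? engagedRate / notEngagedRate : engagedRate;
  }
  return lift;
}
```

A lift well above 1 only indicates correlation; establishing causation would still need a controlled experiment such as an A/B test on the image type.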

Are our sellers finding value in the seller application?

This can be answered by the following metrics or methods:

  • Product stickiness (DAU vs MAU or DAU vs WAU or WAU vs MAU)
  • Inventory turnover
  • Inventory growth rate
  • Sales growth rate
  • Margins growth rate
  • Category additions
    And more.

We may often find ourselves painting a partial picture with readily available metrics such as users, sessions, pageviews, DAUs, WAUs, MAUs, bounces, exits, and session duration. But it is clear from the examples above that these high-level, readily available L1 metrics alone are usually not enough to monitor your product's health. Your seller partner application may have a great stickiness figure, i.e. sellers are visiting the application frequently, which may lead us to think we are doing a great job. But if a seller is neither adding more inventory nor turning it over frequently, then they are probably not deriving greater value and not being empowered to do more.
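
To make the stickiness figure concrete, here is a minimal sketch of how a DAU/MAU-style ratio could be computed from a raw event log. The AppEvent shape and its fields are assumptions for illustration, not the schema of any specific analytics tool.

```typescript
// Minimal sketch: computing DAU/MAU stickiness from a raw event log.
// The AppEvent shape is a hypothetical schema, not any tool's export format.
interface AppEvent {
  userId: string;
  timestamp: Date; // when the event was logged
}

// Count distinct users active within [start, end).
function activeUsers(events: AppEvent[], start: Date, end: Date): number {
  const users = new Set(
    events
      .filter(e => e.timestamp >= start && e.timestamp < end)
      .map(e => e.userId)
  );
  return users.size;
}

// Average DAU over the trailing 30 days, divided by MAU over the same window.
function dauMauStickiness(events: AppEvent[], asOf: Date): number {
  const dayMs = 24 * 60 * 60 * 1000;
  const windowStart = new Date(asOf.getTime() - 30 * dayMs);

  let dauSum = 0;
  for (let d = 0; d < 30; d++) {
    const dayStart = new Date(windowStart.getTime() + d * dayMs);
    const dayEnd = new Date(dayStart.getTime() + dayMs);
    dauSum += activeUsers(events, dayStart, dayEnd);
  }
  const avgDau = dauSum / 30;
  const mau = activeUsers(events, windowStart, asOf);
  return mau > 0 ? avgDau / mau : 0;
}
```

A high ratio only says sellers show up often; pairing it with inventory additions and turnover, as argued above, is what tells you whether they are actually deriving value.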

This leads us to the next part of the problem: the need for more detailed, product-specific tracking requirements. Many people warn against over-tracking because of technical complexity, redundancy, and tool limitations. In my opinion, your analytics requirements can be termed over-tracking only when the tracked behaviour isn't related to core product functionality or features, and when there is no underlying question to answer or hypothesis to prove or disprove.

Auditing existing implementation and assessing gaps

Once we understand what we need to track to answer our questions, we should check whether the tools' default tracking schemes, or our existing tracking, can serve the purpose, and then fill the gap between what is required and what already exists. For L1 or L2 metrics we will often need help from our internal technology team, because these are complex and specific to our use case, and the auto-tracking of third-party tools may not capture them on its own. A simple way to run this audit is sketched below.
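
One lightweight way to run such an audit, sketched here purely for illustration, is to keep the required events from the previous step and the currently implemented events in one structure and diff them. All event names and fields below are hypothetical.

```typescript
// Hypothetical tracking-plan audit: diff the events we need against the events
// we already send. All event names below are illustrative examples.
interface PlannedEvent {
  name: string;         // e.g. "Inventory Item Added"
  question: string;     // the hypothesis or question it helps answer
  properties: string[]; // parameters we expect to capture
}

const requiredEvents: PlannedEvent[] = [
  { name: "Inventory Item Added", question: "Are sellers finding value?", properties: ["category", "quantity"] },
  { name: "Order Fulfilled",      question: "Are sellers finding value?", properties: ["orderValue", "marginPct"] },
  { name: "Video Played",         question: "Which content builds trust?", properties: ["contentType", "source"] },
];

// Event names already implemented (e.g. listed from the current GTM container or codebase).
const implementedEventNames = new Set(["Video Played", "Page Viewed"]);

const gaps = requiredEvents.filter(e => !implementedEventNames.has(e.name));
console.log("Events still to be implemented:", gaps.map(e => e.name));
```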

Creating a tracking plan and choosing the right tool

There are some guidelines I follow while making requirements for analytics or a tracking plan:

  • Use a tag management tool to log tracking to one or more analytics tools. Otherwise, if we are using both Google Analytics and Mixpanel, each with its own payload format, implementing and testing for each tool separately becomes a nightmare for the team. I recommend pushing events to the dataLayer (for web pages) or to Firebase (for mobile applications) and letting Google Tag Manager forward them to any or all analytics tools in a plug-and-play manner (see the sketch after this list).
  • Funnelize sections or components individually to monitor their own exposure vs their own engagement levels. For example, if a widget on your main landing page sits significantly far down the page, its engagement shouldn't be compared against the number of visits to the page but against the number of users who actually viewed the widget.
  • Maintain consistency in your naming conventions and variable schemes. This helps not just readability; consistent schemes and naming can also help automate reporting. For example, decide whether you want a snake_case scheme (added_to_cart) or a more readable form (Added to Cart), and use it in all instances.
  • Denote event names explicitly, in their true, past-participle form. Engagement isn't always defined by clicks. For an app like Bumble, where core actions are performed by swiping left, right, up, or down, clicks don't hold much significance, and this must be reflected in tracking conventions as well: pass event names in their true form, such as Swiped Right or Swiped Up. Similarly, if the user starts or pauses a video, rather than naming the event Button Clicked, name it Video Played or Video Paused. There can be other ways to play or pause a video besides the on-screen buttons, for example the earphone controls; in that case, capture the trigger as an event parameter rather than in the event name itself.
Defining event names in their explicit form
  • Avoid re-sending auto-collected information. For example, the page URL or the user's device is auto-collected by GA with each hit and need not be repeated in event fields. This matters in Google Analytics because the Unique Events count is the count of unique Category + Action + Label combinations per user session. Redundant information in any of those three fields inflates the unique event count whenever the added variable can take multiple values, so you lose the ability to differentiate between total events and unique events. Beyond that, it is a good idea to keep the number of distinct event names as small as possible: some tools limit distinct events (Firebase, for example, allows 500), and fewer names make analysis easier, with less aggregation, pivoting, or data segregation.
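
To tie a few of these guidelines together, here is a minimal sketch of what a single, explicitly named event push could look like on a web page, assuming the dataLayer + Google Tag Manager setup recommended above. The event name and parameter names are illustrative choices, not a required schema; GTM would be configured separately to forward the push to GA, Mixpanel, or any other tool.

```typescript
// Minimal sketch of a dataLayer push, assuming Google Tag Manager is installed on the page.
// Event and parameter names are illustrative; GTM is configured separately to fan this
// single push out to GA, Mixpanel, or any other tool.
type DataLayerEvent = Record<string, unknown>;

const dataLayer: DataLayerEvent[] =
  ((window as any).dataLayer = (window as any).dataLayer || []);

function trackVideoPlayed(
  contentId: string,
  source: "on_screen_button" | "earphone_control"
): void {
  dataLayer.push({
    event: "Video Played",  // explicit, past-participle event name, not "Button Clicked"
    content_id: contentId,
    play_source: source,    // how playback was triggered lives in a parameter, not the name
  });
}

// The same event name is used regardless of how playback was triggered.
trackVideoPlayed("article-42-intro", "on_screen_button");
trackVideoPlayed("article-42-intro", "earphone_control");
```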

Our tracking plan and the choice of tool go hand in hand while crafting requirements. You may need a Google Analytics-style format with a top-down classification of Category > Action > Label for some events, and a single-name event for others. In some cases you may just need a heatmap tool to see a page's strongest and weakest spots. Hence, the choice of tool depends on:

  • How you intend to read the data
  • The level of complexity in your scenarios and data collection
  • What kind of visualization capabilities you need
  • How scalable your tracking plan needs to be, and which tools have usage limits that would prevent scaling
  • Whether the tools are free, paid, or freemium
  • Whether you need journey-based engagement capabilities (CleverTap, MoEngage) like sending push notifications, emails, or SMS

I have found the above five-step approach to be really effective in setting up team-level guidelines for implementing advanced product analytics. It has also helped maintain consistency in conventions across products and made teams sensitive to digging deeper to ensure product success, while keeping a check on complexity.
