How to Set Up Your Product Analytics

Eric H. Kim
Published in Practice Product
8 min read · Jun 11, 2019

Understanding who your users are and how they engage with your product is critical to building a market-leading product. To gain these insights, you may want to leverage third-party product analytics (a.k.a., product intelligence) tools to collect and analyze data to inform decisions. This article covers some best practices for planning, implementing, and improving your product analytics.

Create an Intuitive Experience

Start by understanding the needs, use cases, and key metrics of your organization, as well as the overall insight infrastructure being built, so you can plan effectively. Lock down expectations before you start. You have already failed if a stakeholder (e.g., an executive) expects a capability that won’t be delivered (e.g., financial reporting that requires a BI tool).

Once everyone is on the same page about what Product Analytics is and is not, focus on creating an immaculate end-user experience. In my experience, an analytics user’s confidence level is either 99% or 0%. So attention to detail and setting expectations on capabilities are critical.

You want to make sure that the experience of querying, analyzing, and visualizing data is easy and understandable. Intimately understand the UI/UX of the analytics tool you are implementing so you can make the right decisions when planning. An excellent experience has:

  • Meaningful metrics: It should be clear what exactly is being tracked, why, and who performed an action (with context)
  • Trustworthy data: All data must be accurate and precise — achieved through iterative instrumentation and validation (using data from other sources)
  • No noise: Show only meaningful, accurate metrics and actively eliminate cruft — misleading data, test data, duplicate events and properties, and deprecated items
  • Meticulous and authoritative data governance: This is an ongoing effort: 1) keep documentation clear and up-to-date, 2) name events and properties in a natural, self-evident manner, and 3) use consistent patterns and taxonomies

Be Methodical

The key to effective data governance is planning a simple and scalable information architecture. Design decisions should be driven by the nature of your business, product, analytics users, and use cases. Product Analytics tools are designed to offer the best experience around analyzing events and the users that generated them. You don’t want to find yourself clicking 12 buttons or firing up Excel to see basic info. Here are some tips to make sure it’s “not too tight, not too loose”:

  • Don’t over-abstract, focus on what’s natural: When deciding whether to use multiple events or a single event with multiple properties, consider the importance of the event in your experience (e.g., purchase vs. page visit) and how frequently it will be queried. The more important and more frequently queried, the more likely you will want to treat it as a separate event so you have more segmentation control during analysis and reporting. For example, if you are managing a messaging app, you might not want a single event for the core action (“user sent item” with an item_type property of message | emoji | meme | sticker | money). It might make more sense to have multiple, separate events so you don’t have to parse millions/billions of the same event (see the sketch after this list).
  • Don’t pre-optimize: Pre-optimizations occur when you make assumptions about what features will be subsumed into the core experience. Focus on designing around the most important and frequent events in your current experience (not experimental features or where you see the product in the future). It’s also tempting to track everything to be safe, but doing so typically leads to the costs associated with complexity and debt.
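
To make the trade-off concrete, here is a minimal sketch of the two approaches. The track helper and property names are hypothetical stand-ins for whatever SDK and taxonomy you actually use.

```typescript
// Hypothetical track() helper standing in for your analytics SDK's call.
type EventProperties = Record<string, string | number | boolean>;
declare function track(eventName: string, properties?: EventProperties): void;

// Option A: one abstract event, differentiated by a property.
// Reasonable for lower-importance, less frequently queried actions.
track("user sent item", { item_type: "sticker", conversation_id: "abc123" });

// Option B: separate events for a core, heavily queried action.
// Easier to segment and report on without first parsing a property
// across millions/billions of rows.
track("subscriber sent message", { conversation_id: "abc123" });
track("subscriber sent sticker", { conversation_id: "abc123", sticker_pack: "holidays" });
```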

Naming of event names, property names, and property values should be as natural, clear, and consistently structured as possible:

  • Copy should read like a story: Base naming on a user and their actions, not the characteristics of the product. Example of good — centered around a user and action: “visitor selected desired location as new york”. Example of bad — centered around the product’s UI: “new york selected in dropdown menu”
  • Event names should be consistently structured: [user] + [action performed] + [target of action] + [context] OR [user] + [state changed] + [context]. The action performed is a transitive verb and the target of action is the direct object (e.g., subscriber played song). A state change is an intransitive verb (e.g., trial user deactivated). Example of good: “prospect purchased subscription on upsell interstitial 3”. Example of bad: “page visit”
  • Event names should only be as long as necessary to be clear: Example of good: “candidate registered”. Example of bad: “candidate registration successful”
  • Be intentional with every piece of naming: 1) Clear user: Partner with engineering leadership to sync user lifecycle states and personas (e.g., visitor, prospect, subscriber, rider, driver, shopper, clinician) and always specify the user in your event name: “driver did x” vs. “rider did y”. 2) Action performed: Should be a descriptive, transitive (or intransitive) verb in the past tense. For example, “submitted” may be more appropriate than “picked.” Also, your action should not imply a specific input method (e.g., use “selected” or “pressed” rather than “clicked” or “tapped,” because the latter two suggest a mouse click on desktop web or a tap on a mobile device).
  • Be precise with when an event is fired: To me, the following events are not the same: A) “tax filer selected submit button” (implies that a user successfully hit the submit button — that’s all) vs. B) “tax filer submitted income tax return” (implies that the user successfully submitted and that there were no technical issues such as API or network failures). See the sketch after this list.
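
Below is a minimal sketch of that firing-precision point. The track and submitReturn functions are hypothetical, but the pattern (one event on the press, a second event only after confirmed success) is the distinction being described.

```typescript
// Hypothetical track() helper and submitReturn() API call; names are illustrative.
declare function track(eventName: string, properties?: Record<string, string>): void;
declare function submitReturn(payload: unknown): Promise<void>;

async function onSubmitPressed(payload: unknown): Promise<void> {
  // Fires as soon as the button is pressed, regardless of outcome.
  track("tax filer selected submit button", { screen: "review_and_submit" });

  try {
    await submitReturn(payload);
    // Fires only after the back-end confirms success, which is why the two
    // events can legitimately diverge when there are API or network failures.
    track("tax filer submitted income tax return", { screen: "review_and_submit" });
  } catch {
    track("tax filer failed to submit income tax return", { screen: "review_and_submit" });
  }
}
```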

Give tracked events rich context by including user data. Define (and iterate on) taxonomies describing meaningful user qualities. (Don’t forget to talk to Legal about PII issues.) Here are some examples, with a sketch after the list:

  • Goals: dater_goals; desired_salary
  • Preferences: shopper_primary_research_device: ipad
  • States/status: user_type: driver, rider; user_activation_status: registered, completed_profile, added_seventh_friend, sent_first_message
  • Demographics/psychographics: age; gender; race_and_ethnicity; favorite_youtube_channel
  • Historic engagement: cumulative_connection_requests_received; first_invite_sent_date
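
These qualities typically end up as user properties attached via whatever identify/set-properties call your tool exposes. The function and values below are assumptions for illustration only, and anything personally identifiable should be cleared with Legal first.

```typescript
// Hypothetical setUserProperties() call; most tools expose something similar,
// but check your SDK's docs before relying on this exact shape.
declare function setUserProperties(properties: Record<string, string | number>): void;

setUserProperties({
  user_type: "driver",
  user_activation_status: "completed_profile",
  shopper_primary_research_device: "ipad",
  cumulative_connection_requests_received: 42,
  first_invite_sent_date: "2019-05-02",
});
```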

Remember, this is a collaborative process. Get feedback and buy-in early from engineers, analysts, marketers, customer support, and executives so everyone is speaking the same language. Vigilantly update, educate, and evangelize others so that your effort isn’t academic.

Document

To create canonical documentation, I typically:

  • Get everyone to buy into a process before starting
  • Create and maintain a single Google spreadsheet which includes key details about events, such as name, trigger, status, associated properties, and possible property values (including the data structure); a sketch of one entry follows this list
  • (optional) If necessary, I use visual documentation (sometimes coupled with an App Map) to show events and how they are triggered
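
For illustration only, here is one way a single spreadsheet entry might look if mirrored as a typed record; the field names are assumptions, not a prescribed schema.

```typescript
// Mirrors the spreadsheet columns described above; names are illustrative.
interface TrackingPlanEntry {
  eventName: string;                     // e.g., "candidate registered"
  trigger: string;                       // when and where the event fires
  status: "planned" | "implemented" | "deprecated";
  properties: Record<string, string[]>;  // property name -> allowed values
}

const exampleEntry: TrackingPlanEntry = {
  eventName: "candidate registered",
  trigger: "Fires after the registration API call returns success",
  status: "implemented",
  properties: { referral_source: ["organic", "paid", "invite"] },
};
```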

However you approach it, keep all info up-to-date and add an “updated” date at the top of the document.

Implement

Process

Incorporate analytics into your product process at every step, including adding it to the scope of a user story. You can add a section below Acceptance Criteria that defines events (and/or create individual tasks per event on the ticket), as in the example below.
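
A purely illustrative example of what that section of a ticket might look like:

```
Acceptance Criteria
  - ...

Analytics Events
  - "prospect selected purchase button" (fires on press, regardless of outcome)
  - "prospect purchased subscription on upsell interstitial 3" (fires only after
    the payment API confirms success)
```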

Priority and implementation status are tracked in the Google spreadsheet (for product), whereas the implementation request is documented in a story (for dev). I collaborate with partners and stakeholders to prioritize key metrics (e.g., on a scale of High/Medium/Low), then encourage a quick pass through the entire user experience to implement High priority events. Then the team does another rapid iteration through the experience to cover Medium priorities. A common mistake is to get bogged down in tracking everything in a single part of the experience. Try to stay disciplined with releasing small batches of the highest priority events, then do fast follows. Your analytics is only as good as your analytics coverage, and a key goal is to eliminate blind spots.

If you don’t include analytics coverage with feature releases, you are accumulating product debt. Yes, there are exceptions, such as defensive releases like bug fixes. Product work begins, not ends, when a feature is released. This parallels the development philosophy that tech debt accumulates whenever a feature is released without test coverage. Just like coding, a common mistake is to treat analytics as “one and done.” Quality user insights are the byproduct of an ongoing, methodical process, not heroic, one-time efforts.

Coding

How implementation is batched should be driven by the dev. There is a case for tracking a few events with each feature release or creating a separate story to knock out many events at once. The decision is usually dictated by minimizing context switching (e.g., the dev is already touching a specific area of the code base).

To support devs, read through a Product Analytics tool’s documentation. It’ll help you understand the idiosyncrasies of the SDK and help you troubleshoot when there are discrepancies between expected and actual behavior. For example, a tool may claim to handle casing or spaces in names or property values, but actually doesn’t. A good practice is to invest in a class that enforces consistent implementation (e.g., lower casing all copy in-code), as in the sketch below.
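
Here is a minimal sketch of such a wrapper. The vendorTrack function is a stand-in for your tool's actual track call, and lower-casing plus whitespace collapsing are just examples of the rules you might enforce.

```typescript
// Stand-in for the vendor SDK's track call.
declare function vendorTrack(eventName: string, properties?: Record<string, string>): void;

class Analytics {
  // Lower-case and collapse whitespace so "Candidate  Registered " and
  // "candidate registered" never become two different events.
  private normalize(value: string): string {
    return value.trim().toLowerCase().replace(/\s+/g, " ");
  }

  track(eventName: string, properties: Record<string, string> = {}): void {
    const clean: Record<string, string> = {};
    for (const [key, value] of Object.entries(properties)) {
      clean[this.normalize(key)] = this.normalize(value);
    }
    vendorTrack(this.normalize(eventName), clean);
  }
}

export const analytics = new Analytics();
```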

It might be tempting to track events on the back-end (for various reasons), but many tools are designed to track user events on the front-end so that user-related metadata can be automatically passed. Typically, you will want to track user interactions with your product on the front-end and transactions / state changes on the back-end.

Tracking the absolute values of user actions will enable other desired metric types. Example:

  • Absolute Values: Track unique Registrants (#) and Visitors (#).
  • Relative: This allows the user to see a calculated metric such as Registration Rate (%) = Registrants / Visitors.
  • Financial: It also enables the calculation of Cost Per Registration = Total Spend ($) / Registrants

It’s better to calculate derivative metrics in the Analysis Layer, not in-code, for flexibility.
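
As a sketch of what that separation looks like, track only the absolute counts in-code and derive the rates downstream; the shapes below are assumptions about how the counts come back from your analysis layer.

```typescript
// Illustrative query results from the analysis layer (not SDK calls).
interface Counts {
  visitors: number;
  registrants: number;
  totalSpend: number; // in dollars
}

function derivedMetrics({ visitors, registrants, totalSpend }: Counts) {
  return {
    registrationRate: visitors > 0 ? registrants / visitors : 0,          // %
    costPerRegistration: registrants > 0 ? totalSpend / registrants : 0,  // $
  };
}

// derivedMetrics({ visitors: 10_000, registrants: 1_200, totalSpend: 3_600 })
// -> { registrationRate: 0.12, costPerRegistration: 3 }
```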

Testing

Process discipline frequently breaks down when it comes to QA. Test analytics for accuracy and precision pre-release in a dev environment, then test again post-release in production. Missing a step can allow data quality issues to sneak through and cause confidence issues downstream (remember that confidence is 99% or 0%).

The dev should test thoroughly prior to acceptance testing, and the product owner should test again pre- and post-release. If you really want to go the extra mile, create a reminder to audit the data again after data has accumulated to sanity test for accuracy. You should triangulate accuracy by comparing to other data sources (e.g., production DB / BI tools, qualitative data, gut check).
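
One way to make that triangulation habitual is a small scheduled check; everything below (the fetchers, table name, and tolerance) is hypothetical.

```typescript
// Hypothetical fetchers: one against the analytics tool, one against the production DB.
declare function countAnalyticsEvents(eventName: string, date: string): Promise<number>;
declare function countDbRows(table: string, date: string): Promise<number>;

async function auditRegistrations(date: string, tolerance = 0.05): Promise<void> {
  const [tracked, actual] = await Promise.all([
    countAnalyticsEvents("candidate registered", date),
    countDbRows("registrations", date),
  ]);
  const drift = actual === 0 ? 0 : Math.abs(tracked - actual) / actual;
  if (drift > tolerance) {
    console.warn(`Registration drift on ${date}: analytics=${tracked}, db=${actual}`);
  }
}
```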

Iterate

Continuously refining instrumentation is the most important (and most neglected) habit. The value of good data compounds, so treat Product Analytics as an ongoing investment. Over time, your team will discover which metric variations are most meaningful and how to tweak for accuracy and precision.

Start simple, then get fancy. Going back to the example of tracking Registrations:

  • Start by tracking unique Registrations
  • Iterate by omitting spam
  • Iterate by omitting dev testing activity
  • Iterate by omitting accidental duplicates created by users
  • Iterate by…
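
A sketch of what those iterations might look like downstream, assuming an illustrative event shape with spam and internal-tester flags:

```typescript
// Illustrative shape of a registration event; the fields are assumptions.
interface RegistrationEvent {
  userId: string;
  isSpam: boolean;
  isInternalTester: boolean;
}

// Iteration 1: unique registrations only (also absorbs accidental duplicates).
const uniqueRegistrations = (events: RegistrationEvent[]): number =>
  new Set(events.map((e) => e.userId)).size;

// Later iterations: layer in exclusions as you learn where the noise comes from.
const cleanRegistrations = (events: RegistrationEvent[]): number =>
  uniqueRegistrations(events.filter((e) => !e.isSpam && !e.isInternalTester));
```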

Evangelize!

If you want to encourage a more data-driven culture, hustle to get people to actually use the tool. Make the experience compelling and easy by setting up valuable dashboards / charts / analyses and follow up with individuals across your organization. I typically create space for curated key metrics (“official” organizational goals and metrics) but also leave plenty of space for teams and individuals to create their own analyses.

In addition to group tutorials, I find it worthwhile to have many one-on-one sessions to get people using the tool. In advance of the meeting, I ask the person to compile a list of learning objectives so that we cover real use cases.

Questions? Comments? Contact me if your team could use help improving your product analytics practice.


Eric H. Kim
Practice Product

Helping people become better product managers and leaders. Currently a head of product. Formerly a startup executive, product manager, and founder.