Developing Meaningful Automated Metrics

Empowering Development of Strong Products

Robert Brodell
Capital One Tech
8 min read · May 1, 2019


This post has been adapted from an earlier version which appeared on ProductCoalition.com

It took about thirty minutes and the entire development team to get our data together. We created seven slides of metrics pulled from six different systems, complete with graphs and footnotes to avoid confusion about what the numbers represented. None of us like these fire drills to answer leadership’s questions, but we usually benefit from the insights our metrics provide. Unfortunately, that was not the case today. Three minutes into the presentation, our executive derailed the meeting by asking: “We have metrics, right?” After a stunned silence, he spoke up again: “I see the numbers and graphs, but what are we measuring and why?” As silence took hold once more, I realized that our metrics were meaningless.

That was a low moment for the team. We had worked hard to pull and assemble data for our executive in hopes of identifying insights that told us something meaningful about our product. In hindsight, we had failed to analyze the data we produced, and we learned the hard way that data and metrics are not synonymous. Taken together, data and analysis constitute a meaningful metric.

Multiple failure points threaten the development of meaningful metrics. Sometimes our data overlooks relevant product and business information. Other times, we fail to generate data in a consistent manner, which undermines our ability to provide quality trending analysis. Most often, overused caveats and visualization obscure trending and analysis while calling the relevance of our data into question. Each scenario results in a well-intended metric becoming cumbersome to generate and incomprehensible outside the core team. Such bespoke metrics often get lost in nuance, lacking enough actionable information to justify their existence.

By contrast, meaningful metrics communicate relevant information in simple terms. They inspire action by telling us where our products succeed and where they fall short. Automating metric generation powers meaningful metrics by ensuring consistency in our underlying data. Automation also protects developers from scrambling to manually produce data.

We probably all prefer automating the generation of meaningful metrics over manually producing meaningless ones. Getting to our ideal state involves three steps:

  1. Identifying relevant data
  2. Simplifying trending and analysis
  3. Automating metric generation

We examine each step in depth through the rest of this article.

Identifying Relevant Data

Meaningful metrics start with meaningful data. Data focused on users’ interactions with our products is grounded in our business’ real-world performance, which makes this data inherently meaningful. Frameworks like Google’s HEART help us assess the relevance of different types of meaningful data. “HEART” stands for five categories of metrics Google teams consistently use:

  1. Happiness
  2. Engagement
  3. Adoption
  4. Retention
  5. Task Success

Happiness data tells us about user attitudes. If customer satisfaction is vital to business performance, happiness metrics are useful. Engagement data illustrates user behavior and level of participation with a product. Engagement data should inform metrics when increased product use drives success. Adoption data quantifies baseline user growth and helps us create metrics for products that need to expand market share. Retention data enumerates users who continue to use or abandon a product over a set period. Retention metrics are beneficial when gauging baseline use of a product over time.

Task success data quantifies product performance in terms of efficiency and effectiveness. Development teams working to enhance their product’s performance find this data more relatable than data in the other four categories. Consequently, most teams are swimming in task success data.

Meaningful data focuses on users’ interactions with our products, allowing us to easily apply metrics to product strategy.

Time-on-task may be the most common task success datapoint. It answers the question: where do users spend the most time on our platforms and applications? The average amount of time users spend doing a single action, like setting up an account, has minimal value on its own. But it plays a key role in enhancing other data. For example, time-on-task augments engagement data by quantifying user interactions with tasks that we know drive additional involvement with a product. Similar synergy exists between other task success data points and each remaining HEART category.
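To make this concrete, here is a minimal sketch of deriving time-on-task from raw interaction events. The event shape, the task name, and the matching logic are illustrative assumptions, not a description of any particular analytics system.

```python
# Hypothetical event records: each task interaction emits a start and an end event.
from datetime import datetime

events = [
    {"user": "u1", "task": "account_setup", "event": "start", "ts": datetime(2019, 4, 22, 9, 0)},
    {"user": "u1", "task": "account_setup", "event": "end",   "ts": datetime(2019, 4, 22, 9, 14)},
    {"user": "u2", "task": "account_setup", "event": "start", "ts": datetime(2019, 4, 22, 10, 5)},
    {"user": "u2", "task": "account_setup", "event": "end",   "ts": datetime(2019, 4, 22, 10, 21)},
]

def average_time_on_task(events, task):
    """Average minutes per user spent on one task, matched from start/end events."""
    starts = {e["user"]: e["ts"] for e in events if e["task"] == task and e["event"] == "start"}
    durations = [
        (e["ts"] - starts[e["user"]]).total_seconds() / 60
        for e in events
        if e["task"] == task and e["event"] == "end" and e["user"] in starts
    ]
    return sum(durations) / len(durations) if durations else None

print(average_time_on_task(events, "account_setup"))  # 15.0 minutes on average
```

On its own, this average is just another data point. It becomes meaningful when joined to adoption or engagement counts for the same task, as the metric stories in the next section show.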

When paired with another HEART metric, task success data aligns user behavior and performance. This business-relevant data is meaningful. It helps us understand how performance drives the bottom line and empowers us to prioritize product improvements that increase viability. Metrics that pair task success with another category also demonstrate the value of our development work by quantifying the performance of our products in a way that resonates with leadership.

The HEART framework helps identify meaningful data to feed metrics. However, HEART categories do not carry equal importance. In fact, some categories may have no bearing on individual products. Using the framework as a guardrail, not a checklist, helps us consider all available data.

Simplifying Trending and Analysis

Meaningful metrics distill relevant business data into understandable statements that drive action. A second framework can act as a guardrail to ensure we produce understandable and actionable metrics. Remember the fill-in-the-blank story books we played with as children? They included simple statements like:

(name 1) and (name 2) went to (verb) in the park.

Everyone understands a simple statement like this one. The same logic applies to metrics. For example, an adoption/task success metric story provides data, trending, and analysis needed to inform actionable steps in just one sentence:

(#) customers onboarded to the product last week, which is (up / down) (%) since our last release. They spent an average of (#) minutes onboarding which is (up / down) (%) since our last release. This means customers (may / may not like) our new (feature name).

Unfortunately, many metrics overuse caveats and visualization, which obscures trending and analysis past the point of intuitive understanding. The fill-in-the-blank story framework ensures our metrics speak for themselves. These simple metric stories tighten feedback loops by telling us good or bad news so appropriate action can follow. Using the story above, we can create actionable good news:

281 customers onboarded to the product last week which is up 12% since our last release. They spent an average of 14 minutes onboarding which is down 6% since our last release. This means customers may like our new onboarding user interface (UI) features.

This metric validates a recently released onboarding UI. It empowers the development team to advocate for their UI’s design while simultaneously ensuring the developers get credit with leadership. The metric empowers leadership to make decisions like expanding marketing for the new UI.

Using the story above we can also create actionable bad news:

281 customers onboarded to the product last week which is down 16% since our last release. They spent an average of 35 minutes onboarding which is up 12% since our last release. This means customers may dislike our new onboarding UI features.

The team may elect to rework onboarding UI after reviewing this metric. Importantly, if they develop and release an improved UI quickly, they could turn the situation around and resume higher levels of adoption. Our actionable metric identifies the need for this critical development effort.

In each example our product onboards 281 customers after releasing new UI features. That data alone does not drive action to improve adoption. Trending on percent change since prior release paired with time-on-task data enables us to identify actions to improve adoption. We have pulled a lot of data together to generate this metric, but the simple story structure makes the metric understandable. As an added benefit, using the same story structure to report this metric on a regular basis ensures everyone gets used to the structure and reinforces the story’s simplicity.
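As a rough illustration of how little machinery the story structure needs, the sketch below fills the adoption/task-success template from raw weekly counts. The prior-release baselines and the “may like” threshold are assumptions made for the example, not a prescribed rule.

```python
def pct_change(current, previous):
    """Whole-percent change versus the prior release."""
    return round((current - previous) / previous * 100)

def onboarding_story(onboarded, prev_onboarded, minutes, prev_minutes, feature):
    """Fill the adoption/task-success story template from raw weekly numbers."""
    adoption = pct_change(onboarded, prev_onboarded)
    task_time = pct_change(minutes, prev_minutes)
    # More onboarded customers and less time spent onboarding reads as good news.
    verdict = "may like" if adoption > 0 and task_time < 0 else "may not like"
    return (
        f"{onboarded} customers onboarded to the product last week, which is "
        f"{'up' if adoption >= 0 else 'down'} {abs(adoption)}% since our last release. "
        f"They spent an average of {minutes} minutes onboarding, which is "
        f"{'up' if task_time >= 0 else 'down'} {abs(task_time)}% since our last release. "
        f"This means customers {verdict} our new {feature}."
    )

# Hypothetical prior-release baselines chosen to illustrate the good-news case.
print(onboarding_story(281, 251, 14, 15, "onboarding UI features"))
```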

Automating Metric Generation

A suite of trackable metrics emerges as we create multiple metrics stories with manually populated data, trending, and analysis. This poses a tradeoff. Do we allocate resources to automating metrics generation, or immediately focus on addressing issues the manually generated metrics suite identifies? Many developers choose to focus on addressing issues because issue resolution has an immediate impact on product performance. This decision feels right because our metrics are supposed to drive action. Counterintuitively, it actually hurts the product.

In the last section, we examined an adoption metric that relies on raw data and percent change. Manually gathering data and calculating percentages on a weekly basis presents some risk. What happens when the developer generating this data goes on vacation and their backup pulls data from a different source? As scenarios like this multiply, our inability to consistently generate data undermines quality trending and analysis, saddling us with meaningless metrics. Automation can help us avoid this outcome.
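Here is a small sketch of what that consistency can look like in practice: a scheduled job that always pulls the week’s numbers from one agreed-upon source with one fixed query. The table, the column names, and the in-memory SQLite stand-in are assumptions for illustration, not a real pipeline.

```python
import sqlite3
from datetime import date, timedelta

def pull_weekly_onboarding(conn, week_ending):
    """Pull last week's onboarding figures from the single agreed-upon source."""
    start = (week_ending - timedelta(days=7)).isoformat()
    row = conn.execute(
        "SELECT COUNT(*), AVG(onboarding_minutes) FROM onboarding_events "
        "WHERE completed_at >= ? AND completed_at < ?",
        (start, week_ending.isoformat()),
    ).fetchone()
    return {"week_ending": week_ending.isoformat(),
            "onboarded": row[0],
            "avg_minutes": round(row[1], 1) if row[1] is not None else None}

# Tiny in-memory stand-in for the shared data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE onboarding_events (completed_at TEXT, onboarding_minutes REAL)")
conn.executemany("INSERT INTO onboarding_events VALUES (?, ?)",
                 [("2019-04-24", 12.0), ("2019-04-25", 16.0)])

# A scheduler (cron, Airflow, and so on) runs this every week, so the numbers
# never depend on which developer happens to be in the office.
print(pull_weekly_onboarding(conn, date(2019, 4, 29)))
```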

Not all metrics are automated equally. We face engineering challenges when pulling data from different systems into one place. Additionally, individual audiences often have divergent criteria for viewing metrics. Development teams require deep insights from metrics and prefer dashboards that facilitate interacting with underlying data for additional analysis. Conversely, leaders seek high-level insights from product metrics and prefer viewing them alongside metrics from other functions they manage.

In an ideal world, we would generate metrics in each audience’s desired view. But automating the same metric to appear in two different places creates an engineering burden. There are two potential workarounds to this conundrum:

  1. Influence leadership into accepting metrics presentation from the automated dashboard
  2. Automate metrics onto an interactive dashboard for the team that allows copying and pasting into presentations for leadership

Each approach insulates development teams and product managers from manually generating metrics in a new format.
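For the second workaround, one way to avoid maintaining two metric pipelines is to render a single metric record into both formats: a structured payload for the team’s interactive dashboard and a plain sentence that drops straight into a leadership deck. The field names below are illustrative assumptions.

```python
import json

def render_for_dashboard(metric):
    """Structured payload an interactive dashboard can expose for drill-down."""
    return json.dumps(metric, indent=2)

def render_for_slide(metric):
    """One-line summary suitable for copy-and-paste into a presentation."""
    return (f"{metric['onboarded']} customers onboarded last week "
            f"({metric['adoption_change']:+d}% vs. prior release), averaging "
            f"{metric['avg_minutes']} minutes onboarding ({metric['time_change']:+d}%).")

metric = {"onboarded": 281, "adoption_change": 12, "avg_minutes": 14, "time_change": -6}
print(render_for_dashboard(metric))  # for the team's dashboard
print(render_for_slide(metric))      # for the leadership deck
```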

The best dashboards provide a simple view of product metrics while also serving as a gateway to deeper layers of data.

Automation empowers us to consistently distill data for our metrics. It also gives us more time to analyze metrics by eliminating the time spent manually generating them. Successful automation ensures ease of use by presenting metrics to all audiences in their accepted formats. The benefits from consistency and ease of use enable us to trust metrics and maximize our time spent interacting with them.

Developing with Meaningful Metrics

Connecting business-relevant data, intuitive trends, and straightforward analysis with automation empowers us to identify issues and opportunities and quickly adjust strategy. Without these meaningful metrics, we risk inflating issues and missing opportunities, both of which impede our success. But the story does not end here. Meaningful metrics empower developers on two levels:

  1. The inherent business value of meaningful metrics brings clout to development priorities
  2. Intuitive trending and analysis, backed by automation, enable developers to leverage meaningful metrics at any point

In short, meaningful metrics boost our ability to develop high-performing products. Developing meaningful metrics takes time and effort. Teams must identify relevant data, simplify trending and analysis, and automate metric generation. Each step presents unique challenges and carries equal importance. Despite these challenges, the benefits of these efforts compound over time as the automated generation of meaningful metrics provides on-demand product insights and empowers teams to develop strong products.

DISCLOSURE STATEMENT: © 2019 Capital One. Opinions are those of the individual author. Unless noted otherwise in this post, Capital One is not affiliated with, nor endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are property of their respective owners.


Robert Brodell
Capital One Tech

I'm a product manager & freelance writer. My writing explores best practices, product mindset, and complex product challenges. RobertBrodell.com @RKBrodell