Using the tools of Design to define & quantify customer value

Matthew Godfrey · Published in Ingeniously Simple · 9 min read · Sep 29, 2021

Everyone wants their product to succeed in the market, and for customers to achieve their outcomes as a result of the products and services we design, build and deliver. As product folk, not only do we want to make sure the fruits of our efforts come to bear (customers use and value the features we build), but that they also perform commercially. In other words, we deliver on the outcomes that matter to both the business and our customers.

So when we talk about delivering value — everyone’s favourite topic — what do we mean? And how does delivering value translate into customers’ success?

One simple way to look at value is through the lens of Jobs to be Done. What is someone trying to achieve, personally or professionally, as a result of your product and service? Their underlying motivation for purchasing a product will be driven by this more aspirational goal (the JTBD), where the features and functionality we provide should be a means to them achieving that end.

Delivering on a promise

In a “traditional” perpetual licensing model, where there is a one-off transaction, the onus is on the success of a company’s go-to-market (GTM) capabilities and the customer’s early trial or proof-of-concept (POC) experience. Beyond the sale, however, there is less need for the scrutiny and rigour necessary to evidence whether customers go on to adopt and eventually succeed with the product.

Conversely, in a SaaS licensing model, there is no such thing as a one-time purchase. Your customer-vendor relationship shifts from a transaction to an ongoing relationship. As such, the onus shifts beyond that initial GTM approach to one of delivering value, frequently and continuously. Given the nature of subscriptions, customers inherently have more flexibility, so there is a greater need to account for whether they are able to succeed with the product.

Your job, as a SaaS vendor, is to shift beyond that one-off, transactional mindset and ensure customers are routinely succeeding with the product, delivering lifetime value. This goes beyond just understanding whether folks are using the product or not (although that’s a great place to start), to being able to measure and account for whether they are actually adopting and succeeding, realising the value behind their initial purchase.

A lack of value realisation (which is different from simply counting usage) is the biggest driver of customers churning and switching to an alternative. Remember, while SaaS makes it easier to acquire new customers (lower friction, lower upfront costs, lower commitment), it also makes it easier for customers to consider the alternatives if they do not feel they are getting the value they were promised and sold.

Pretty simple, right? If you don’t feel like you’re getting what you were sold, or the value you expected for the price you paid, you’ll start to look elsewhere.

So, how can the tools of Design help teams better define, quantify and measure success?

A framework for quantifying success

Product teams can use the elements of the research and design toolkit to take a research-based, experience-led and analytical approach to defining and measuring success; one that is applicable both to new products and to those already in the market.

Since there is a direct, causal relationship between a customer’s definition of value and the measures by which we account for their success, the first and most important job for product teams is to define value, in the context of their product.

Behavioural Insights Framework

The premise is that the components of this framework, when used together, allow teams to move from understanding the goals (JTBD) of their target customers (the value they seek) to measuring the key behaviours that serve as the leading indicators of their success (the value they realise).

Using the above illustration as a visual reference, I’ll briefly summarise each step of the approach, moving from an agreed definition of value to using product telemetry to account for customers’ success.

(1) Personas

Personas help us empathise with and understand the context, goals, behaviours and resulting needs of our target (or primary) audience. They represent the who behind our product decisions and the focus of our value propositions. Personas are defined on the basis of the behavioural “differences that make a difference” (citing Jeff Gothelf on personas) and serve as a foundational artefact for teams in shaping products and experiences.

(2) Jobs-to-be-Done (JTBD)

Jobs-to-be-Done (JTBD) defines what should be part of the core proposition of our products. Jobs represent the motivations and goals of our target audience (i.e. what they were trying to achieve when they decided to purchase our product). Jobs are also foundational, in so much as they are instrumental in defining what value looks like. This combination of personas and jobs informs what the product should do, for whom and in what context, in order for them to succeed.
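As a simple illustration of how a team might capture this pairing as a shared artefact (all names below are hypothetical, not from the article), a small structured definition can anchor the later steps in the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """The 'who' behind product decisions, defined by behavioural differences."""
    name: str
    context: str
    key_behaviours: list[str] = field(default_factory=list)

@dataclass
class JobToBeDone:
    """The 'what': the outcome the persona is hiring the product to deliver."""
    persona: Persona
    job: str
    desired_outcome: str

# Hypothetical primary persona and core JTBD for an illustrative analytics product.
team_lead = Persona(
    name="Data-curious team lead",
    context="Small team, reports to stakeholders weekly",
    key_behaviours=["prefers self-serve tools", "limited time to learn new software"],
)
core_jtbd = JobToBeDone(
    persona=team_lead,
    job="Turn raw team data into a shareable report",
    desired_outcome="Stakeholders get timely, trustworthy insight without manual effort",
)
```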

(3) Journey Mapping

User journeys help us chart the experiences that users go through as they interact with and consume our products. They help us understand user behaviour to better determine how someone might typically interact with an experience, in a given scenario. They help teams to decompose their goals (JTBD) into more granular models of tasks and flows. These serve as the architecture or blueprint for defining the steps involved in onboarding customers (Value Experience) and ensuring they can regularly extract value (Value Adoption).
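To make the link between journey maps and measurement concrete, here is a minimal sketch (with entirely hypothetical step and event names) of how a team might encode an onboarding journey as data, so that each step in the map corresponds to an observable product event:

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    """One step in a mapped journey, tied to the telemetry event that signals it."""
    name: str    # human-readable step from the journey map
    event: str   # telemetry event emitted when the step is completed
    phase: str   # "value_experience" (onboarding) or "value_adoption" (ongoing)

# Hypothetical journey for the primary persona's core JTBD.
ONBOARDING_JOURNEY = [
    JourneyStep("Create first project", "project_created", "value_experience"),
    JourneyStep("Connect a data source", "source_connected", "value_experience"),
    JourneyStep("Run first analysis", "analysis_completed", "value_experience"),
    JourneyStep("Share results with the team", "report_shared", "value_adoption"),
]
```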

(4) Behavioural Telemetry

Telemetry helps teams quantify and analyse when and how often users are interacting with our products. Whilst we can use qualitative methods (e.g. interviews) to understand why users do what they do, quantitative analysis enables us to determine what they actually do, when and how often. Knowing what we do about the target persona (1), what they are trying to get done (2) and how that maps to the flows and interactions in our products (3), teams can define what constitutes “successful usage”; inclusive of their first Aha! Moment, the average time to value (TTV) and ongoing engagement (adoption).
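As a rough sketch of what this might look like in practice (event names and data are illustrative, assuming a simple event log of user, event and timestamp), a team could derive each user’s Aha! Moment and time to value like so:

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical event log: (user_id, event_name, timestamp).
events = [
    ("u1", "signed_up",          datetime(2021, 9, 1, 9, 0)),
    ("u1", "project_created",    datetime(2021, 9, 1, 9, 30)),
    ("u1", "analysis_completed", datetime(2021, 9, 3, 14, 0)),  # u1's Aha! Moment
    ("u2", "signed_up",          datetime(2021, 9, 2, 11, 0)),
    ("u2", "project_created",    datetime(2021, 9, 2, 11, 45)),
]

AHA_EVENT = "analysis_completed"  # assumed Aha! Moment for the core JTBD

def time_to_value(user_id: str) -> Optional[timedelta]:
    """Time from sign-up to the first Aha! event, or None if not yet reached."""
    user_events = [(e, t) for (u, e, t) in events if u == user_id]
    signup = min(t for (e, t) in user_events if e == "signed_up")
    aha = [t for (e, t) in user_events if e == AHA_EVENT]
    return (min(aha) - signup) if aha else None

print(time_to_value("u1"))  # 2 days, 5:00:00
print(time_to_value("u2"))  # None: signed up, but no Aha! Moment yet
```

Averaging these per-user durations across a cohort gives the team its time to value (TTV), and counting how often users repeat the Aha! event over time gives a first read on adoption.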

(5) Product Experience

I’ll reserve judgement in this article on the right measures/tools for tracking experience and satisfaction, but needless to say, some measurement of the experience is a necessary and complementary part of the overall picture of success:

  • Are customers doing, and able to do, what they intended with the product?
  • Do they have a good experience in the process?

As we know, having a great experience when consuming a product or service is a big driver in our decision to renew. Functionally, it might do what someone expects, but experientially, if they don’t want or like using it (or have reservations about their team using it) then there is a much higher probability they won’t renew. Measuring the experience, alongside successful usage, provides something of an early warning signal to product teams.
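One way to operationalise that early warning signal (the thresholds and scale below are purely illustrative assumptions) is to combine the behavioural picture with an experience score, for example from an in-product survey:

```python
# Illustrative health check combining behavioural adoption with an experience score.
def account_health(reached_aha: bool, weekly_active_ratio: float,
                   experience_score: float) -> str:
    """Flag accounts that may be at risk of non-renewal. Thresholds are assumptions."""
    if not reached_aha:
        return "at risk: value not yet realised"
    if weekly_active_ratio < 0.3:          # share of seats active each week
        return "at risk: adoption stalling"
    if experience_score < 3.5:             # e.g. on a 1-5 satisfaction scale
        return "at risk: using it, but not enjoying it"
    return "healthy"

print(account_health(reached_aha=True, weekly_active_ratio=0.6, experience_score=2.8))
# -> at risk: using it, but not enjoying it
```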

(6) Success Metric(s)

Lastly, and building on top of our journey-based behavioural telemetry (4), are the high-level measure(s) of success that indicate the overall health of your product. These measures seek to capture how often and to what degree customers are progressing and succeeding (based on your definition) with the product through those key phases of onboarding and adoption. In his great book Product-Led Onboarding, Ramli John refers to these as “Product Adoption Indicators” (PAIs). Others have referred to this as your product’s “North Star Metric”; a guiding metric that defines and captures the core value of your offering.

In Facebook’s case, their North Star Metric was:

“Get any individual to 7 friends in 10 days. That was our keystone… that helped ramp this product to a billion users.” — Chamath Palihapitiya (Former VP of User Growth, Facebook)
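By way of a sketch (the cohort data, milestone and window below are made up for illustration), a Product Adoption Indicator in that “milestone within a window” style might be computed as the share of a sign-up cohort reaching activation in time:

```python
from datetime import date

# Hypothetical cohort: sign-up date and the date the activation milestone
# (your equivalent of "7 friends") was reached, if ever.
cohort = [
    {"user": "u1", "signed_up": date(2021, 9, 1), "activated": date(2021, 9, 6)},
    {"user": "u2", "signed_up": date(2021, 9, 1), "activated": date(2021, 9, 20)},
    {"user": "u3", "signed_up": date(2021, 9, 2), "activated": None},
]

WINDOW_DAYS = 10  # "7 friends in 10 days" style window

def product_adoption_indicator(cohort, window_days=WINDOW_DAYS) -> float:
    """Share of the cohort reaching the activation milestone within the window."""
    activated = sum(
        1 for row in cohort
        if row["activated"] is not None
        and (row["activated"] - row["signed_up"]).days <= window_days
    )
    return activated / len(cohort)

print(product_adoption_indicator(cohort))  # 0.33...: only u1 activated in time
```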

Conflating usage and success

Having illustrated this as an ideal framework and approach, I want to talk briefly about the alternatives. One of the things I’ve experienced over time is the conflation of success and usage (or generalised usage).

Measuring usage, without a definition of what constitutes value for a given customer segment, can lead to false positives. The risk is we assume any type of usage — across what might be a broad product or service — will translate into a happy, successful customer. Yes, the fact that folks are using the product at all is certainly better than the alternative, but it’s risky to conflate this (a coarse and generalised definition of usage) with success.

On the face of it, general usage might look positive, but the big caveat and word of caution is that it may only be revealing part of the story:

  • Are customers frequenting part of the application and getting stuck? 🤷‍♀️
  • Is it taking far longer than average for them to onboard/get started? 🤷
  • Are they “using” the product but failing to achieve that Aha! Moment? 🤷‍♂️
  • Are they “using” the product but having a terrible experience? 👎

The goal, in my opinion, should be to use a framework (similar to that presented above) to get to a fine-grained and more specific definition of “successful usage”. That doesn’t mean measuring all the things. Quite the opposite. It means really narrowing in, as a team, on who the product is for, what they are trying to do and what that experience looks (and feels) like.
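To illustrate the difference (with hypothetical event names, reusing the core JTBD idea from earlier), a broad usage check and a narrower “successful usage” check can disagree about the very same user:

```python
# Events that together represent completing the primary persona's core JTBD.
CORE_JTBD_EVENTS = {"project_created", "source_connected", "analysis_completed"}

def used_at_all(user_events: set) -> bool:
    return len(user_events) > 0                 # any activity counts

def used_successfully(user_events: set) -> bool:
    return CORE_JTBD_EVENTS <= user_events      # completed the core flow

stuck_user = {"logged_in", "project_created", "settings_viewed"}
print(used_at_all(stuck_user))        # True: looks fine on a generic usage dashboard
print(used_successfully(stuck_user))  # False: never reached the core outcome
```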

Let’s assume for a moment you have a multi-faceted product that might serve a few different customer segments, each with a different JTBD. Longer-term and in an ideal situation you might look to track the experiences of each of these segments independently; each with a slightly different onboarding experience and definitions of success. But, being pragmatic and in the spirit of getting started with this approach, you might begin by just focusing on the Core JTBD of your primary persona.

I’d argue it’s better to measure fewer things and focus the team’s efforts around the core experience than to instrument every aspect of the product and then not be able to see the wood for the trees.

The other issue we experience with generalised usage (particularly in the absence of product journeys) is the lack of specificity. Without understanding where within the journey users are experiencing friction, it’s hard to turn usage data into anything actionable. We know people aren’t using the product, we know people are churning, but we don’t know where the problem is or why it’s a problem. Journey maps help us analyse onboarding and adoption (relative to customers’ goals) to identify where we might need to intervene or course correct.

Taking it a step further, the powerful combination of journeys and telemetry means the team can begin to explore a combination of automated and design-led approaches to address potential issues. Softer, behavioural nudges (emails, product notifications etc.) might be enough in some cases to move folks through the product as intended. In other situations, teams might reveal significant pains/blockers (technical, experiential or otherwise) that require some heavier lifting.
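As a rough sketch of how that might play out (the step names and counts are invented, and the threshold for choosing a nudge over a deeper fix is an assumption), journey-based telemetry lets the team see where the funnel leaks and pick an intervention accordingly:

```python
# Hypothetical counts of users completing each mapped journey step.
funnel = [
    ("Create first project",       1000),
    ("Connect a data source",       620),
    ("Run first analysis",          580),
    ("Share results with the team", 210),
]

# Find the step with the largest drop-off relative to the previous step.
drops = [
    (funnel[i][0], 1 - funnel[i][1] / funnel[i - 1][1])
    for i in range(1, len(funnel))
]
worst_step, worst_drop = max(drops, key=lambda d: d[1])
print(f"Biggest drop-off: '{worst_step}' loses {worst_drop:.0%} of users")

# Illustrative policy: modest drops may only need a softer nudge (email,
# in-product prompt); large drops probably point at a deeper blocker.
action = "behavioural nudge" if worst_drop < 0.3 else "design/engineering investigation"
print(f"Suggested intervention: {action}")
```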

Behavioural Insights Framework + Interventions

The point here is to provide insight. Insight that is specific, actionable and timely. We can’t just assume customers are succeeding with our products and we have to be specific to know where and how to focus our efforts. If the first time we’ve heard customers aren’t using the product, or are having a bad experience in the process, is at renewal time then we’re too late!

Using a design-led approach to define and quantify “value” provides us with the leading behavioural indicators necessary to drive success and mitigate churn.

Summary

So, whether you use more of a design-led approach, or some alternative model to define and analyse value, I wanted to leave you with a few overarching, guiding principles:

  1. Avoid conflating a broad definition of usage with success
  2. Identify what “value” looks like through the lens of your customer
  3. Map the product experiences that constitute value (experience to adoption)
  4. Track the key leading behavioural indicators that signal stagnation/success
  5. Identify your pain points and deploy specific tactics to nudge/intervene

For further reading and much of the inspiration behind this article, I’d recommend the following resources:
