Data, Data Everywhere, But Not an Insight to Read: My Musings on CS Metrics.

Katie West
8 min read · Aug 25, 2023


Previously: I covered "Tools. Tools. OMG Tools. Let's get some tools.", where I shared my thoughts on where to invest and how to build out your CS tech stack early on.

TL;DR: We are a data engineering company, so of course there's a blog on metrics. Figure out what matters, and measure that. You should only have 2 to 3 numbers you're trying to optimize. Be specific and thoughtful, and then drive everything toward those numbers.

  • GDR/NDR: At the highest level, GDR (gross dollar retention) is what matters for us. That’s it. I don’t own expansions (those still sit with sales), so what I care about is making sure a customer is set up, getting value, and will renew with us because we are making their life easier and making them more successful. CS teams should be aligned to one or both of those metrics.
  • Onboarding: If you get this right, the rest of the contract term will be 10X easier to manage. Capitalize on excitement from signing the deal. It’s much harder to rip out your solution if you get it set up, and you’ll prevent a headache for the team later on. Invest and spend money getting this right. Go be on site. Spend the extra time. Extend the onboarding period. Do it for them (if you can). Just make sure the customer gets implemented and is using your product.
  • Usage: For us, our contracts are centered around volume of data, so usage matters. Measure the metric that is tied to your commercial model. If it’s hard to measure, revisit your pricing strategy because it’s clearly not resonating with customers and they’re not seeing value. This one is very solvable and is usually a strategy decision and less about the product.
  • Tickets: We aren’t a support team, but we use ticketing to help support our customers (it’s nuanced, but it matters). We look at tickets to measure efficiency and engagement. Figure out why and how you’re engaging with customers, and be selective about what to measure. For us, time to close a ticket didn’t actually matter because we have complex engineering issues that may be delayed because of a customer sprint cycle. I’m not going to penalize a TAM on something like that. Make sure your metrics are tied to the behavior you actually want to drive.
  • Product Data: Talk with your engineering or product team early on and make sure they’re able to store data or measure engagement metrics that you may want later on. Even if it’s not in a dashboard on day one, it’s much easier to pull the data later with a BI tool if it’s already accessible. Think about things like usage, total users, logins, activity logs, etc., and start collecting it now.

You know the drill; this is the good part:

As the team began to take shape in 2022, we added more CSMs and got better global coverage for our Technical Account Manager team. We could spend more time working one-on-one with customers, digging deeper into their business use cases, holding real QBRs, and engaging with other cross-functional stakeholders within the client team.

We needed to get more serious about measuring what mattered to us to help drive GDR. Here’s what we focused on:

Onboarding: Time to Live

Our business involves customers getting our SDK instrumented and sending data. Our commercial model is centered on the volume of data that a customer sends through our platform, so it’s vital that customers get live with us as soon as possible to start realizing value. We measure this as the percent of customers “live” within 90 days. We have a slightly different goal for Enterprise customers, but the same concept applies.

The challenging part of this is that customers come to us in all sorts of states. Some customers are advanced and can be set up with their first use case in a matter of days, some customers have a larger vision of a CDP but don’t know where to start, and some customers just upgraded because they hit our limit on our free tier.

It’s incredibly difficult to onboard customers in a systematic way because their end states vary so much, skill levels vary, and RudderStack is a platform with variability in how you can use it.

To that end, we came up with one uniform definition of “live”: sending any data from a production instance. It’s not perfect, but it was too difficult to track something like “realized first use case” because that’s not automated and varies so much.

We had some challenges with even measuring “live” status. First, our product doesn’t automatically recognize when you are sending from a production source; we have to either ask the customer or look at how they’ve labeled things internally. Second, just because they’re “live” doesn’t mean they’ve realized a business use case, as alluded to above.

In the future, we may change this definition to a certain volume of data, so they’re live if they’re sending >X amount of data. Intuitively, there seems to be a certain amount of data that’s “enough” that they seem adopted, but we’re still not really sure where that cutoff is.

Another option is a percent-of-contract threshold (sending 5%, 10%, or 20% of contracted volume), but that is predicated on our ability to estimate the correct amount of volume for a new customer. It’s not an easy problem. And again, these are just internal definitions we use to make sure a customer is active and well on their way to adoption.

Onboarding is the first metric that I would recommend all CS teams pay attention to.

  • How long does it take customers to get up and running?
  • How much time and resources are you spending to help them?
  • When do they first realize value or solve a pain point?
  • When is their usage “sufficient” to indicate they’re active users?

You want to capitalize on the excitement and momentum generated by the sales process, or you risk becoming another tool that sits on the back burner and never gets prioritized. That sets you up for a rescue mission later on ahead of a renewal, something everyone hates.

Onboarding is the most important thing you can do to set yourself up for retained and stable revenue. Even if you fail in QBRs or engagement later on, if you know they’re using your product well at the beginning, you have a higher chance of becoming embedded with a customer and having them build business use cases and realize value.

Sometimes they actually want you to be a “set it and forget it” tool, which works out great. Just make sure you nail the “Set it” part.

We were only able to start measuring this once we had Gainsight. We had a manual switch in there where the CSMs or TAMs could toggle a customer from “onboarding” to “live,” and then we automatically calculated the number of days between the contract close date we imported from Salesforce and the “live” timestamp (i.e., when they flipped the switch).
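To make the math concrete, here’s a minimal sketch of that calculation, assuming you can export one row per customer with the Salesforce close date and the Gainsight “live” timestamp (the column names here are made up for illustration, not our actual schema):

```python
import pandas as pd

# Hypothetical export: one row per customer with the Salesforce contract
# close date and the Gainsight "live" toggle timestamp (empty if still onboarding).
customers = pd.DataFrame({
    "customer": ["Acme", "Globex", "Initech"],
    "contract_close_date": pd.to_datetime(["2023-01-10", "2023-02-01", "2023-03-15"]),
    "live_date": pd.to_datetime(["2023-02-20", None, "2023-04-30"]),
})

# Days from contract close to "live" (NaN for customers not yet live).
customers["days_to_live"] = (
    customers["live_date"] - customers["contract_close_date"]
).dt.days

# The onboarding metric: percent of customers "live" within 90 days.
# In this simplified version, customers who aren't live yet count against the number.
pct_live_90 = (customers["days_to_live"] <= 90).mean() * 100
print(customers)
print(f"Live within 90 days: {pct_live_90:.0f}%")
```

The only real subtlety is deciding how to treat customers who are still onboarding; in this sketch they simply count against the 90-day number.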

Again, this would have been impossible to track before, so whenever you have a metric, make sure you have an adequate tool in place to monitor it.

Usage

As I mentioned, our commercial model is based on data volume. We have annual contracts that have monthly or annual data volume allotments.

Given that, it was pretty straightforward for us to measure and look at usage versus contracted amounts — or event volume utilization. This was our primary metric to understand who was at risk of potential churn, or at risk of contracting.

One of the challenges for a new startup is that it’s very difficult for customers to estimate their volume needs at the beginning, especially if they’ve never used a competing tool and aren’t aware of everything they want to set up.

In the early days, I would go into our internal Snowflake dashboard (pulling data directly from our product) on a monthly basis and go one by one through all of my customers, looking at their current volume against the contracted amount in Salesforce.

It was an extremely tedious process where I had Salesforce open on one monitor and Snowflake on the other, but that level of oversight was needed to make sure customers were tracking.
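The comparison itself is simple once both numbers live in one place. Here’s a rough sketch of the rollup I was doing by hand, assuming you can export current monthly event volume from the warehouse and contracted volume from Salesforce (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical exports: current monthly event volume per customer (from the
# product warehouse) and contracted monthly volume (from Salesforce).
usage = pd.read_csv("monthly_event_volume.csv")    # columns: customer, events_this_month
contracts = pd.read_csv("contracted_volume.csv")   # columns: customer, contracted_monthly_events

df = usage.merge(contracts, on="customer", how="outer")

# Event volume utilization: actual usage as a share of the contracted amount.
df["utilization"] = df["events_this_month"] / df["contracted_monthly_events"]

# Flag accounts running well under contract as potential churn or contraction risks.
at_risk = df[df["utilization"] < 0.5].sort_values("utilization")
print(at_risk[["customer", "utilization"]])
```

Even a crude cutoff like “under 50% of contract” is enough to build a monthly review list; the hard part is getting the two systems talking, not the arithmetic.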

In thinking about what metrics to track early on, you need one metric that is clearly tied to your pricing model. This will give you not only insight into potential churn or contraction risks, but also a much clearer picture of how well your pricing and packaging strategy is working.

Tickets

Ticket data was the third area we measured. Our goal was to be responsive to customers and also to understand how we were operating as a team to support them.

We changed tools a few times to get this right, but initially we focused on the number of tickets per Technical Account Manager. There were a lot of nuances in terms of what that meant, so it was my job as the team lead to interpret those numbers correctly.

Some people initially had a different load balance, with highly needy customers versus low-touch customers. We also started digging into how long tickets were open and how many messages went back and forth, and realized that a lot of what we were measuring wasn’t indicative of the quality of support we were giving.

The real insight came when we finally admitted to ourselves that we were measuring the team like a support team, when they were acting like a solution engineering team.

The metrics didn’t make any sense for us because we weren’t cranking through easy-to-answer tickets.

We had complex, non-repetitive questions coming from customers, which often involved us directly debugging code, writing transformations for customers, or working with our engineering team on a resolution.

This was also very hard to communicate internally because there was a perception that this team was “support,” and that surely we could use some sort of knowledge repository or chatbot to help mitigate the workload. But in reality, almost 100% of the questions we got from customers were unique.

I needed a way to measure the team’s productivity. We settled on the number of closed tickets and the number of tickets escalated to engineering. The goal is to see how many tickets each TAM is getting, and how many they’re able to answer themselves. We review the numbers with a different lens when evaluating performance.

These are now used to identify any areas where someone may be getting bogged down or struggling with a complex issue or customer. It also tells me a lot about someone’s technical aptitude if their numbers are significantly above or below their peers’. It’s a way for us to flag when we need to bring in reinforcements, rather than a purely performance-based, quota-style metric to be met.
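The rollup behind those two numbers is deliberately simple. Here’s a sketch of what it might look like, assuming a ticket export with an assignee, a status, and an escalated-to-engineering flag (illustrative field names, not our actual schema):

```python
import pandas as pd

# Hypothetical ticket export: one row per ticket, with the assigned TAM,
# a status, and whether it was escalated to engineering (True/False).
tickets = pd.read_csv("tickets.csv")  # columns: ticket_id, tam, status, escalated_to_eng

per_tam = tickets.groupby("tam").agg(
    total_tickets=("ticket_id", "count"),
    closed_tickets=("status", lambda s: (s == "closed").sum()),
    escalated=("escalated_to_eng", "sum"),
)

# Share of tickets a TAM resolves without pulling in engineering.
per_tam["self_serve_rate"] = 1 - per_tam["escalated"] / per_tam["total_tickets"]
print(per_tam.sort_values("self_serve_rate"))
```

A low self-serve rate isn’t automatically a performance problem; it’s the prompt for a conversation about where that TAM needs backup.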

Summary

The big insights I had on metrics are to make sure that you don’t measure everything, and that what you do measure is really driving the behavior you want to see from people. We pivoted on our view of tickets multiple times over the course of a year before we settled on a view that people were comfortable with.

We also made sure that our metrics were visible and easily accessible to everyone on the team. We’ve since developed a much more robust way of understanding customer performance, but in the early days it was important to keep it simple and tie it to behaviors.


Katie West

Customer Success Lead. I write about how to build a CS team from scratch and how to actually use data to manage your growth and team.