Where the Magic Happens: A Tale of the Customer360 Table

Katie West
13 min read · Aug 26, 2023


Previously: I covered how we managed renewals and built out a predictive, data-driven process in “No crystal ball here! Forecast renewals with data.”

TLDR: JK there isn’t a summary this time. Ha! You think I’m going to just give you the goods without making you earn it? No way. You’re on my blog, so we’re doing it my way. Start appreciating detail. I mean seriously, this is already a summary of what we did. I already spent a long time figuring this out, doing the actual work, and then writing about it so you can learn. I’m not going to do even more work so you can tell everyone you’re SO busy that you don’t have time to read things and only scan bullets so you can tweet about it. Do the work. Get comfy, and get reading.

Now, for the meat and potatoes:

The beauty of working for a data engineering company is that I have incredible tools and assets at my disposal. I also have actual engineers on my team who can build things, and they enjoy it!

RudderStack, as a product, is a warehouse-native Customer Data Platform (CDP). What that means is we can take customer behavior from your website (what pages are viewed, buttons clicked, etc.), send that information directly to your warehouse, and send it on to downstream destinations like Facebook, Mixpanel, or HubSpot.

We also have ETL and reverse ETL (rETL) products — basically pipelines that can take data from cloud-based tools like Zendesk or Salesforce and pipe it directly into your warehouse, and do the reverse, moving data out of your warehouse into any of your destinations like Facebook or Amplitude. This means we can get all of your data on a customer into a single location and keep it updated.

We recently launched a new product called Profiles — an amazingly powerful tool that helps you create an identity table and a Customer360 table within your warehouse.

What the heck does that mean? Well, you typically have customer data stored all over the place in multiple systems, and it’s almost impossible to string all of that data together into one centralized view.

Ok that’s cool, but what problem does this solve?

In my last post, I described downloading a sheet from Salesforce, having my admin dashboard open, and also looking at a table in Snowflake to be able to assess renewals.

Profiles solves this EXACT problem — it gets all this data from all these different places and lets you get a single view of your customer.

Think about it. RudderStack is a B2B business and we have multiple touchpoints to string together:

  • We have customers visiting our website, checking out new articles and docs.
  • Many times, it’s multiple people working for the same company.
  • Those same customers are in Slack, sending us tickets.
  • We also have information on their paid account stored in our Salesforce instance.
  • We also have health and sentiment data in Gainsight.
  • We also have all of our calls with customers in Gong.
  • We have product usage data all within our product database.

For me to understand what’s happening with an account, I have to:

  1. Log into Salesforce and look at usage and their contract
  2. Log into Foqal and look at tickets
  3. Log into the product admin dashboard and understand their technical setup
  4. Log into Snowflake/Tableau and review usage

I can’t even see things like how many users are on an account, when they last logged in, or how many Transformations they have set up — and the list goes on and on.

There is no quick way to see who is a low-adoption customer. I cannot quickly zero in on accounts that are at risk of churn. It all relies on me reviewing data and processing it myself.

That sounds awful, and I totally relate, but what’s the solution?

With Profiles, I can now combine all of that data into one single, unified table within Snowflake.

I know, you’re thinking “wow Katie, not a table! Who cares? Can’t I just make that myself in a Google Sheet? You said it yourself that you can do most stuff in a sheet.” No, you can’t. If you could, I wouldn’t have created this entire blog.

You would have to export everything, figure out which company all the individual users roll up to, sum the relevant data, figure out which values should override which, and handle lots of other really complicated stuff that would straight up not be possible without someone doing a bunch of SQL work. It would be insanity.
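To give you a taste of what that SQL work means, here’s a minimal sketch of just one of the roll-up queries you’d be hand-writing and maintaining (every table and column name here is made up for illustration):

```sql
-- Illustrative only: the kind of hand-written roll-up a C360 tool replaces.
-- All table and column names are hypothetical.
SELECT
    a.account_id,
    a.account_name,
    MAX(s.arr)                  AS arr,              -- from Salesforce
    COUNT(DISTINCT u.user_id)   AS workspace_users,  -- from the product DB
    COUNT(DISTINCT t.ticket_id) AS tickets_90d,      -- from the ticketing tool
    MAX(g.health_score)         AS gainsight_health  -- from Gainsight
FROM accounts a
LEFT JOIN salesforce_contracts s ON s.account_id = a.account_id
LEFT JOIN product_users u ON u.account_id = a.account_id
LEFT JOIN tickets t ON t.account_id = a.account_id
    AND t.created_at >= DATEADD(day, -90, CURRENT_DATE)
LEFT JOIN gainsight_scores g ON g.account_id = a.account_id
GROUP BY a.account_id, a.account_name;
```

And that sketch assumes every source already shares a clean account_id. In reality, stitching individual users and anonymous website visitors to the right company is the hard part — which is exactly the identity resolution work Profiles does for you.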

We then use a tool called Superblocks (a simple BI tool — something like Tableau would also work) that sits on top of Snowflake to visualize the data, both in a table format with nice color coding and in some really easy dashboards that I use to monitor our customers.

A few caveats to consider:

Defining the Features / Columns: I won’t lie, this took several iterations of working with Dave, our amazing TAM Manager, to get to a finalized set of columns. I didn’t know what data was available, and Dave wasn’t really sure what I wanted to use to manage the team. We pulled a first version based on what we had been using in the manual account review process, and then I’d just say “hey, it’d be sick if we also had this” and he’d tell me whether that was possible.

But that’s also what is amazing about Profiles. Normally, if you did this in SQL, it would be a complete pain in the ass to update if you accidentally forgot a certain column. Now, Dave was able to quickly go in and pull another feature into the table in a matter of hours. It’s way easier, but expect some iterations as you figure out what data you really want to use.

Data Cleanliness: I’ll also caveat this by saying our data is pretty clean because our Marketing and Rev Ops teams have a similar setup for managing data — that’s something for you to think about as you set out on this journey. You may have to go back and clean things up, but now that you can see what our end state looks like, you can try to be as thoughtful as possible when initially setting things up.

So…. What does this look like?

I’m hiding a lot of account-specific information, but this is what our table looks like:

You can see that this rolled-up view of a customer starts to become a lot more compelling and insightful. I get a single view of how our customer is using the product, what our commercials are, and what it takes to support that customer. We didn’t initially have the Product Adoption metric — that was developed in the second iteration of the dashboard — but it has been the game changer.

Holy crap that is awesome, so how did you use it?

When I first got access to this data, I immediately started looking at things like number of tickets by ARR, number of Gong calls and minutes per ARR, and usage metrics.

I quickly realized we needed to operationalize and act on this data. We started looking more closely at days to go-live (one of our key CS metrics) and realized we had really poor adherence to marking customers as Live in Gainsight.

I started sending out weekly reports on late-onboarding customers and asking CSMs and TAMs why they weren’t live. I also published the report in our internal general Slack channel for everyone to see. Very quickly, those “live” dates started to appear in Gainsight (it’s calculated based on the timestamp for when they’re toggled to “live” minus the contract signature date from Salesforce). Amazing how shining a little light on things changes behavior!
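If you want to reproduce that metric, it’s conceptually just a date difference across two sources. A hedged sketch, with hypothetical table and column names:

```sql
-- Hypothetical sketch: days to go-live = the Gainsight "live" toggle
-- timestamp minus the Salesforce contract signature date.
SELECT
    c.account_id,
    DATEDIFF(day, c.contract_signed_date, g.marked_live_at) AS days_to_go_live
FROM salesforce_contracts c
JOIN gainsight_accounts g
  ON g.account_id = c.account_id
WHERE g.marked_live_at IS NOT NULL;
```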

We were also pulling in customer health from Gainsight, but we knew that metric was a poor indicator of how a customer was actually doing. Originally, we were calculating it with a 40% weighting on the contract utilization percentage and 60% on the customer sentiment, which is manually entered by the CSMs and TAMs in Gainsight. Again, we had really poor adherence to this, and it quickly defaulted to a large number of yellow customers.

To get around this, we held a small in-person working session with a few folks on the team to come up with our view of what the Product Adoption and Sentiment scores SHOULD be.

Birth of the Product Adoption Score

For product adoption, we brainstormed a variety of metrics that would indicate someone is getting use out of RudderStack. The score includes product usage, but it also reflects a customer’s overall engagement with us and the breadth of features and products they use.

We wanted to know:

1) Are customers getting what they paid for?

2) Are customers using things that tend to be sticky within our platform?

3) Are customers actively engaged with us?

We came up with the following initial list (image below), and then assigned each metric a value from 0 to 5, using different ranges for each metric. Our key attributes for the score include:

  • Contract Usage
  • Pipelines Enabled (yes/no)
  • Number of Tickets
  • Number of Sources
  • Profiles Enabled (yes/no)
  • Number of Transformations
  • Destination Categories/Types (types of categories like CRM, Marketing, Product Analytics, etc.)
  • Number of Calls
  • Unique Logins
  • Number of Users in the Workspace
  • Number of Destinations
  • Tracking Plan Enabled (yes/no)

For example, if someone is below 40% contract utilization, that tends to be pretty risky for churn or contraction, so we give them 0 points. If they’re between 40–60%, they get 2 points; 60–80% is 4 points; and 80–100% is 5 points. For the binary attributes, you either get a 0 or a 5.

It’s a bit arbitrary to start, but it’s based on our 2 years of experience speaking with customers. We also built the score so that we can easily change these thresholds.
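To make the mechanics concrete, here’s a hedged sketch of the scoring logic for a few attributes, including the weighted roll-up described in the next paragraph. The thresholds match the example above; the table name, column names, and weights are illustrative, not our production values:

```sql
-- Illustrative sketch: score each attribute 0-5, then take a weighted sum.
-- Thresholds mirror the contract utilization example; weights are placeholders.
WITH points AS (
    SELECT
        account_id,
        CASE
            WHEN contract_utilization < 0.40 THEN 0  -- under 40%: churn risk
            WHEN contract_utilization < 0.60 THEN 2
            WHEN contract_utilization < 0.80 THEN 4
            ELSE 5
        END AS utilization_points,
        CASE WHEN retl_enabled THEN 5 ELSE 0 END AS retl_points,         -- binary
        CASE WHEN profiles_enabled THEN 5 ELSE 0 END AS profiles_points  -- binary
        -- ...one expression per attribute in the list above...
    FROM customer360
)
SELECT
    account_id,
    0.25 * utilization_points   -- example weight: 25%
  + 0.10 * retl_points          -- example weight: 10%
  + 0.10 * profiles_points      -- example weight: 10%
  -- + ...remaining weighted attributes; the weights sum to 100%...
    AS product_adoption_score
FROM points;
```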

After that, we assigned a weighting to each attribute so that the weights add up to 100% of the score. Again, this is subjective, but it’s based both on what we’ve experienced in customer and renewal discussions and on what behavior or usage we want to drive. The below illustrates what a score can look like for a customer:

Per the prior images, I’m able to look at this detailed score holistically for an account, but I’m also able to quickly look at the entire portfolio of customers and zero in on low-scoring accounts. We can quickly diagnose why a score is low, decide what to focus on to build more engagement with a customer, and look for positive signals that a customer may be retained.

In the case above, I can quickly see that:

  • Usage is low, BUT:
  • They only kicked off with us 3 months ago (per the previous table).
  • They are a large enterprise and are working with several cross-functional teams to get through initial setup.
  • They’re only working with our primary product, Event Stream, so ETL and rETL are future expansion discussions we can have.
  • We’ve had several calls in the last 30 days (4 total).
  • The total number of unique logins in the last 30 days (7 unique users) is solid.
  • They have 27 total users attributed to their account.

Overall, they’re pretty solid; I’m much less concerned about churn, and I know exactly what to discuss when they finalize onboarding.

This quickly became a first-stop view for me and the CSMs and TAMs when preparing for a customer call, a QBR, or a renewal strategy. We can easily get a sense of what a customer is doing, where we can push them to expand, and spot the flags that tell us when to worry.

For example, if a customer has really high utilization but hasn’t logged in for over 30 days, I’m not necessarily worried, because some people look at us as an infrastructure tool that they want to “set and forget.” We’ll check in with them, but generally those accounts turn out to be fine.
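Rules of thumb like that are easy to encode once the data lives in one table. A hypothetical sketch of such a triage flag (the column names and thresholds are illustrative):

```sql
-- Hypothetical triage flag: high utilization with no recent logins usually
-- means "set and forget"; low utilization with no logins is the worrying case.
SELECT
    account_name,
    CASE
        WHEN contract_utilization >= 0.80 AND days_since_last_login > 30
            THEN 'set-and-forget: light check-in'
        WHEN contract_utilization < 0.40 AND days_since_last_login > 30
            THEN 'at risk: low usage and disengaged'
        ELSE 'ok'
    END AS triage_flag
FROM customer360;
```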

Wow this is really cool. What else did you do to use the data?

Once we realized the power of what we had created, we pushed even further in operationalizing the data. I wanted this to be the foundation for how we ran the Customer Success function.

I set up a weekly meeting with our TAM lead and the CSM who had covered for me while I was on maternity leave to look at how these metrics line up against our KPIs. We have goals around 1) time to go-live, 2) customer health and 3) tickets. All of those metrics inherently roll up to GDR, which we still track separately with our Revenue Operations team.

It took several weeks for us to get the data exactly right, but by doing this weekly we surfaced a lot of questions and figured out what data we really needed. I could ask business questions, and our TAM lead would go back and fine-tune how we were visualizing the data. New questions and examples popped up throughout the week that also changed how we wanted to look at things. We finally settled on 3 distinct views that help run the team.

Onboarding and Go-Live

Sample of our onboarding customers over a specific period of time.

Looking at “Days to Go Live”, I get a quick view of the number of customers that are both in and out of the onboarding period. I can track our metric for a particular period of time, and when I look at “Details”, I can immediately see the customers who are outside of our onboarding timeline, their ARR, and the account team.

In practice, I look at this and ping the CSM/TAM to ask what’s going on with the account. I have also started looking at any customer that is over the 45-day mark to make sure they are making progress and engaged. We’ve had a few instances where customers ghosted us and never kicked off, and I can highlight that with our Sales team.
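The query behind that view is simple once the Customer360 table exists. A hedged sketch, with hypothetical names:

```sql
-- Hypothetical sketch: onboarding accounts past the 45-day mark, biggest
-- ARR first, with the account team so I know who to ping.
SELECT
    account_name,
    arr,
    csm_name,
    tam_name,
    DATEDIFF(day, contract_signed_date, CURRENT_DATE) AS days_onboarding
FROM customer360
WHERE marked_live_at IS NULL
  AND DATEDIFF(day, contract_signed_date, CURRENT_DATE) > 45
ORDER BY arr DESC;
```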

Onboarding is a critical part of the customer journey and I’m now able to immediately spot issues before they spiral out of control.

Product Adoption and Usage

Additionally, we’ve added a rolled-up view of our product adoption metrics. Eventually we’ll build in a historical view, but we’re still testing the weightings to see if anything needs to be adjusted.

One thing to note here is that RudderStack has gated products and features, and those make up significant portions of the score. For example, rETL as a product isn’t available to anyone in our Starter tier, so those customers will always get a 0 for that attribute — and it’s a heavily weighted attribute for product adoption.

That means our Starter accounts will systematically score lower. I opted to keep this for now, because having only one pipeline does create more limited product stickiness, and I believe that’s important to note. Again, this scoring matrix may change as we learn more about what indicates adoption and stickiness.

Tickets

Finally, we also have a weekly view of our TAM ticket performance. I can look at the team collectively, but can also immediately pull up individual TAM metrics.

Using this view, I can make sure that all of the TAMs have a relatively equal workload in terms of number of tickets. I can also spot anyone with a spike in net new tickets created who may need additional support that week.

We sometimes have weeks where customers are making big sprints or are onboarding, so we can see big swings in incoming tickets for a single person. I can also track tickets open longer than 30 days — those tend to be more complicated or to have been escalated to our engineering team.

Our TAM Manager, Dave, will follow up with anyone who has a high number of tickets, or no week-over-week change in them, to understand what the issue is. We’re also tracking engagement metrics — we want to see active conversation with customers on tickets, so we look at total messages per day that a ticket has been open. We don’t want open tickets sitting and getting stale, so we encourage a note back to the customer giving them a brief update that something is still being looked at. Slack is a channel that creates high expectations around frequency of communication, and we need to be able to adapt to that.
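Both of those checks fall out of a couple of aggregates over the ticket data. A hypothetical sketch (the table name and schema are illustrative):

```sql
-- Hypothetical sketch of the per-TAM ticket view: open tickets older than
-- 30 days, plus average messages per day each ticket has been open.
SELECT
    tam_name,
    COUNT_IF(DATEDIFF(day, created_at, CURRENT_DATE) > 30) AS open_over_30d,
    AVG(message_count / NULLIF(DATEDIFF(day, created_at, CURRENT_DATE), 0))
        AS avg_messages_per_open_day
FROM tickets
WHERE status = 'open'
GROUP BY tam_name;
```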

Summary

In summary, having a Customer360 table was a transformative step for our Customer Success team.

Even if you’re not at that step (and it took us 2 years to get here!), it can be helpful to start with the end in mind. When selecting tools, think about who will have access to the tool and how many connections or integrations it has.

Work with your product and engineering teams to make sure you’re able to track product usage data. Also, be willing to change direction. We are not afraid to put a stake in the ground and just start using something, like we did with the product adoption score.

I pushed my team and intentionally opted to say “let’s just start with this and we’ll figure out the remainder.” My view is that if you feel like you’ve covered 50% of the need, you’re good. You won’t know it all until you start using your data and asking questions.

You also can’t predict HOW you’ll want to use the data. We didn’t initially have the detailed view of the onboarding metrics, but in the first meeting, the first thing I asked was which 2 customers were over the 90-day threshold and how big they were. We added that view the next day.

In short, be creative with what you want to measure, and make sure it’s attached to the behavior you really want to drive. I explained in a previous post that the ticketing metrics weren’t initially telling me the whole story, so I ended up using the data in a very different way than originally intended.

Data is awesome and the tables are cool, but they’re only useful if you’re using them to make decisions on how to run the team.

Next Up: I’ll talk about how we build automated processes integrated with Slack using this C360 table to help us scale.


Katie West

Customer Success Lead. I write about how to build a CS team from scratch and how to actually use data to manage your growth and team.