CS Best Practices: Deal Scoring

Katie West
6 min read · Nov 20, 2023


Previously: I shared details on Renewal Forecasting with a Calculated GDR based on Product Adoption Score.

TL;DR: Start rating how good a fit each deal is to be successful with your products.

· You’ll define the critical attributes within an account that are necessary for a customer to be successful with you.

· You can quantify the risk and impact when those attributes are missing.

· Sales can begin addressing key points in the pre-sales process.

· CS can develop mitigation plans to address any red flags early in the contract.

Taking a platform solution to market is a lot more difficult than a point solution. At RudderStack, we had an initial hypothesis on our customers’ pain point that guided our product development, and that proved to be correct. As we grow, we’re constantly evaluating the question, “Who is our ICP?” It’s a complicated question because we often have an executive/buyer who is using budget to purchase our product, a champion who is advocating for the product and understands the value, a user who is in our product implementing it on a day-to-day basis, and also business stakeholders who have business requirements and dependencies on the data we move.

We’ve gone through several iterations of trying to understand what makes a customer “successful” with us. The first challenge is to define what “successful” looks like. If you ask different people at RudderStack, you’ll get different answers. I imagine our Head of Sales and Revenue would likely talk about account value and expansion, while Product may talk about adoption and usage of features.

In CS, I think our team sits at a unique intersection: not only do I care about renewing a client and maintaining contract value, I’m also responsible for making sure customers actually use the product and get the business value they originally bought us for.

I’m also responsible for showing them new ways to use RudderStack to unlock additional value they may not have considered before.

Given all of that, I wanted to get a better understanding of which deals were a good fit for us. Was it a good sale, and was the customer set up to really get value out of RudderStack? After the last few years of experience, we had a pretty good handle on what constituted red and green flags when we kicked off with a customer, but we had yet to correlate that with the customer’s progress.

I had been using the Product Adoption Score as my primary metric for which customers were successful — you can read more about it in my previous posts. It includes a metric for value relative to the commercial aspects of the contract, as well as product usage data and engagement metrics to see how actively customers are using RudderStack to potentially solve new problems.
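For anyone who hasn’t read those posts, here’s a rough sketch of how a composite score like that might be combined — the component names and weights below are illustrative placeholders, not our actual formula:

```python
# Hypothetical composite Product Adoption Score.
# Component names and weights are placeholders, not the real internal formula.
WEIGHTS = {
    "commercial_value": 0.4,  # value delivered relative to contract size
    "product_usage": 0.4,     # breadth and depth of feature usage
    "engagement": 0.2,        # how actively the team engages with us
}

def product_adoption_score(components: dict[str, float]) -> float:
    """Combine normalized (0-100) component scores into one weighted score."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

print(product_adoption_score(
    {"commercial_value": 80, "product_usage": 65, "engagement": 50}
))  # -> 68.0
```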

I developed a “Deal Score” so we could start to test whether the attributes I suspected were green/red flags actually held up — did a lower Deal Score predict a lower Product Adoption Score later on?

My first pass at a deal score was a simple rating of 1 to 4, and I asked the CSMs/TAMs to evaluate and rate a deal within 30 days of kickoff, since sometimes we can’t assess a customer’s capabilities until we’ve spoken with them a few times. All of the deal scoring is captured in Gainsight and then synced back into Salesforce.

My hope is that a full review of deal scores can help us identify the attributes of a best-fit customer, but also highlight things we need to diligence more thoroughly in the sales process, or find ways to solve those issues with our customers early on so they have the best chance of being successful with us.

The first iteration of deal scores had the following summary definitions (these are more detailed internally, but not shared here):

1. Low Fit: These are deals where customers may have challenges with resource availability, gaps in their technical infrastructure, an unclear pain point, or their ability to execute a RudderStack implementation.

2. Fair Fit: These customers may have limited use cases, or more limited ability to deploy RudderStack broadly.

3. Good Fit: These customers have a good ability to implement RudderStack and will likely see near-term value. They may have narrower use cases.

4. Great Fit: These customers have the ability and resources to broadly implement RudderStack as a platform, and are driving toward business-wide value realization.
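For illustration, here’s roughly how a deal-score assessment could be captured as a record — the field names are my own placeholders based on the attributes above, not our actual Gainsight/Salesforce schema:

```python
from dataclasses import dataclass

# Illustrative only: field names are assumptions, not the real schema.
@dataclass
class DealScoreAssessment:
    account_id: str
    deal_score: int           # 1 = Low Fit, 2 = Fair Fit, 3 = Good Fit, 4 = Great Fit
    has_resources: bool       # team bandwidth to implement
    infra_ready: bool         # technical infrastructure in place
    clear_pain_point: bool    # well-defined problem RudderStack solves
    notes: str = ""

example = DealScoreAssessment(
    account_id="acme-co",
    deal_score=3,
    has_resources=True,
    infra_ready=True,
    clear_pain_point=True,
    notes="Narrow initial use case; strong engineering team.",
)
```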

Our team has gone back into Gainsight and retroactively rated the deals across our existing customer base. We’re planning to rate any new deals as they come in.

What to Do with Deal Score

My first goal is to look at Product Adoption Score across Deal Score, by service tier. I want to know if there’s a correlation between product adoption and deal score — do I see higher adoption scores with deals rated as a 4? Essentially, I want to validate that my attributes for a great-fit customer are correct.
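As a rough illustration of that analysis, here’s a minimal pandas sketch on made-up data — the columns and numbers are hypothetical, but the grouping and rank-correlation check mirror the question above:

```python
import pandas as pd
from scipy.stats import spearmanr

# Made-up example data; accounts, tiers, and scores are hypothetical.
deals = pd.DataFrame({
    "account": ["A", "B", "C", "D", "E", "F"],
    "service_tier": ["Enterprise", "Growth", "Growth", "Enterprise", "Growth", "Enterprise"],
    "deal_score": [4, 1, 3, 2, 4, 1],
    "adoption_score": [82, 35, 70, 48, 90, 40],
})

# Average adoption score per deal-score cohort, split by service tier.
cohorts = (
    deals.groupby(["service_tier", "deal_score"])["adoption_score"]
    .mean()
    .reset_index()
)
print(cohorts)

# Rank correlation: does a higher Deal Score line up with higher adoption?
rho, p = spearmanr(deals["deal_score"], deals["adoption_score"])
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```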

When looking at these cohorts, I do see the expected trend that customers with higher deal scores have higher product adoption scores — this indicates that the attributes for the deal score criteria are likely pointing in the right direction.

I also wanted to look at how much ARR was in each category. To me, this was a signal of risk. I’m already looking at these customers via the product adoption score and addressing them with my team, but it’s helpful to quantify the risk for the rest of our revenue org. This gives tangible feedback to our pre-sales team on what to ask about in sales conversations and how to conduct our technical diligence differently.
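Quantifying that risk can be as simple as summing ARR per Deal Score bucket — again a hypothetical sketch, assuming a deals table with an arr column:

```python
import pandas as pd

# Hypothetical deals table; ARR figures are made up.
deals = pd.DataFrame({
    "deal_score": [4, 1, 3, 2, 4, 1],
    "arr": [120_000, 30_000, 60_000, 45_000, 200_000, 25_000],
})

arr_by_score = deals.groupby("deal_score")["arr"].sum()
at_risk = arr_by_score.loc[[1, 2]].sum()  # ARR sitting in the 1-2 bucket
print(arr_by_score)
print(f"ARR in low/fair-fit deals: ${at_risk:,.0f} "
      f"({at_risk / deals['arr'].sum():.0%} of total)")
```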

Finally, I’m going to do a churn analysis to see whether our churned customers correlate with lower deal scores. With this, we can dig into individual deals to understand why they churned and see if there are preventative measures we could have taken to change the situation. This will also be great feedback for sales on key points to include in the pre-sales process.
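The churn check follows the same pattern — a sketch with a made-up churned flag per account:

```python
import pandas as pd

# Hypothetical data: churned is 1 if the account did not renew.
deals = pd.DataFrame({
    "deal_score": [1, 1, 2, 2, 3, 3, 4, 4],
    "churned":    [1, 1, 1, 0, 0, 0, 0, 0],
})

# Churn rate per Deal Score cohort; a downward trend from 1 to 4 would
# suggest lower-fit deals churn more often.
churn_rate = deals.groupby("deal_score")["churned"].mean()
print(churn_rate)
```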

Tactical Mitigation Steps

Next, after seeing how much revenue sat in the 1–2 bucket, I wanted to understand what we could do as a CS team to help address those issues. With the product adoption score, we’re taking a very product-centric view of how to help customers — we’re developing custom recommendations on events to instrument, highlighting additional properties needed for better analysis in downstream tools, and unlocking new pipelines to drive them up the data maturity curve for the full tech stack.

For lower-scoring deals, though, those customers are facing different problems. One major factor I can start to influence is bandwidth. If customers are struggling with bandwidth or resources to implement RudderStack, I’m starting to work more closely with our partnership team to introduce an SI partner. We’re also exploring options to build a network of individual contractors that could be more helpful for our cost-constrained customers who still need help.

Additionally, some customers have more limited knowledge of the CDP space. And this isn’t a bad thing! You can’t expect everyone to know everything and immediately understand how this all fits together. It took me two years to REALLY understand what we’re doing, and I do it every day. All of our customers understand the value proposition of consistent, real-time data flowing into their various cloud destinations and their warehouse, but they may not be experts in the nitty-gritty of data architecture design and execution. This is where we can develop content and education materials to help — we’ve created Loom videos and content, along with what our Marketing team produces, to help customers. We’re also more actively offering CSM support and Solution Architect time to customers who need it.

The reality is that your first customers are early adopters, and they will likely be more adept with your product. As you grow and scale, you’ll naturally have more customers who have less institutional knowledge and will require more educational support to get up to speed.

RudderStack is also a warehouse-native CDP solution, and some of our customers don’t have a warehouse. In these instances, we immediately help our customers understand the value of the warehouse and even guide them through the procurement process, thinking about which warehouse will be best for their needs. We also focus on direct Event Stream connections to make sure customers realize immediate value by sending data to cloud destinations.

Summary

Deal Score is a pretty new concept, but it’ll create a great signal for the Customer Success team on how to identify riskier customers earlier. It will also quantify risks for our Sales team to identify and address prior to contract close. It will help us build greater conviction on who the right customer is for us, beyond the typical things like company size and industry vertical. We can start to understand and quantify more operational and educational factors that impact our ability to successfully partner with a customer.

Next Up: I’m going to review tactical approaches we take to Improve Product Adoption on Low Usage Accounts


Katie West

Customer Success Lead. I write about how to build a CS team from scratch and how to actually use data to manage your growth and team.