CS Best Practices: Calculating GDR with a Rolling Forecast

Katie West
8 min read · Sep 15, 2023


Previously: In Automation and Scaling: Putting the C360 to Work, I reviewed how we offloaded highly manual work from our team and reinforced customer education, engagement, and product awareness.

TLDR: Sales has established standards and best practices, which don’t exist in CS. I set out to define a new, analogous approach for CS.

Calculated GDR: You can create a product adoption and engagement score per customer. Assign a renewal probability to each score range that reflects the likelihood a customer will renew, then apply it to all open opportunities. You can do this in Excel, but it’s cooler if you put it back into Salesforce.

You get the risk-adjusted ARR per customer, and you can also calculate the projected GDR per quarter on a rolling basis. This only reflects product usage and not sentiment, so our risk-adjusted GDR generally comes out lower than our historical performance. But we believe this is solvable, and our accuracy will improve over time.

Renewal Opportunity Stages: Have calculated renewal ARR as part of the “Opportunity Identified” stage in Salesforce. Separate out a stage where a CSM has reviewed a deal and can override the prediction with their own churn/contraction/flat/expansion estimate based on the score as well as sentiment and customer strategy.

Have a defined “Customer Engaged” stage so you can clearly understand who has been spoken to versus what is still just a CSM assumption, then break out “Contract Negotiations” and “Contract Closed” as normal.

Here’s the Long Form Story for the Readers:

In terms of best practices and operational excellence, a lot of time and thinking has been devoted to Sales. There’s clear thinking outlined on AE and pipeline coverage ratios, tools and practices like MEDDPICC to standardize how to discover pain points and value, and structure on sales stages to estimate potential revenue.

I researched and spoke to a lot of people in the CS world and came up short when searching for analogous standards for CS — so I set out to build those myself.

Given the data I had available in our C360 table, specifically our product adoption score, I first wanted to understand what my GDR was on a rolling basis. We were already targeting upcoming renewals 6 months out, looking at customers with low adoption and intervening with strategic recommendations.

Calculated Rolling GDR

As a recap, our Product Adoption Score is a calculated value from 0–5 that looks at product usage numbers and engagement metrics and rolls that into a total value. We were using this as a proxy to assess the risk of churn for each account.

We focused on values that proved to be the stickiest and drive the highest value for our customers, and included product usage, feature usage, and engagement metrics. The values we looked at included:

Usage and Product Adoption

  • Contract Utilization: This calculated the percent of events used in the last month over the contracted allocated amount. We awarded points for this based on different bands of usage:
      • Below 40%: 0 points
      • 40–60%: 2 points
      • 60–80%: 4 points
      • 80%+: 5 points
  • Pipelines Enabled: This awarded a binary 0 or 5 points per product if a customer had enabled any of our three pipelines. This did not account for the volume of data per pipeline, only if it was enabled or not.
  • Number of Sources: This looked at the number of distinct sources that a customer had enabled. Sources included web, mobile, server, and various cloud sources. Customers were awarded 1 point per source with a maximum of 5.
  • Number of Destinations: This looked at the number of distinct destinations that a customer had enabled. Destinations included any cloud or warehouse destination. Customers were awarded 0.5 points per destination with a maximum of 5.
  • Destination Categories: This looked at the breadth of destination types a customer had enabled. We categorized all of our destinations into things like CRM, Marketing, Messaging, Advertising, Product Analytics, etc. Customers were awarded 1 point per category for a single destination in that grouping, with a maximum of 5.
  • Profiles Connected: This awarded a binary 0 or 5 points if a customer had our Profiles product enabled. This did not account for data volume or usage.
  • Tracking Plan Connected: This awarded a binary 0 or 5 points if a customer had a Tracking Plan feature enabled.
  • Number of Transformations: This awarded 1 point per transformation enabled, with a maximum of 5 points.
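Translated into code, a few of the usage attributes above might look like the following sketch. This is a simplified illustration of the point bands, not our production scoring pipeline:

```python
def utilization_points(events_used: int, contracted: int) -> float:
    """Points for Contract Utilization, using the bands listed above."""
    pct = events_used / contracted * 100
    if pct < 40:
        return 0
    if pct < 60:
        return 2
    if pct < 80:
        return 4
    return 5

def sources_points(num_sources: int) -> float:
    """1 point per distinct source, capped at 5."""
    return min(num_sources, 5)

def destinations_points(num_destinations: int) -> float:
    """0.5 points per distinct destination, capped at 5."""
    return min(0.5 * num_destinations, 5)

print(utilization_points(70_000, 100_000))  # 70% utilization -> 4
print(sources_points(7))                    # capped -> 5
print(destinations_points(6))               # -> 3.0
```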

Engagement

  • Number of Tickets Created: This awarded points based on the number of Foqal tickets we had logged in Slack in the last 30 days. We awarded 0 points for no interaction, 3 points for 1 ticket, and 5 points for >1 ticket.
  • Number of Calls: This awarded points based on the number of Gong calls logged in the last 30 days. We awarded 0 points for no calls, 3 points for 1 call, and 5 points for >1 call.
  • Number of Logins: This looked at the number of unique users who logged in during the last 30 days. We awarded 1 point per unique user with a maximum of 5.
  • Total Users: This is the total number of users registered to an account. We awarded 1 point per user with a maximum of 5.

Each attribute was given a weighting (totaling 100%), and the score was calculated for each individual customer.
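As a minimal sketch of the roll-up, here's how the weighted score could be computed. The weights below are hypothetical (the real weights aren't published here); the only constraint from the text is that they total 100% and each attribute's points run 0–5:

```python
# Hypothetical attribute weights; must sum to 1.0 (i.e., 100%).
WEIGHTS = {
    "contract_utilization": 0.25,
    "num_sources": 0.15,
    "num_destinations": 0.15,
    "pipelines_enabled": 0.15,
    "num_calls": 0.15,
    "num_logins": 0.15,
}

def adoption_score(points: dict) -> float:
    """Weighted sum of 0–5 attribute points; the result is also on a 0–5 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * points.get(k, 0) for k in WEIGHTS)

example = {
    "contract_utilization": 4,
    "num_sources": 5,
    "num_destinations": 3,
    "pipelines_enabled": 5,
    "num_calls": 3,
    "num_logins": 2,
}
print(round(adoption_score(example), 2))  # -> 3.7
```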

With the entire portfolio of customers scored, I then used our rELT product to pipe the score back into the associated Salesforce record.

Once in there, I first exported an opportunity report that included customer name, CSM/TAM, ARR, renewal date, and product adoption score. Initially I did this because TAMs didn’t have easy access to renewal date and status, so I was publishing an Excel sheet for them as a reference for sorting through which low-scoring customers to reach out to.

Once in the Excel report, I built a quick calculation of renewal probability based on a customer’s score. This was based on my own assumptions, but it reflected our experience with churn risk as well as the level of urgency I wanted to spotlight for those customers.

The initial assumptions I made were:

  • 0 to 1.5: 20%
  • 1.5 to 2.0: 40%
  • 2.0 to 2.5: 60%
  • 2.5 to 3.0: 80%
  • 3.0 to 5.0: 100%
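The score-to-probability lookup is a simple step function. A sketch of those assumptions in code (I'm assuming boundary values such as exactly 1.5 fall into the higher band, which the bands above leave ambiguous):

```python
def renewal_probability(score: float) -> float:
    """Map a 0–5 product adoption score to an assumed renewal probability,
    using the initial bands above. Boundary values go to the higher band."""
    if score < 1.5:
        return 0.20
    if score < 2.0:
        return 0.40
    if score < 2.5:
        return 0.60
    if score < 3.0:
        return 0.80
    return 1.00
```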

In the future, I plan to go back, look at historical performance, and determine what the correct probabilities should be. In the near term, it seemed reasonable to treat this as an index — my goal was to assess risk across future quarters, so I’d be looking at the calculated GDR on a relative basis, and absolute accuracy was less relevant initially. I also think the scores may accurately reflect actual risk of account contraction and should be considered, but not taken as gospel for a true forecast.

I wrote a quick formula to apply the probabilities to each renewal and calculate the “Risk Adjusted ARR” per customer, and then calculated the projected GDR across the portfolio of customers per quarter.
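The math above is simple enough to sketch with made-up renewal rows: Risk Adjusted ARR is ARR times renewal probability, and the projected GDR for a quarter is the sum of risk-adjusted ARR divided by the total ARR renewing in that quarter. Customer names, quarters, and numbers here are all hypothetical:

```python
from collections import defaultdict

renewals = [
    # (customer, renewal quarter, ARR, renewal probability from score)
    ("Acme",    "2024-Q1", 100_000, 0.80),
    ("Globex",  "2024-Q1",  50_000, 0.40),
    ("Initech", "2024-Q2", 250_000, 1.00),
]

arr_by_q = defaultdict(float)
risk_adj_by_q = defaultdict(float)
for customer, quarter, arr, prob in renewals:
    arr_by_q[quarter] += arr
    risk_adj_by_q[quarter] += arr * prob  # Risk Adjusted ARR

for quarter in sorted(arr_by_q):
    gdr = risk_adj_by_q[quarter] / arr_by_q[quarter]
    print(f"{quarter}: projected GDR {gdr:.0%}")
```

Run against real opportunity data, the same grouping gives the projected GDR per quarter on a rolling basis.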

Surprisingly, this gave me a projected GDR around 65–70% for each quarter going forward. Historically, our team had been delivering ~85–95% per quarter, so I wanted to understand the difference in the values. My initial hypotheses (to be proven still!) are:

  1. This doesn’t account for sentiment or the actual business value we’re delivering with the product outside of event volume. Our contracts don’t currently have line items for highly sticky products that we are accounting for in our product adoption score.
  2. This includes some customers still in onboarding — if they just signed, they’re appearing in the quarter for next year, but have a low adoption score still.
  3. My probabilities for renewal are too pessimistic and should be higher.
  4. The CS team is very effective at increasing scores and sentiment as we get closer to renewals.

I gave my Excel prototype to our Rev Ops team, who built this into Salesforce and produced a lovely dashboard in Sigma. We’ll begin to track this over time, and it will give me and our Revenue Organization much more visibility into the overall risk of forward-looking renewals and GDR for the company.

Renewal Opportunity Stages and Criteria

In addition to our calculated GDR, I wanted to create some new standards around how we approached the renewal opportunity stages in Salesforce. I redefined the stages as well as the criteria to advance to the next stage.

Originally we had a very simple process with very loosely defined criteria to advance to the next stage:

  1. Renewal Identified: This is autogenerated when a contract first closes
  2. Customer Engaged: This is when a CSM actively looks at the deal and sets up time to discuss with a customer. This is also when a CSM determines if an upsell is possible, and would engage the AE.
  3. Contract Negotiation: This is when we’re in active contract negotiations with a customer and have given an order form.
  4. Contract Close: This is when all signatures are completed for the renewal.

Given our new calculated GDR, I proposed the following process:

  1. Renewal Identified and Calculated: This is autogenerated when a contract first closes and the GDR renewal probability is automatically calculated based on product adoption score.
  2. CSM Renewal Assessment: CSMs should review all open opportunities at least 120 days out from the actual renewal. To move into this stage, the opportunity has been actively reviewed by a CSM, covering the Product Adoption Score as well as customer sentiment/engagement and growth potential. The CSM engages the TAM to strategize on outreach to increase the score, if needed. The CSM engages the AE for any expansion identified. The customer has not yet been engaged to discuss the renewal.
  3. Customer Engaged: The CSM has successfully set a call with a customer or has spoken with the customer regarding usage and the upcoming renewal. If unable to schedule a call, the CSM should initiate “No Response” CTA in Gainsight and change the forecast to “Churn Risk” and flag the account to the AE.
  4. Contract Negotiations: Pricing has been presented to a customer, and/or an order form has been sent out for signature.
  5. Contract Close: Signatures are completed on all paperwork.

In summary, it’s important to use the data available about your customers to try to understand the risk to the business and develop a more structured approach to renewals. This calculated risk-adjusted ARR/GDR approach allows us to target not only the customers with the lowest scores, but those representing the biggest ARR risk to the business. We can become more nuanced in our prioritization: a low-scoring $10K account may be a concern, but a mid-level $250K account could have a bigger impact, and would be overlooked if we went on product adoption score alone.
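To illustrate that prioritization point with hypothetical accounts: ranking by ARR at risk (ARR times the probability of loss) surfaces the mid-level $250K account ahead of the low-scoring $10K one, which a score-only sort would miss:

```python
accounts = [
    # (customer, ARR, renewal probability implied by adoption score)
    ("SmallCo",  10_000, 0.20),  # very low score, small contract
    ("MidCo",   250_000, 0.60),  # middling score, large contract
    ("BigCo",   500_000, 0.85),  # healthy score, very large contract
]

# ARR at risk = ARR * (1 - renewal probability); rank highest first.
ranked = sorted(accounts, key=lambda a: a[1] * (1 - a[2]), reverse=True)
for name, arr, prob in ranked:
    print(f"{name}: ${arr * (1 - prob):,.0f} ARR at risk")
```

MidCo tops the list at $100K of ARR at risk despite SmallCo having the worse score.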

There are few commonly accepted best practices or standards for handling the renewals process, but I believe there’s an opportunity to define an approach analogous to what we’ve seen in Sales. Keep following to see how we’re doing it on my team =)

Next Up: I’ll review CS Best Practices: Deal Scoring to understand how landing our ICP and finding the right kind of deal can impact churn.


Katie West

Customer Success Lead. I write about how to build a CS team from scratch and how to actually use data to manage your growth and team.