How to Use Einstein Prediction Builder for Opportunity Scoring

by Anastasiya Zdzitavetskaya, Director of Product Management at Salesforce

One of the most useful predictions you can create for your business is the likelihood of winning an opportunity (opportunity scoring). The higher the score, the more likely the opportunity is to reach the “Closed Won” stage. Next, you will probably want to know what amount the opportunity will close for, which we cover in a separate post. These predictions matter because they affect three key KPIs: revenue, win rate, and forecast accuracy. This blog describes how to define a use case, gather requirements, think through the problem definition, and set up this prediction with Einstein Prediction Builder.

Defining a Use Case

Let’s look at a sample of the Einstein Use Case Worksheet. The worksheet helps walk you through some key concepts:

  • What questions does your organization need to answer?
  • What’s a good future to aim for?
  • What value are we going to drive?

Gathering Requirements

Now that you have identified your use case, let’s gather more requirements to ensure that you:

  • Build the right solution
  • Identify key stakeholders
  • Verify that you’re collecting relevant metrics and KPIs

Planning Your Prediction

To think through the relevant data to support these use cases, you can use the avocado framework (shown below). It aligns with the steps in the Prediction Builder wizard: you start by selecting an object, then decide whether to focus on a segment, and provide examples for Einstein to learn from.

  1. Dataset — all records on the Opportunity object. Even though the dataset contains ALL opportunities, you can focus on a specific segment of the data and exclude irrelevant opportunities, e.g., those still in the Qualification stage. (Note: you can also use segmentation to focus on a particular type of record, e.g., a prediction for Enterprise opportunities only.)
  2. Positive Examples — Are there data examples that show the behavior you want to find? In the opportunity scoring example, you are looking for won opportunities: those that reached the Closed Won stage.
  3. Negative Examples — Are there data examples that show the opposite behavior? In the opportunity scoring example, these are lost opportunities: those that reached the Closed Lost stage.
  4. Records to Predict/Score — Which records do you not yet know the outcome for but would like to predict? In the opportunity scoring example, these are the opportunities in any other stage; you want to predict which of them are most likely to be won so you can prioritize them (the sketch after this list makes the partition concrete).
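
To make these buckets concrete, here is a minimal Python sketch of the avocado partition. It assumes the standard Closed Won / Closed Lost stage values, and the sample records are made up for illustration:

# Minimal sketch of the avocado partition over Opportunity records.
# Assumes the standard "Closed Won" / "Closed Lost" stage values;
# the sample records below are made up for illustration.
opportunities = [
    {"Name": "Acme - 500 widgets", "StageName": "Closed Won"},
    {"Name": "Globex renewal", "StageName": "Closed Lost"},
    {"Name": "Initech upsell", "StageName": "Quoted"},
    {"Name": "Hooli pilot", "StageName": "New"},
]

positives = [o for o in opportunities if o["StageName"] == "Closed Won"]   # "Yes" examples
negatives = [o for o in opportunities if o["StageName"] == "Closed Lost"]  # "No" examples
to_score = [o for o in opportunities
            if o["StageName"] not in ("Closed Won", "Closed Lost")]        # prediction set

print(len(positives), len(negatives), len(to_score))  # 1 1 2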

Fun fact: if you are wondering why it is called “Avocado”, here is your answer — the image below looks like a ripe avocado with a pit inside.

Every org is a little different, so fill in the worksheet according to what these data buckets look like in your org.

Tip for Defining Your Prediction Set

You do not have to explicitly specify which records to score, since all records remaining in your segment after the example filters are applied automatically become your prediction set:
Segment records − example set = prediction set (records to score)

Be sure to use the data checker in Einstein Prediction Builder to confirm you have the correct number of records, including positive examples, negative examples, and records to score. You can also use reports to verify this if in doubt.
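
If you prefer to verify the counts programmatically rather than with reports, a quick sketch with the simple_salesforce Python package might look like this; the credentials are placeholders, and the filters assume the standard stage values:

# Sketch: count positive examples, negative examples, and records to score
# via the API. The credentials are placeholders; adjust the WHERE clauses
# to match your own segment and example filters.
from simple_salesforce import Salesforce

sf = Salesforce(username="you@example.com", password="your-password",
                security_token="your-token")

def count(where_clause):
    # SELECT COUNT() returns the record count in the "totalSize" field
    return sf.query(f"SELECT COUNT() FROM Opportunity WHERE {where_clause}")["totalSize"]

positives = count("StageName = 'Closed Won'")
negatives = count("StageName = 'Closed Lost'")
to_score = count("StageName NOT IN ('Closed Won', 'Closed Lost')")

print(f"Positive: {positives}, Negative: {negatives}, To score: {to_score}")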

The diagram below illustrates the final setup:

Setting Up Your Prediction with Filters in Einstein Prediction Builder

With the Spring ’20 release of Prediction Builder, you can set up your prediction without an explicitly defined field to predict. For opportunity scoring, we want to predict the likelihood of an opportunity reaching the “Closed Won” stage, but we do not have a checkbox field representing this outcome. In this case, we can use special filters to specify which outcome is considered positive and which is negative.

This is how we can set this up with Prediction Builder:

1. Select the object you’d like to make a prediction on — Opportunity.

2. Define your segment using the filter under “Want to focus on a particular segment in your dataset?”

3. We are answering the question “Will this opportunity be won?”, so we select the “Yes/No” prediction type.

Note: the prediction returns a number that corresponds to the likelihood of winning the opportunity, but it is still considered a Yes/No prediction.

4. We do not have a custom field that stores the “closed won” outcome, but we do have the Stage picklist (with values such as Closed Won, Closed Lost, New, Quoted, etc.), so we select the “No Field” option.

5. Next, we need to define positive and negative examples using the “Yes” example and “No” example filters.

6. Include relevant fields. We recommend including all fields, as you might get some unexpected insights; however, there are a few exceptions, discussed in “Which fields should I include or exclude from my model?” (see the related posts below).

7. Pick the name of the field where your predictions will be stored. This field represents the opportunity score, i.e. the likelihood of winning the opportunity, shown as a number from 0 to 99.

8. Review and build your prediction.

Setting Up Your Prediction in Einstein Prediction Builder with a Custom Formula Field

Alternatively, we can set up opportunity scoring using a formula field. The first method (using the Yes and No example filters) is preferred, since it avoids potential errors when creating a formula field and lets you define all prediction elements within the Prediction Builder UI.

This is how you can define opportunity scoring using a formula field:

  1. Create a custom formula field returning text:

Custom Formula Field: Opportunity Outcome
CASE(StageName, "Closed Won", "TRUE", "Closed Lost", "FALSE", NULL)

  • “TRUE” is returned for positive examples — opportunities in the Closed Won stage.
  • “FALSE” is returned for negative examples — opportunities in the Closed Lost stage.
  • NULL is returned for opportunities in any other stage — these are the records we will score.
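
To sanity-check that the formula labels records as intended, you can group records by the new field. A hedged sketch, assuming the field’s API name is Opportunity_Outcome__c and reusing the sf connection from the earlier sketch:

# Sketch: verify how the formula field labels records. Opportunity_Outcome__c
# is the assumed API name of the formula field; adjust it to your org.
result = sf.query(
    "SELECT Opportunity_Outcome__c outcome, COUNT(Id) n "
    "FROM Opportunity GROUP BY Opportunity_Outcome__c"
)
for row in result["records"]:
    # Expect three buckets: TRUE (positives), FALSE (negatives),
    # and None (the records to score)
    print(row["outcome"], row["n"])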

This is how the setup looks in the avocado framework:

2. In the Prediction Builder wizard, steps 1–3 are the same as above. At step 4, we need to specify that the field to predict already exists.

3. Next, select the new custom formula field “Opportunity Outcome” as the field to predict, and select “Use all records that have a value for Opportunity Outcome”.

4. You can then continue with steps 6–8 listed above.

Next Steps

After you create your prediction, review the scorecard.

If the quality of your prediction is suspiciously high, you most likely have hindsight bias and need to eliminate potential leakers, i.e. fields that are populated only after the outcome is already known. For example, “Reason Lost” is an obvious leaker, since this field only gets populated once the opportunity is lost.

If the quality is too low, you most likely need to include more relevant data. Can you create formula fields to bring in data from related objects? Ask your business experts what data they would need to make this prediction; if it is useful for humans, Prediction Builder can most likely learn from it too. For example, you can add fields showing whether this is a red account, the number of severity 1 cases, the % change in the number of cases, customer success manager and solution engineer sentiment or assessment scores, Account Tier, Account Health Score, average NPS score, lead product, and much more. Read more about prediction quality in the blog post Understanding the Quality of Your Prediction.

To create the next iteration of your prediction, select “Clone” from the dropdown menu; it keeps all your previous settings, so you only need to make small adjustments.

Do not forget to go to the Details tab of your scorecard. Examine your top predictors and validate that they make sense from a business perspective; sometimes you will find surprising insights there. Positive correlation shows which values of the selected fields correspond to a higher chance of winning an opportunity (positive predictive factors), while negative correlation shows which values are associated with a lower win rate (negative predictive factors). Do not be discouraged if the insights are obvious; this only confirms that Prediction Builder is picking up the right patterns in your data.

When you are happy with the quality of your prediction, enable it to get the scores. To see the predicted values, add the predictions field (the opportunity score) to list views and page layouts and, optionally, add the Einstein Predictions Lightning component to the page layout as well.

After a few weeks or months, you will have real-life data and will know which opportunities ended up won or lost. Then you can do a predicted vs actual analysis to understand how your prediction performs on real data, using Salesforce reports or, if you have access to Einstein Analytics, the Accuracy Template AppExchange package we developed for you.
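
For a quick predicted vs actual check without Einstein Analytics, you can export the closed opportunities (score plus final stage) from a report and analyze the CSV. In the sketch below, the column names and the 50-point cutoff are illustrative assumptions:

# Sketch: predicted vs actual analysis on an exported report. The CSV file
# name, the columns "Opportunity Score" (0-99) and "Stage", and the 50-point
# cutoff are all assumptions for the example.
import pandas as pd

df = pd.read_csv("closed_opportunities.csv")  # hypothetical report export
df["actual_won"] = df["Stage"] == "Closed Won"
df["predicted_won"] = df["Opportunity Score"] >= 50

accuracy = (df["actual_won"] == df["predicted_won"]).mean()
print(f"Accuracy at the 50-point cutoff: {accuracy:.1%}")

# A well-calibrated score should show the actual win rate rising with the score
buckets = pd.cut(df["Opportunity Score"], bins=[0, 33, 66, 99], include_lowest=True)
print(df.groupby(buckets, observed=True)["actual_won"].mean())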

Using Predictions in Your Business Processes

Here are some of the ways you can use this prediction:

  1. Create a list view sorted by opportunity score, so sales reps can prioritize the opportunities with the highest likelihood to close. You can also review opportunities in the middle range (your borderline opportunities) and identify steps to get them back on track.
  2. Add the Einstein Predictions Lightning component to the opportunity page layout, so users can see the top predictive factors behind each prediction.
  3. Use Process Builder to automate task creation for prioritized opportunities (a rough API sketch of this automation follows the list).
  4. Add opportunities with low likelihood of closing to the appropriate marketing campaign.
  5. Use Einstein Next Best Action to provide the right recommendations to sales reps for each opportunity, based on the predictions and business rules. You can get an idea of what to recommend from the top positive predictive factors in the scorecard. For example, if organizing an executive briefing is associated with a higher win rate, you can recommend executive briefings for Enterprise accounts while providing a different recommendation for SMB (e.g., an industry webinar).
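
As a rough illustration of what the Process Builder automation from item 3 would do declaratively, here is a hedged API sketch. The 70-point threshold and the field name Opportunity_Score__c are assumptions, and sf is the connection from the earlier sketches:

# Sketch: create follow-up tasks for high-scoring open opportunities, roughly
# what a Process Builder flow would do declaratively. Assumes the prediction
# field's API name is Opportunity_Score__c and an illustrative 70-point cutoff.
high_scorers = sf.query(
    "SELECT Id, OwnerId FROM Opportunity "
    "WHERE Opportunity_Score__c >= 70 AND IsClosed = false"
)["records"]

for opp in high_scorers:
    sf.Task.create({
        "WhatId": opp["Id"],        # link the task to the opportunity
        "OwnerId": opp["OwnerId"],  # assign it to the opportunity owner
        "Subject": "High win likelihood - prioritize this opportunity",
    })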

How to Assess the Effectiveness of Your AI Project

How do you know for sure that the AI project was a success? This is where you go back to your original goal and review your KPIs: Win Rate and Revenue. You can look at year-over-year changes, but the gold standard for assessing any intervention is a control group. For example, you can show opportunity scores only to a small group of salespeople, while the rest continue doing business as usual and serve as your control group (just make sure the groups are comparable and minimize any other external factors that could influence the outcome). If there is an uplift in the KPIs in the opportunity scoring group compared to the control group, congratulations: your AI project has made the world a better place!
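
Once both groups have enough closed opportunities, comparing their win rates is straightforward. A minimal sketch, using made-up numbers and a two-proportion z-test from statsmodels:

# Sketch: compare win rates between the scoring group and the control group.
# The wins/totals below are made-up numbers for illustration.
from statsmodels.stats.proportion import proportions_ztest

wins = [48, 35]      # won opportunities: [scoring group, control group]
totals = [120, 115]  # closed opportunities per group

z_stat, p_value = proportions_ztest(count=wins, nobs=totals)
uplift = wins[0] / totals[0] - wins[1] / totals[1]

print(f"Win-rate uplift: {uplift:.1%} (p-value: {p_value:.3f})")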

If you’d like to review the full process for building and deploying predictions to end users, see this recent post.

For the latest guidance on planning your prediction, check out the official Salesforce documentation.

Related Blog Posts

  1. Introduction to Machine Learning
  2. How to turn your Idea into a Prediction
  3. Einstein Prediction Builder Toolkit
  4. How to Use Einstein Prediction Builder to Predict Opportunity Amounts
  5. Which fields should I include or exclude from my model?
  6. Understanding your Scorecard Metrics
  7. Understanding the Quality of Your Prediction
  8. A Model That’s Too Good to be True
  9. Thinking Through Predictions with Bias in Mind
  10. How do I know if my prediction is working?
  11. Custom Logic on Predictions from Einstein Prediction Builder
