Here’s What You Need to Know About Propensity Modeling

While it’s becoming increasingly commonplace, not all models are created equal.

integrate.ai
the integrate.ai blog
6 min read · Aug 28, 2018


Marketers spend a lot of time talking about the importance of getting the right messages to the right people at the right time. It’s the Holy Grail they’re striving for in an age when hyper-personalization has become mission critical. Of course, pulling it off isn’t easy. That’s particularly true when you consider that many marketers still practice what amounts to a one-size-fits-all, spray-and-pray approach to engaging prospects and customers.

One tool marketers can use to overcome that challenge and drive greater personalization and better business outcomes is propensity modeling. If you’re not familiar with the term, it’s the application of mathematical models to data to try to predict whether someone will take a particular action. In other words, it’s a way to identify who among your audience is most likely to actually make a purchase, accept an offer, or sign up for a service.
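At its core, a propensity model maps a customer’s features to a probability between 0 and 1. As a minimal sketch, here is a pure-Python logistic model with hypothetical features and weights (in practice the weights would be learned from historical conversion data, not hand-picked):

```python
import math

def propensity_score(features, weights, bias):
    """Logistic model: squash a weighted sum of features into a 0-1 probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical customer features: [days_since_last_visit, past_purchases, email_opens]
# with illustrative weights a trained model might produce.
weights = [-0.02, 0.40, 0.15]
bias = -1.0

engaged = propensity_score([5, 4, 10], weights, bias)    # recent, active customer
dormant = propensity_score([180, 0, 0], weights, bias)   # lapsed customer
print(round(engaged, 3), round(dormant, 3))  # the engaged customer scores far higher
```

The output is a ranking you can act on: the engaged customer scores near 0.88, the lapsed one near 0.01, so outreach budget flows to whoever the model says is actually reachable.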

Knowing how likely a person is to engage, and under which circumstances, means that you can focus your resources and efforts on the individuals for whom engagement will generate a meaningful change in behavior. That allows you to target them with very specific personalized products, messages, and offers to give them the nudge they need to pull the trigger.

Propensity modeling dates back to 1983 (and its logical extension, uplift modeling, to 1999), but it’s only in the last few years that machine learning has unlocked its potential. In fact, today most companies with a good data science team and access to the right tools can create comprehensive models on their own (although they rarely include the kind of feedback loop that’s necessary for continuous improvement since that’s usually the product of engineering). And, even if your company lacks that expertise, you probably have access to basic propensity models through your existing CRM and marketing automation platforms.

With enough data you can develop highly accurate propensity scores. Armed with those scores, it’s possible to not only understand the probability that an individual customer will transact, but also estimate what you expect the value of that customer to be.
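Concretely, a simple two-part estimate multiplies the propensity to transact by the expected spend if the transaction happens (both numbers below are hypothetical):

```python
def expected_customer_value(p_transact, avg_order_value):
    """Expected near-term value = probability of transacting x expected spend."""
    return p_transact * avg_order_value

# Two hypothetical customers with the same average basket size:
high = expected_customer_value(0.80, 120.0)  # high-propensity customer, ~96
low = expected_customer_value(0.05, 120.0)   # low-propensity customer, ~6
print(round(high, 2), round(low, 2))
```

Even this crude version makes budget questions concrete: a $10 incentive is easy to justify for the first customer and hard to justify for the second.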

Extending this concept to uplift modeling — which takes propensity modeling a step further by making a comparison of conditional probabilities to convert with and without treatment — you can estimate the “uplift” in ROI as a result of a specific marketing activity, such as using a particular message, offer, or discount. Sophisticated marketing teams can use propensity and uplift modeling to streamline their sales funnel, effectively turning marketing into a finance exercise. Specifically, once they can estimate the value of any given customer, they can treat that customer accordingly to maximize ROI.
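One common way to estimate that comparison is the “two-model” approach: measure the conversion rate among people who received the treatment and among a held-out control group, and take the difference. A sketch with made-up campaign counts:

```python
def conversion_rate(conversions, audience):
    return conversions / audience

def uplift(treated_conv, treated_n, control_conv, control_n):
    """Uplift = P(convert | treated) - P(convert | control)."""
    return (conversion_rate(treated_conv, treated_n)
            - conversion_rate(control_conv, control_n))

# Hypothetical A/B results for one customer segment:
lift = uplift(treated_conv=150, treated_n=1000,   # 15% converted with the offer
              control_conv=100, control_n=1000)   # 10% converted without it
print(round(lift, 4))  # ~0.05 incremental conversions per targeted customer
```

That incremental rate, multiplied by the value of a conversion and netted against the cost of the campaign, is exactly the ROI arithmetic described above.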

Not All Models Are Created Equal

So problem solved, right? Shouldn’t every company be able to target the right people at the right time with the right messages? Well, not quite. The fact is that many companies tend to run into trouble with propensity modeling, and we highlight three main reasons why below.

The first is that the propensity models they use have shortcomings. The ones you can get from your CRM or marketing automation platform, for example, are scalable but they’re not very robust in terms of the quality of the predictions they can make. In fact, they often rely on a small number of basic features that are typically limited to customer data and campaign-specific transaction history, but that overlook broader transaction history and activity data.

Meanwhile, the home-grown varieties that many internal data science teams create aren’t necessarily scalable or robust. The major shortcoming that both of these types of propensity models share, however, is that they’re usually static, meaning that they don’t become more accurate over time as they’re exposed to more data, or adapt to changes in the underlying patterns of that data.

Another reason many companies aren’t successful with propensity modeling is that they either stop at the propensity score itself or don’t act on those scores efficiently. For example, they might model which people are most likely to respond and target only them.

The problem is that some people don’t like being targeted, and others will buy anyway. In the first case you’re actually catalyzing a negative response; in the second, you’re just wasting money. Good marketing teams extend their propensity models to capture the incremental impact of being targeted, and only target those people for whom that incremental response is positive.
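A simple decision rule along these lines (with hypothetical per-customer uplift estimates and contact costs) targets only customers whose estimated incremental response covers the cost of reaching them:

```python
def should_target(estimated_uplift, value_per_conversion, contact_cost):
    """Target only when expected incremental revenue exceeds the cost of the touch."""
    return estimated_uplift * value_per_conversion > contact_cost

# Hypothetical uplift estimates for the three cases described above:
customers = {
    "sure_thing": 0.00,       # buys anyway: targeting adds nothing
    "persuadable": 0.06,      # targeting genuinely changes behavior
    "do_not_disturb": -0.03,  # targeting backfires
}
targets = [name for name, up in customers.items()
           if should_target(up, value_per_conversion=50.0, contact_cost=1.0)]
print(targets)  # only the persuadable customer clears the bar
```

The rule drops both failure modes at once: the sure thing (wasted money) and the negative responder (catalyzed churn) are excluded, and spend concentrates on the persuadables.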

Finally, other companies fall down with execution. They simply fail to adapt their operations to what the models are predicting, so they don’t reap the full extent of the benefits. Many companies aren’t structured to quickly act on a prediction in real time, deliver a dynamic treatment to the customer, reliably measure specific outcomes, or continuously refine predictions and treatments.

Now that we’ve taken a look at some of the reasons why companies aren’t as successful with propensity models as they could be, let’s look at the characteristics that the most effective models have in common.

4 Qualities of Great Propensity Models

For a propensity model to be truly effective, it has to be dynamic, productionized, scalable, and able to demonstrate ROI. Let’s take a closer look at each of those qualities:

  1. Dynamic. Great models change over time as new data becomes available so that they can become smarter, more accurate, and evolve with underlying trends in the data. For that reason, it’s important to have a data pipeline and feedback loop that you can use to retrain your model on a regular basis.
  2. Productionized. Dynamic modeling requires a robust data pipeline for regular data ingestion, retraining and validation, and deployment. Great models will deliver predictions into business processes so they are understandable and actionable (often in real time), and will measure and evaluate model performance over time.
  3. Scalable. At many companies, models are built for use in a single campaign and then abandoned. Alternatively, the company might take the time to build a new model for each of its campaigns. Neither of these options is scalable. Effective propensity models have to be capable of producing large numbers of predictions and they need to be able to be easily adapted across similar scenarios elsewhere in the business.
  4. Demonstrate ROI. The best models don’t stop at propensity. Instead, they help you determine if the return you’ll derive from getting a prospect to take a desired action merits the investment you’d need to make. They should help you optimize your sales and marketing funnel to drive the greatest efficiency possible.

Unfortunately, most models don’t fully meet these criteria and therefore aren’t as effective as they should be. That ultimately inhibits companies from being as successful as they’d like to be.

Model Your Way to Smarter Sales and Marketing

So what should you take away from this post?

First and foremost, be aware that although lots of companies use some form of propensity modeling, not all are doing it as well as they could be. The reality is that propensity modeling is only as good as the end-to-end solution. Powering this solution with machine learning in a way that’s dynamic, productionized, and scalable can deliver huge value lift and lead directly to ROI. Perhaps most important of all, a good model needs to be able to harness huge amounts of data in near real time as part of a continuous feedback loop so that it’s always getting smarter and helping you hone your marketing efforts.

Last, but certainly not least, it’s also critical that your company’s operations are actually equipped to act based on what your models predict. This is a topic that we’ve touched on here and that we will dive into in more detail in a future post.
