How to Build the Personalization Stack

Silvio Palumbo
Mar 11, 2019 · 10 min read

As consumers’ digital footprints continue to grow, companies are increasingly trying to capture the information they yield to provide those consumers with ever-more-personalized offers. Powering a company’s ability to tailor the right message, at the right time, in the right channel, and with a high degree of orchestration, is what’s known as the “personalization stack.”

The ability to build a personalization stack, however, is often beyond a company’s core expertise. Chief marketing officers and marketing directors tend to focus on peripheral issues such as org placement or technological complexity. But because a personalization stack is a diverse and intricate ecosystem of applications and tools, CMOs and marketing directors need to broaden their scope of understanding to ensure that the various technical and functional components necessary to make it run are in place. As long as that ecosystem develops harmoniously, its complexity can, and indeed should, grow over time.

From MarTech Stack to Personalization Stack

When associated with technology, the concept of a “stack” refers to the combination of infrastructure, tools, and automation that work together to deploy applications. The tech stack is where applications run.

The so-called “MarTech stack” refers to the different layers of applications and channels required to run campaigns and marketing initiatives. It’s where marketing runs.

The “personalization stack” is a very specific form of the marketing execution layer, tailored towards analytics-driven initiatives that touch known customers on an individual level. It’s where personalized marketing runs.

Companies’ marketing organizations use the personalization stack to answer three critical questions:

1. Who are we targeting? It enables us to positively identify customers so we can track them over time.

Example: Jane Doe makes purchases online as a guest for two months before signing up for a loyalty program. She then gets married and changes her name to Jane Smith; she keeps her phone number and physical address but changes her email address. She makes purchases not just for herself but for her family as well.

Once a customer like Jane is known, we can market to her by tying her behavior to the same unique ID, all while retaining the richness of her different roles (e.g., purchaser for herself and for others, decider, influencer, etc.). (See Figure 1)
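Tying Jane’s guest purchases and loyalty account to one unique ID is an identity-resolution problem. Below is a minimal, rule-based sketch: two records are linked when they share a stable identifier (here, phone number or physical address), even if name and email have changed. All field names are illustrative assumptions, not the schema of any specific platform, and real systems use far more robust probabilistic matching.

```python
def same_customer(a, b):
    """Link records that share at least one stable identifier."""
    return a["phone"] == b["phone"] or a["address"] == b["address"]

def resolve_ids(records):
    """Assign a unified customer ID to each record (naive O(n^2) pass)."""
    clusters = []     # each cluster: list of records linked by an identifier
    assignment = {}   # record_id -> cluster index (the unified customer ID)
    for rec in records:
        for idx, cluster in enumerate(clusters):
            if any(same_customer(rec, other) for other in cluster):
                cluster.append(rec)
                assignment[rec["record_id"]] = idx
                break
        else:
            clusters.append([rec])
            assignment[rec["record_id"]] = len(clusters) - 1
    return assignment
```

With this sketch, a guest-checkout record and a later loyalty record that share Jane’s phone number resolve to the same unified ID, while unrelated shoppers stay separate.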

2. What touchpoints do we need to use? It enables us to choose the most effective touchpoints (be they incentive-based or not) across time and channels in order to maximize the marketing return generated by each customer ID.

Example: Jane Smith purchases infrequently, typically during the first week of the month. She opens some of her offer emails, but not all of them, and hardly ever responds to discounts on products she hasn’t purchased before.

We want to target Jane immediately prior to her most likely purchase episodes, and steer her towards an offer for a new item that includes products she’s previously purchased (e.g., buy items A & B and unlock a discount for item C).
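The targeting logic just described can be sketched in a few lines: contact Jane only near her typical purchase window, with a bundle offer anchored in products she already buys. The product IDs, the one-week window, and the rule shape are assumptions for illustration only.

```python
import datetime

def in_purchase_window(today, window_days=7):
    """True during the first `window_days` days of the month."""
    return today.day <= window_days

def bundle_offer(purchase_history, anchor_items, new_item):
    """Offer 'buy the anchor items, unlock a discount on a new item',
    but only when the anchors are products the customer already buys."""
    if anchor_items <= purchase_history and new_item not in purchase_history:
        return {"buy": sorted(anchor_items), "discounted": new_item}
    return None
```

For example, a customer who has bought items A, B, and D would qualify for “buy A & B, unlock a discount on C,” while a customer who has only bought A would not.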

3. How should we deliver those touchpoints? It enables us to physically and/or digitally produce and deliver the touchpoints we’ve identified, with the level of speed, customization, and dynamism needed to get the most out of each channel and approach.

Example: Jane reads email haphazardly but frequently browses online, indicating intent to buy.

We’d want to time an offer to Jane based on her browsing behavior, and would ideally choose a dynamic banner ad that’s triggered only after a 5-second impression on a product detail page.
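That trigger condition is simple to express. Here is a toy sketch of the rule: fire the dynamic banner only for product-detail-page impressions with at least 5 seconds of dwell time. The event field names and the threshold are illustrative assumptions.

```python
DWELL_THRESHOLD_S = 5.0  # assumed minimum dwell time before triggering

def should_trigger_banner(event):
    """Trigger only on product-detail-page impressions with enough dwell time."""
    return (
        event.get("page_type") == "product_detail"
        and event.get("dwell_seconds", 0.0) >= DWELL_THRESHOLD_S
    )
```

In a real stack this check would live in a streaming or tag-management layer, evaluating browsing events as they arrive rather than as static dictionaries.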

As you can see, the complexity increases exponentially as we progress from segmented to personalized marketing. There is more data to process and interpret, and more iterations needed to derive the next best action.

A technical journey for building a next-best-action platform

To determine the next best action to take in a personalized marketing plan, three core components are necessary: the input layer, the inference layer, and the intelligence layer. (See Figure 2).

The Input Layer

Inputs are pieces of information about the customer and the product. We want to collate every relevant piece of information about a customer’s past and current behavior, with clarity around the journey over time (including the disambiguation and de-duplication of IDs) and every touchpoint along the way (whether they involve inquiries, browsing, purchasing, etc.). We also need a well-structured view of the product hierarchy, ideally enriched with SKU-level profitability (not just price) and product features/descriptors that can be used to run lookalike and clustering techniques.

This layer captures raw data but also transformed data coming from other components of the personalization stack. For example, it can capture a customer table with basic ID info (unprocessed data like personally identifying information) enriched by processed data such as individual responses from prior campaigns and predictive scores of individual propensities (to buy, to engage, to fade, to trade-up, etc.).
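A minimal sketch of that enrichment step, assuming simple dictionaries keyed by customer ID: raw ID info is joined with processed signals such as campaign responses and model propensity scores. The keys and score names are invented for the example.

```python
def enrich_customers(raw, responses, propensities):
    """Join a raw customer table with processed signals keyed by customer ID."""
    enriched = {}
    for cid, info in raw.items():
        enriched[cid] = {
            **info,                                        # unprocessed ID info
            "campaign_responses": responses.get(cid, []),  # processed: past responses
            "propensity": propensities.get(cid, {}),       # processed: model scores
        }
    return enriched
```

At scale this join would run in a warehouse or feature store rather than in application code, but the shape of the output (raw identity plus derived signals per customer) is the same.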

The Inference Layer

An inference is a conclusion informed by past behavior and/or a prediction of future behavior: for example, the recognition that Jane Doe and Jane Smith are the same person, her lifetime value or progress within a specific journey, or the belief that she’ll buy product X in two weeks but is unlikely to open emails within the next month.

The last one is the realm of machine learning applications, in a sub-layer known as the ML layer.

Now, is machine learning a form of intelligence or a simple form of inference? Semantics aside, machine learning in isolation is hardly prescriptive and doesn’t fully inform the next best action. We’d need more elements of the personalization stack to surface prescriptive recommendations (and I’m generalizing here, for simplicity).
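To make the ML sub-layer concrete, here is a toy propensity score: a logistic function over a few behavioral features. The features and weights are invented for illustration; in practice they would be learned from historical response data, not hand-set.

```python
import math

# Assumed, hand-set weights for illustration; a real model learns these.
WEIGHTS = {"visits_last_30d": 0.15, "days_since_purchase": -0.04, "email_opens": 0.10}
BIAS = -1.0

def purchase_propensity(features):
    """Logistic score in (0, 1): higher means more likely to buy soon."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Note what this score does and doesn’t do: it predicts a behavior (inference), but by itself it prescribes nothing. Turning it into an action requires the layers that follow.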

The Intelligence Layer

Intelligence is the ability to prescribe a specific action in order to maximize a desired behavior. That prescription, or “next best action,” is ideally in the form of a tactic (e.g., offer a discount on a bundled purchase), a context (e.g., after a 5-second ad impression), and a channel (e.g., online, followed by a reminder email). Some non-obvious considerations:

>> Marketing ideates the tactics. The more codified, modular, and structured those tactics are, the more tractable the optimization problem becomes.

>> A basic next best action is to rank each possible tactic by expected value: the probability of conversion multiplied by the profit generated by the purchase. Ideally this ranking is computed for each customer, over each channel, and across each context. However, that can easily yield an unmanageable number of permutations, which requires many simplifying assumptions.

>> The most compelling optimization approach is ongoing experimentation. Machine learning can’t optimize every permutation algorithmically. The secret sauce is iterative controlled experiments — think ongoing A/B tests on steroids. And the experiments require some form of segmentation (e.g., group A, which receives three levels of discounts, is then compared to control group B, after which the winning discount level is selected). Yes, to personalize we still run segmented experiments. That’s the irony of statistics.

>> Business rules play a very significant role. Examples include eligibility, anti-conflict, and anti-repetition rules that often override the original ranking by expected value. As the business rules layer becomes more sophisticated, modularity, UI, and performance at scale become more mission-critical than refining the ML layer.

If this sounds complicated, that’s because it is. Personalization is not just about any differentiated touchpoints; it’s about the optimal differentiated touchpoints for each customer, at each point in time. Optimal implies maximizing an objective function (e.g., lift, conversion, spend). To that end, marketers need to concurrently deploy strong inference (ML and features), rigorous experimentation, and a reliable business rules layer.
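A toy sketch of how two of these intelligence-layer pieces fit together: rank candidate tactics by expected value (conversion probability times profit), then let simple business rules (eligibility, anti-repetition) override the ranking. The tactic names and rule fields are illustrative assumptions.

```python
def next_best_action(tactics, recent_tactics, eligible):
    """Return the highest-expected-value tactic that survives the business rules."""
    ranked = sorted(
        tactics,
        key=lambda t: t["p_conversion"] * t["profit"],  # expected value
        reverse=True,
    )
    for tactic in ranked:
        if tactic["name"] in recent_tactics:   # anti-repetition rule
            continue
        if tactic["name"] not in eligible:     # eligibility rule
            continue
        return tactic["name"]
    return None
```

Even in this stripped-down form, the rules can override the raw ranking: a tactic the customer just received falls out, and the second-best expected value wins.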

The Execution Layer

Once we have defined the next best action (who we are targeting and what touchpoints we need to use), we’ll need to figure out how we are going to deliver those personalized touchpoints. This is also what’s known as the marketing execution layer. The main ingredients are:

>> An offer schema that translates the marketing tactics into a structure that a platform can ingest: for example, a table with offer type, duration, level of discount, number of hurdles and/or steps, level of difficulty, etc.

>> A formula for creating the offer, such as code that takes a customer’s information and converts it into the data schema described above

>> A configurable workflow engine to create customer journeys (e.g., “If customer does X, send offer Y; if the customer takes no action for two days, send a reminder email”)

>> A workflow monitor that checks the status of each live workflow (e.g., “Has customer A received the offer? Has customer A made a purchase? Is it time to send a reminder email?”); the more complex and contextual the workflow, the more advanced and real-time the state monitor engine needs to be

>> Ways to surface the action, such as an API, digital assets, an electronic service provider, etc.
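A compact sketch of three of the ingredients above, under assumed field names and thresholds: an offer schema a platform could ingest, a formula that builds an offer from customer data, and a workflow-monitor step implementing “if no action for two days, send a reminder.”

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """Offer schema: the structure a delivery platform would ingest."""
    offer_type: str
    duration_days: int
    discount_pct: int
    hurdles: int

def build_offer(customer):
    """Formula: translate customer data into the schema above.
    (Assumed rule: deeper discount for lower-propensity customers.)"""
    discount = 20 if customer["propensity_to_buy"] < 0.3 else 10
    return Offer("bundle", duration_days=14, discount_pct=discount, hurdles=2)

def next_workflow_step(state):
    """Workflow monitor: decide the next action for one live journey."""
    if state["purchased"]:
        return "close_journey"
    if state["days_since_offer"] >= 2 and not state["reminder_sent"]:
        return "send_reminder_email"
    return "wait"
```

In a production stack the workflow engine would be a configurable product evaluating thousands of live journeys in real time; the point here is only the division of labor between schema, offer formula, and state monitor.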

To contextualize, below is a logical flow that shows how each component is activated in the process of understanding the customer, defining the next best action, and building/surfacing that action. (See Figure 3).

Reality vs. Perception

We surveyed organizations across different industries about their current personalization efforts, in terms of progress, challenges, and aspirations.

The self-assessment was conducted across four pillars, touching on both hard and soft skills: Strategy, Analytics, Technology, and Ways of Working.

On average, organizations perceive themselves as somewhat analytically ready (2.7 in Data & Analytics) but lagging in the thinking and strategic alignment needed to push personalization (only 1.8 in Strategy and Use Cases). This does not align with our experience, and it signals a lack of understanding of the complexity and technical depth required.

De-averaging across industries does not change the picture, but it is telling that tech companies lead by a big margin: they are better suited to developing a personalization stack, which is a key enabler of execution.

Based on our survey, 69% of respondents expect at least a 6% revenue lift from personalization, making it a clear priority and a large investment ($5–9M annually is the median range). Underestimating the complexity of developing 1:1 marketing capabilities leads to budget overruns, disappointed stakeholders, and internal frustration. The most recurring internal hurdle is the ability to “prove value fast.”

When tasked with identifying key gaps, most organizations point to technical gaps; lack of analytical talent ranks only seventh among major barriers. (See Figure 5)

It’s a fair assessment, but practitioners in the field would supplement it with other considerations:

>> Adapting the ways of working is a priority. Building a multi-functional team does not by itself guarantee effective integration, and many organizations underestimate the complexity of running large programs at the intersection of analytics and marketing technology.

>> Experiments are a core pillar of data-driven marketing, not opportunity costs. Machine learning layers require rigorous test-and-learn to converge on an optimal answer, and it’s an ongoing process, not a one-off exercise. Less than a quarter of the organizations we surveyed believe they’re running ongoing experimentation.

>> Embracing personalized marketing is a journey. The personalization stack requires the co-existence of different modules, but complexity should grow over time: first, because it pays to start small, prove results, then scale; second, because experimentation delivers ever-increasing results over several months, and the analytical roadmap easily spans years.

Closing considerations

Sponsors of personalization programs should understand the cross-functional nature of building and deploying 1:1 marketing capabilities. The business considerations that drive funding, approval, and impact goals include:

>> Several building blocks need to be in place concurrently. Understanding the ecosystem, planning investments accordingly, and embracing personalization as a journey, not a big bang approach, is key.

>> Personalization is a cross-functional effort. Marketing defines the tactics, data science defines the optimization routines, digital/IT build the pipes and the architecture, system integrators and contractors deploy an ecosystem of applications and tools. To accommodate a truly collective effort within the organization, ways of working need to change.

>> Fully integrated platforms in the market don’t cover the entire personalization stack. While many vendors can cover portions of the stack, it’s an integration effort with a high degree of complex custom coding. The emphasis should be on retaining the IP that really matters (ask yourself, what’s the secret sauce of my personalized marketing?), internalizing some of the skillsets, and relying on applications/vendors to execute the rest.

>> Talent is the real shortage. Few organizations can attract the level of talent required to execute every aspect of optimized 1:1 marketing; most find it challenging to fill data science, data engineering, and marketing analytics/technology roles. For additional thoughts on this topic, see my article, “Building an Analytics Organization for Your Personalization Program.”

And finally, the analytics journey is extremely complex — and fairly sequential — in nature. On that note, I will follow up with a separate article on the nuances of approaching 1:1 marketing as a tractable data science problem, especially around “cold start” and realistic timelines to embrace advanced AI-driven marketing.


GAMMAscope - The Blog