Data Science Can’t Replace Human Marketers Just Yet — Here’s Why

Kim Larsen
6 min read · May 31, 2018


With the surge of data science and automation in marketing, it seems like growth marketers should start polishing their resumes and looking for new lines of work. Tasks such as finding the most responsive people to target and bidding on keywords can be performed cheaper and better by statistical models and algorithms.

Nonetheless, we still need humans. Because when it comes to higher level decision-making — i.e., setting targets and throttling spend across various channels to hit those targets — there’s no single dashboard, research, or attribution model that can tell us exactly what to do. All models are wrong about reality precisely because they are just models. In order to make the bet, you have to look at a wide array of inputs, condense them, and apply gut feel.

So which analytical inputs are key for optimizing and planning growth marketing spend, and what are the pros and cons? I think that you need at least three types of inputs to make the most informed bets:

  • Experimentation
  • Bottom-up attribution models
  • Top-down time series models

In this post, I’ll provide a high-level view of these inputs and cover their respective pros and cons.

Experimentation

When evaluating the health of a company, one of the common metrics is CAC (customer acquisition cost), which is simply the money spent on acquiring customers during a given period divided by the number of customers acquired. If we think about it, this is not really a good metric. For example, what if marketing didn’t cause any new customers to join and all new customers joined because of word of mouth or self-selection? In this (admittedly extreme) example, CAC might look healthy but we’re totally wasting the marketing budget ($0 return).
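To make the arithmetic concrete, here’s a tiny illustration in Python; the spend and customer counts are purely hypothetical, and the point is only that a healthy-looking blended CAC says nothing about whether marketing caused any of the acquisitions:

```python
# Hypothetical numbers: blended CAC is just spend divided by acquisitions.
marketing_spend = 500_000   # spend for the period, in dollars
new_customers = 2_500       # all acquisitions in the period, regardless of cause

blended_cac = marketing_spend / new_customers
print(f"Blended CAC: ${blended_cac:.2f}")  # $200.00 even if every one of these
                                           # customers would have joined anyway
```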

So how do we find out how much marketing is affecting acquisition above and beyond the natural inflow of new customers? The best way to isolate the impact of growth marketing is to run an experiment where the “control group” is exposed to business-as-usual marketing and the “test group” gets no marketing. This could be an A/B test at the individual level or a matched-market test.
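Here’s a rough sketch of how such a holdout experiment gets read out; all group sizes and counts below are made up. The no-marketing test group estimates the natural inflow, and the gap versus the business-as-usual control group is marketing’s incremental contribution:

```python
# Hypothetical holdout readout: the control group gets business-as-usual
# marketing, the test group gets none, so the test group estimates the
# natural (organic) acquisition rate.
control_users, control_signups = 100_000, 2_000   # business-as-usual marketing
test_users, test_signups = 100_000, 1_600         # no marketing

organic_rate = test_signups / test_users
incremental_rate = control_signups / control_users - organic_rate
incremental_customers = incremental_rate * control_users

# Incremental CAC divides spend by the customers marketing actually caused,
# which is the honest version of the blended CAC above.
spend = 500_000
print(f"Incremental customers: {incremental_customers:.0f}")
print(f"Incremental CAC: ${spend / incremental_customers:.2f}")
```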

Perhaps even more important for day-to-day decision making, experiments can also be used to measure the marginal impact of marketing which provides another data point from the underlying marketing response curve (see illustrative example below).

Here’s a quick overview of different types of experiments:

Matched market testing

The idea here is simple: select a group of markets that get exposed to the “treatment” and then compare acquisition to a synthetic control group of similar markets. This provides a flexible testing environment where we can test hypotheses that require coordination across disparate channels. The downside is that it’s not as easy as it sounds: you typically need more markets than you think to reliably detect a difference, and it can be hard to coordinate the test and keep the control markets clean. Also, most customers are typically concentrated in the largest 20–30 metro areas, which limits the number of tests that can be run concurrently.
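Here’s a minimal sketch of what such a readout could look like, assuming weekly acquisition counts per market; the simulated data, market counts, and the simple least-squares weighting are stand-ins for a proper synthetic-control method:

```python
import numpy as np

# Made-up weekly acquisition counts: rows are weeks, columns are markets.
rng = np.random.default_rng(0)
pre_weeks, post_weeks, n_control = 26, 8, 12

control_pre  = rng.poisson(100, size=(pre_weeks, n_control)).astype(float)
control_post = rng.poisson(100, size=(post_weeks, n_control)).astype(float)
treated_pre  = control_pre.mean(axis=1) + rng.normal(0, 5, pre_weeks)
treated_post = control_post.mean(axis=1) + 15 + rng.normal(0, 5, post_weeks)  # +15 = true lift

# Fit weights on the pre-period so the control markets track the treated
# markets, then project that synthetic control into the test period.
weights, *_ = np.linalg.lstsq(control_pre, treated_pre, rcond=None)
synthetic_post = control_post @ weights

lift = treated_post - synthetic_post
print(f"Estimated incremental acquisitions per week: {lift.mean():.1f}")
```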

Channel level A/B testing

Some channels (e.g., Facebook) provide the ability to run A/B tests at the individual level. This is a more precise testing methodology than matched markets because you don’t have to deal with market bias (matching is never perfect) and post-hoc causal inference modeling. However, you’re limited to testing the impact of a single channel — controlling for natural acquisition and impact from other channels — which is still an incredibly useful test!
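As a sketch of how a single-channel lift test might be evaluated, the snippet below runs a two-proportion z-test on an exposed group versus a randomized holdout; the counts and the use of statsmodels are illustrative assumptions, not how any particular platform computes lift internally:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical exposed vs. randomized holdout groups for a single channel.
exposed_signups, exposed_users = 1_250, 500_000
holdout_signups, holdout_users = 1_100, 500_000

stat, p_value = proportions_ztest(
    count=[exposed_signups, holdout_signups],
    nobs=[exposed_users, holdout_users],
    alternative="larger",  # one-sided: did exposure increase conversion?
)
lift = exposed_signups / exposed_users - holdout_signups / holdout_users
print(f"Incremental conversion rate from this channel: {lift:.4%} (p = {p_value:.3f})")
```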

Bottom-up attribution models

The most prevalent solution out there is the infamous last click attribution logic, which is wrong for so many obvious reasons. But it does have some nice properties: (a) it’s simple and transparent, (b) it creates mutually exclusive “channels,” which is convenient for cohort analyses, and (c) since last click attribution is consistently wrong, it can be used for granular response tracking as long as we ignore the actual scale of the Y-axis.
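For illustration, here is what last click attribution boils down to in code; the users, channels, and timestamps are invented, and the point is simply that whatever was touched last (often branded search) soaks up all the credit:

```python
import pandas as pd

# Invented touchpoint data: every conversion gets 100% of the credit assigned
# to the most recent touch before it, which is all last click attribution does.
touches = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2],
    "channel":   ["display", "organic", "branded_search", "facebook", "branded_search"],
    "timestamp": pd.to_datetime(
        ["2018-05-01", "2018-05-03", "2018-05-04", "2018-05-02", "2018-05-05"]),
})
conversion_time = {1: pd.Timestamp("2018-05-05"), 2: pd.Timestamp("2018-05-06")}

eligible = touches[
    touches.apply(lambda r: r["timestamp"] <= conversion_time[r["user_id"]], axis=1)
]
last_click = eligible.sort_values("timestamp").groupby("user_id").last()["channel"]
print(last_click.value_counts())  # branded_search captures all the credit
```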

Multi-touch attribution (MTA) — which tries to deal with many of the shortcomings of its disgraced last click counterpart — is often billed as the savior of measurement. It’s not. Not yet at least. In reality, these models are rarely implemented into the regular decision framework and are more likely to end up on the shelf.

Now, the basic idea behind MTA is to assign partial credit to impressions using rules-based or data-driven (ML-based) weights, and to factor in time decay. On the surface this sounds like a completely reasonable mental model, so why do these models tend to end up on the shelf? The reasons are twofold:

  • Collecting the full dataset of impressions across all key channels is difficult, due in part to the various walled gardens of data (for example, but not limited to, Facebook and Google).
  • It’s not unlikely that your first pass is a nonsensical black box. Recovering from a cold start can be hard when it comes to model adoption.

Both Facebook and Google have launched products that allow you to analyze the data behind the wall and combine that data with impressions from other channels in a neutral database (through special tags). This is an encouraging development, although you’d never be able to extract or see the raw data behind the wall.

Having said all this, I think that having some sort of simple MTA is a good idea. At the very least, it’ll help overcome the gross overattribution of demand-capturing channels (such as branded paid search) that is common when assigning credit to the last click, and it can factor in the time-decay of impressions. Just set realistic expectations of what MTA can do for you and don’t let perfect be the enemy of good.
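As one example of a “simple MTA,” the sketch below splits a conversion’s credit across all prior touchpoints with an exponential time decay; the 7-day half-life, the data, and the normalization are all assumptions chosen purely for illustration:

```python
import pandas as pd

# A deliberately simple multi-touch sketch: each prior touch gets a weight of
# 0.5 ** (days before conversion / half-life), normalized to sum to 1.
HALF_LIFE_DAYS = 7.0  # assumed half-life; tune to your own data

def time_decay_credit(touch_times: pd.Series, conversion_time: pd.Timestamp) -> pd.Series:
    days_before = (conversion_time - touch_times).dt.total_seconds() / 86_400
    weights = 0.5 ** (days_before / HALF_LIFE_DAYS)
    return weights / weights.sum()

touches = pd.DataFrame({
    "channel":   ["display", "facebook", "branded_search"],
    "timestamp": pd.to_datetime(["2018-05-01", "2018-05-08", "2018-05-10"]),
})
touches["credit"] = time_decay_credit(touches["timestamp"], pd.Timestamp("2018-05-10"))
print(touches)  # branded_search still gets the most credit, but no longer all of it
```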

Top-down time series models

These models are often known as Media Mix Models (MMM), but they’re really just explanatory time series models based on weekly or daily acquisition volume (or some other metric). The basic idea is to take a bird’s eye view (top-down) of the relationship between marketing and acquisition instead of trying to match new customers to individual impressions or clicks. Specifically, MMMs leverage time series regression to quantify the causal relationship between marketing and acquisition — controlling for seasonality, trend, and other internal and external factors.

This approach has a number of key benefits:

  • The model does not rely on individual click or impression data, and we can include offline and online levers in the same model.
  • Since it’s a time series model, we can use the model to guess how much we need to spend, say, next month, to hit the acquisition target. We can also use this model to set realistic targets to begin with.
  • Using non-linear transformations in the regression, we can get a sense of diminishing returns at different levels of spend. Essentially, we can use these models to approximate the response curves (see illustration above, and the rough sketch after this list).
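To make that concrete, here is a toy sketch of a media mix regression on simulated weekly data; the geometric adstock decay, the log transform for saturation, and plain OLS are simplifying assumptions, and real MMMs are considerably more involved:

```python
import numpy as np
import statsmodels.api as sm

# Toy weekly data (two years). Everything here is simulated for illustration.
rng = np.random.default_rng(1)
weeks = 104
week_idx = np.arange(weeks)

spend = rng.gamma(shape=2.0, scale=50_000, size=weeks)   # weekly spend on one channel
seasonality = 10 * np.sin(2 * np.pi * week_idx / 52)     # yearly cycle
trend = 0.5 * week_idx                                   # slow organic growth

def adstock(x, decay=0.5):
    """Geometric carry-over: this week's effective spend includes decayed past spend."""
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

effective_spend = np.log1p(adstock(spend))               # log1p approximates diminishing returns

# Simulated "true" acquisition process, then a regression to recover it.
acquisitions = 200 + 30 * effective_spend + seasonality + trend + rng.normal(0, 10, weeks)
X = sm.add_constant(np.column_stack([effective_spend, seasonality, trend]))
fit = sm.OLS(acquisitions, X).fit()
print(fit.params)  # the coefficient on effective spend (~30) is marketing's contribution
```

The same fitted transformation of adstocked spend is what lets you read off diminishing returns at different spend levels, which is how these models approximate the response curves.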

While this sounds exciting, media mix models — as with anything that involves uncertainty — also come with a list of problems and limitations:

  • The models tend to be fickle and unstable. I would not trust these models without any experiment-based results to back up the key insights provided by the model.
  • Despite efforts to control for non-marketing factors, you have no guarantee that the model can distinguish correlation from causation.
  • The level of granularity you can get with a model like this is limited — you can only stuff so many variables into these regression models.
  • These models typically need at least 1–2 years of data, which can lead to stale response curves based on data that reflect outdated strategies.

Putting it all together

The key message of this post is simple: when optimizing growth marketing spend across channels, don’t look for a single model or tool to provide the answers. Whether you like it or not, a multitude of models and test results need to be considered, and they may be inconsistent.

Here’s a brief summary of the most critical inputs:

  • Experimentation (matched-market tests and channel-level A/B tests): the cleanest way to isolate incremental impact, but limited in how many tests you can run and what they can cover.
  • Bottom-up attribution models (last click, simple MTA): granular and useful for day-to-day response tracking, but prone to over-crediting demand-capturing channels and constrained by walled gardens.
  • Top-down time series models (MMM): useful for setting targets, planning spend, and approximating response curves, but fickle and dependent on 1–2 years of historical data.

For more details on these, this post by Jai Ranganathan is a good read.
