Humans are predictable and that’s an ethical problem

A mastery of big data can tell marketers what is possible but they will still have to decide for themselves the right thing to do.

Today, the world is full of data, but insight does not scale with it. And despite the promises made on its behalf, big data is not as scary as the vendors of complex software would have you believe.

This is because human beings are depressingly predictable in their behaviours. Yet, the more we understand humans through the analysis of their data, the more we enter the complex world of ethics — something I am not sure the industry is ready for yet.

Predictable humans

In his book How Brands Grow, Byron Sharp describes law-like regularities in marketing. Data analysts know this better than most. There are two behavioural laws, each an accurate predictor of future behaviour, that every marketer should know.

The first law is: the more you do something, the more likely you are to do it again. A study of 12th graders in the US identified that 5 per cent had taken cocaine, but more than 50 per cent of those who had taken it once or twice went on to take it a third time.

According to a study in Psychology Today, 25 per cent of respondents reported having had an extramarital affair; of those, 77 per cent had gone on to have another, and 85 per cent of those a third.

The recidivism rate within a year for those without previous convictions is 15 per cent, but 28 per cent for those with one or two previous convictions. This pattern appears everywhere.

In marketing, we know that the more times you visit a website or use an app, the more likely you are to reuse it (Fig.1). The same is true for customer purchases — the repurchase rate increases based on how many previous purchases a customer has made.
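The first law can be sketched in a few lines of code. This is a minimal illustration on invented data: the purchase_counts list and the repurchase_rate helper are hypothetical, not taken from any real customer file, but they show the shape of the pattern the law describes.

```python
# Toy data: total purchases per customer. In practice this would come
# from an order history; these numbers are invented for illustration.
purchase_counts = [1] * 10 + [2] * 4 + [3] * 2 + [4] * 1 + [5] * 3

def repurchase_rate(counts, n):
    """Of customers who made at least n purchases, what share made n + 1?"""
    made_n = sum(1 for c in counts if c >= n)
    made_more = sum(1 for c in counts if c >= n + 1)
    return made_more / made_n if made_n else 0.0

# On this toy data the rate climbs with each prior purchase:
# 0.50, 0.60, 0.67, 0.75 -- the more you do it, the likelier you do it again.
for n in range(1, 5):
    print(n, round(repurchase_rate(purchase_counts, n), 2))
```

The same conditional-rate calculation, run against real transaction logs, is how an analyst would verify the law for a given brand.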

At a high level, this tells us there are probably too many loyalty programmes in the world, discounting purchases that were highly likely to occur anyway, when brands should instead be focusing on welcome strategies for new customers.

It is striking that companies cannot explain the impact of their loyalty programmes in their annual reports and, instead, cite member numbers and some take on the Pareto principle, as if that had any bearing on incremental value.

The second law is: the longer you leave it, the less likely it is to happen. Not surprisingly, the longer it has been since your last jail sentence, the less likely you are to reoffend. Similarly, in marketing, the longer it has been since a purchase, the less likely there will be a subsequent one.

Data is extremely perishable, and so are customer propensities to purchase. The best time to re-engage customers is immediately after a purchase. A prospect's propensity to convert also diminishes with time (Fig. 2). One prospective client had an adstock half-life (the time it takes for an ad's impact to halve) of seven minutes. I am not sure if he was joking.

Given this is such a well-known rule, why are so few companies organised to exploit it? In an always-on world, you can argue about whether the purchase funnel is still linear, but it is harder to deny that the time from awareness to purchase has fundamentally collapsed.
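The second law can be sketched as an exponential decay. The seven-minute figure comes from the anecdote above, but the exponential form and the adstock_weight function below are illustrative modelling assumptions, not the client's actual model:

```python
def adstock_weight(minutes_since_exposure, half_life_minutes=7.0):
    """Remaining impact of an ad impression, assuming the impact halves
    every half_life_minutes (a standard exponential-decay assumption)."""
    return 0.5 ** (minutes_since_exposure / half_life_minutes)

print(adstock_weight(0))   # full impact at the moment of exposure: 1.0
print(adstock_weight(7))   # one half-life later: 0.5
print(adstock_weight(60))  # under an hour later, almost nothing remains
```

With a half-life this short, a response that arrives even an hour late has forfeited nearly all of the ad's effect, which is the organisational point of the paragraph above.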

Neither of these laws should be revelatory to a marketer — they have been known for 50 years. The rise of big data brings little new learning to contradict these fundamentals. But what happens when our mastery of data leads us astray? There comes a point where what we could do with customer data starts to diverge from what we should do. This is not a statistical question, but an ethical one.

Algorithms have no ethics

The London-based Swiss artists !Mediengruppe Bitnik’s recent art project, entitled “The Random Darknet Shopper”, is a case in point. A bot (a software application) purchased a random item from the darknet every week to the value of $100, and these items were displayed in a gallery in St Gallen, Switzerland. Items included ecstasy tablets, a fake Hungarian passport and fake designer goods.

The day after the exhibition ended, the bot was confiscated by the police. If bots are buying illegal goods, does that make the programmer culpable? Or the agency they work for? Or the agency’s client? This is new territory.

In a similar vein, much has been written about the algorithm responsible for choosing the cover photos on Facebook’s “Year in Review”. In a number of cases, people were faced with a cover that depicted deceased loved ones or their property on fire. While this might be a minority of cases, the consequences are serious. Algorithms are only as smart as the people who program them. And guess what? The people who program algorithms are not the greatest marketers.

Public opinion is shifting

We are at a delicate point in the evolution of analytics in marketing. What we can do and what we should do are not the same thing.

Public opinion on data privacy is forever shifting. One week, we bemoan GCHQ for monitoring private e-mails; the next, the government is criticised for failing to spot three teenage girls leaving for Syria. We want our search engines, smartphones and social networks to remember what we did and personalise our experiences. But we don't like the idea of being monitored.

The world is full of data. Please mine responsibly.

First published in Campaign magazine in March 2015.


Excerpted from the SXSWi 2015 talk.