
Solving Wanamaker’s Dilemma

We’re close to solving advertising’s fundamental challenges. But the solution — building an entire city from scratch — is audacious.

In our last column, we talked about the twofold challenge of solving Wanamaker’s dilemma, and why the ad tech we’re currently building won’t get us there. The challenges are:

  1. Attribution: Which ad (or ads) put a consumer over the top and convinced them to make a purchase?
  2. Allocation: How do we take that attribution information and use it to most wisely spend our advertising dollars?

In looking at these two challenges, the first obviously presents the larger one. The potential solutions that get the most press these days are digital: ad tech monitoring and reporting products from companies like Google and Facebook or, edging into the physical world, companies like Foursquare and Placed (now owned by Snapchat). The problem with these solutions is clear: they exist in a digital ecosystem, separate from where the big ad dollars are spent: on television. And given that we relied on print and outdoor ads for nearly a hundred years, a hundred years of great advertising success, it would be unwise to discount them either. A truly perfect attribution system would have to tell us not only which internet ads worked, but which TV ads worked, along with radio, billboards, urinal ads, celebrity sponsorships, product placements in movies and more. Companies like Nielsen and Comscore know this, of course, and continue to battle each other for supreme, 360-degree dominance, pursuing ever more complete solutions. But they have a long way to go.

The solution to the attribution problem has long been known to us: simply follow every single consumer around, pay attention to every single ad they watch and consume, and track every single purchase they make. This has, however, been extremely cost ineffective, and thus efforts on this more holistic front have been limited.

A brief history of the problem of attribution

They have not, however, been non-existent. A short history lesson is in order. When you see those studies saying we see 2,000 or 5,000 ads a day, they are usually doing some limited form of “following people around.” A whole mini-discipline of such studies has been stood up since the 1960s. We call these “single source” studies, because they provide a comprehensive analysis of one consumer’s ad journey. The first was undertaken by an ad man named Colin McDonald in 1966, and it proved that advertising had a pronounced short-term impact on purchasing decisions. McDonald did this by manually measuring and tracking the buying habits of 255 households in London. The consumers recorded every product they bought, every television show they watched, and every newspaper and magazine they read. McDonald was then able to measure exposure to various advertising and track its relationship to sales. The effects were pronounced.
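
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of exposed-versus-unexposed comparison a single-source dataset enables. The households, dates, and seven-day window are all invented for illustration; this shows the general shape of the analysis, not McDonald’s actual protocol.

```python
from datetime import date, timedelta

# Toy single-source log: for each household, every ad exposure and every
# purchase is recorded. (Invented data; a real study has far more events.)
exposures = [  # (household_id, brand, date the ad was seen)
    (1, "BrandA", date(1966, 5, 1)),
    (2, "BrandA", date(1966, 5, 2)),
    (3, "BrandA", date(1966, 4, 1)),
]
purchases = [  # (household_id, brand, date of purchase)
    (1, "BrandA", date(1966, 5, 3)),
    (3, "BrandA", date(1966, 5, 3)),
    (4, "BrandA", date(1966, 5, 3)),
]

WINDOW = timedelta(days=7)  # assumed "short-term" exposure window

def short_term_effect(brand):
    """Split a brand's purchases into those preceded by one of its ads
    within WINDOW and those with no recent exposure."""
    exposed = unexposed = 0
    for household, b, bought in purchases:
        if b != brand:
            continue
        recently_exposed = any(
            h == household and eb == brand and timedelta(0) <= bought - seen <= WINDOW
            for h, eb, seen in exposures
        )
        if recently_exposed:
            exposed += 1
        else:
            unexposed += 1
    return exposed, unexposed

print(short_term_effect("BrandA"))  # -> (1, 2)
```

Comparing those two purchase counts, normalized by how many households fall in each group, is what lets a single-source study claim a short-term advertising effect.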

The study was a watershed moment in the history of advertising, and many follow-up studies confirmed the results and expanded on them.

The limitations, however, were clear from the get-go. Longer-term effects were not measured: McDonald looked at several months, not the build-up of brand advertising over years and years. There were significant potential problems with external influences — outdoor ads, in-store promotions and more. And the costs of conducting a study like McDonald’s were monumental, even within the limited media ecosystem of 1966. While some of the follow-up studies were also single source, many more were meta-analyses, with less stringent, contained data-gathering.

The limitations of the studies held back the field for a while, but in the end McDonald’s research proved useful enough that an entire field was born. Nielsen and its ilk were primarily confined to television in the early days, but by 1991 Nielsen was incorporating revised versions of McDonald’s single-source techniques into its monitoring, and in 1993 it presented results in the US, at a much larger scale, confirming McDonald’s findings.

One of the participants in the construction of Nielsen’s larger study was John Philip Jones, who has become something of a master of the field, conducting extensive research himself and publishing several books and meta-studies covering all the single-source work done to date.

However, by the late 1990s, the advent of cable TV, the internet, satellite radio and more made these studies increasingly cost-prohibitive to undertake. And as these alternative media became more popular, it became less practical to measure all the media in a household. These media-fracturing limitations, coupled with the challenges of external influence already inherent in the approach, meant that by the late 1990s new data from single-source studies was hard to come by, and what data there was had become increasingly unreliable. It is this problem that Nielsen has been trying to solve, with limited success, ever since.

So, too, have ad tech companies, from an internet-based starting point. They, too, have a window into only part of the problem. Competition between ad networks and platforms, privacy concerns, and the “real world” have all been barriers. Even if you could perfectly track a user’s journey through the entire internet, a single TV ad, billboard, or product placement in a movie could be what inspired them to make a purchase. Not only would you fail to capture that influence, your ad tech company may well mis-attribute the purchase to some dumb banner ad or tweet the consumer saw.

The fact of the matter is, as much as we feel that Big Brother is watching, no single entity is even close to monitoring our every media consumption point. And even if one were, none of them can follow us into a store and see what we purchase, much less whether we paid in cash, used a credit card they don’t have access to, or never signed up for a rewards program. Even if Nielsen, Google, and Amazon all merged, they would not be there. Not in a category like candy (we all saw E.T. and suddenly wanted Reese’s Pieces), and definitely not in categories they don’t sell, such as the large ad categories of automotive and movies.

So it is, then, that in 2017 we find ourselves, ironically, less confident in advertising’s efficacy than we were in the 1970s.

A solution near at hand

The solution, however, may be near at hand. It’s not some tie-up of various tracking systems such as Nielsen is pursuing. In the end, it might be one where Google or Facebook manages to destroy another industry worth tens of billions of dollars.

What’s needed is a new McDonald-type single-source study, on a vastly larger and more complex scale. One that accounts for our infinitely more complex purchase patterns, and the vastly more fractured media landscape. One that tracks not only every television ad we see, but every billboard, urinal ad, product placement in a film, banner ad, social media tweet and AdWord. To undertake this would be a massive privacy violation, so the study would need to be opt-in, as McDonald’s was in the 1960s. The invasiveness of literally tracking a person’s entire life would require significant compensation. And, perhaps most dauntingly, it would need to be connected to our very environment: every taxicab ad, billboard, and in-store display would need to be accounted for.

It seems an impossible undertaking.

Yet the day of reckoning may soon be at hand. Two entities with deep commercial and advertising interests are now speaking about literally building their own cities. In April of 2016, Google began investigating the possibility of purchasing large swaths of land to actually build its own city, so that its city-software subsidiary Sidewalk Labs could have a proper experimenting ground for its technologies related to internet access, self-driving cars and city infrastructure. Then, in late 2016, The New Yorker interviewed Y Combinator CEO Sam Altman, and he mentioned that the massive tech investment incubator was also investigating such a possibility. Altman described the concept: “A hundred thousand acres, fifty to a hundred thousand residents. We crowdfund the infrastructure and establish a new and affordable way of living around concepts like ‘No one can ever make money off of real estate.’” The New Yorker noted that Altman stressed this was “just an idea,” but that he was already scouting sites.

Neither entity has spoken of this possibility (and I’ve talked to people with deep knowledge of these companies’ plans), but both potential cities would make an excellent petri dish for a new, comprehensive single-source study of ad efficacy. From the 1980s through the 2000s, even monitoring the explosion of television channels proved problematic for ad efficacy studies. In a city such as these, the matter would not be easy, but it would be completely doable. Software could be deployed to Google’s set-top boxes, and traditional television could be avoided (cable cord-cutting, of course, would be one of the trends Google would want to monitor). Radio could be handled in the same manner, though it may not need to be: companies such as Shazam already possess the technology to identify ads within terrestrial broadcasts. Billboards, of course, would be a simple matter when you control the city. Taxis, in-store retail and storefronts could be handled through discounted lease terms for commercial tenants. And internet and mobile consumption would be feasible as well: ads seen on your devices could be limited at the ISP level to those from participating networks. There may be some challenges with firehose deals and ad data from other social networks, but I suspect this could be accomplished at the ISP level if need be. Location services on the phone would monitor a user’s movements, exposure to outdoor and in-store advertising, and even individual trips to the movies.
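
As one illustration of the outdoor piece, here is a hypothetical sketch of joining a phone’s location pings against a registry of the city’s billboards to log exposures. Every identifier, coordinate, and radius below is invented; a real deployment would layer on sightline and dwell-time logic.

```python
import math

# Hypothetical billboard registry for the instrumented city:
# (billboard_id, advertiser, latitude, longitude, visibility radius in meters)
billboards = [
    ("bb-001", "AutoCo", 37.4000, -122.0800, 120.0),
    ("bb-002", "SodaCo", 37.4050, -122.0750, 80.0),
]

def meters_between(lat1, lon1, lat2, lon2):
    """Approximate ground distance via an equirectangular projection;
    accurate enough at city scale."""
    per_degree = 111_320.0  # meters per degree of latitude
    dx = (lon2 - lon1) * per_degree * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * per_degree
    return math.hypot(dx, dy)

def log_outdoor_exposures(user_id, lat, lon, timestamp):
    """Emit one exposure record per billboard whose visibility zone
    contains this location ping."""
    return [
        (user_id, "outdoor", bb_id, advertiser, timestamp)
        for bb_id, advertiser, b_lat, b_lon, radius in billboards
        if meters_between(lat, lon, b_lat, b_lon) <= radius
    ]

# One phone ping taken within sight of bb-001:
print(log_outdoor_exposures("user-42", 37.4003, -122.0805, "2017-06-01T08:15:00"))
```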

The city could even provide every user with a Google laptop or smartphone. A free laptop, phone, and high-speed internet might be enough that some users feel they are getting a decent payment for their data — a rarity these days. If that isn’t enough, discounted leasing terms could also be an option.

The other side of the ad equation — purchase tracking — has never been the hard part, and e-commerce monitoring and smartphone barcode-reading technology would make the challenge easier still. Deals with retail and loyalty programs would accomplish much, and barcode readers in the phones could handle the rest (users could be required to scan, since it’s an opt-in system in which they are compensated).
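
Here is a hypothetical sketch of merging those two purchase feeds (retailer loyalty exports plus the user’s own barcode scans) into a single per-user ledger; all field names and records below are invented.

```python
# Two purchase feeds, as described above. All records are invented.
loyalty_feed = [  # (user_id, upc, store, timestamp) from partner retailers
    ("user-42", "012345678905", "GroceryCo", "2017-06-01T18:02:00"),
]
barcode_scans = [  # (user_id, upc, timestamp) scanned on the user's phone
    ("user-42", "098765432109", "2017-06-01T18:30:00"),  # a cash purchase
    ("user-42", "012345678905", "2017-06-01T18:02:00"),  # duplicates loyalty row
]

def purchase_ledger(user_id):
    """Build one deduplicated purchase list for a user, preferring the
    richer loyalty records and keeping scans only for unseen purchases."""
    seen = set()
    ledger = []
    for uid, upc, store, ts in loyalty_feed:
        if uid == user_id:
            ledger.append({"upc": upc, "source": store, "time": ts})
            seen.add((upc, ts))
    for uid, upc, ts in barcode_scans:
        if uid == user_id and (upc, ts) not in seen:
            ledger.append({"upc": upc, "source": "scan", "time": ts})
    return ledger

print(purchase_ledger("user-42"))  # two purchases; the duplicate is dropped
```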

Google or YC (whose portfolio companies make a substantial amount of their revenue off of advertising) could either sell the data, make use of it for their own companies, or, if they’re feeling generous, publish it for free. If the data showed that digital was indeed competitive with traditional advertising, it’s not unlikely Google would love to shout it from the rooftops. If it turns out that television is more effective (as I strongly suspect), well, then, owning the supreme, perfect measurement product may well be the path towards conquering television that Google has long sought. Nielsen’s market cap is already a tenth of Comcast’s or Disney’s, and they have never been more than halfway there — they don’t own the city, they can’t monitor full ad exposure and, most importantly, they don’t track purchases very well.

Only by perfectly measuring an entire population’s media consumption and purchase habits can we ever hope to solve the attribution component of Wanamaker’s dilemma. This historically has been cost prohibitive. It may soon no longer be. The planet will spend in excess of half a trillion dollars on advertising this year. What if Wanamaker was really right? What if half of that is wasted? The potential gains are vast. And at the very least, the revenue potential from the data will more than pay for the city itself.

Still only part of the way

Of course, even as we solve the attribution portion of Wanamaker’s dilemma, we would still need to solve the allocation portion. We would need to act on the data. Advertisers would need to iterate, re-allocate, and try again, and the process would not be perfect for quite some time. But with reliable attribution data, allocation becomes a much more solvable problem. It will take time, and depending on the answers we gain from solving the attribution portion, it may take additional tools. But we will be on the path in a way we never have been.
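
To give a flavor of how simple the first pass at re-allocation could be once the attribution data is trustworthy, here is a deliberately naive sketch: move next period’s budget toward channels with higher attributed return per dollar. The channels, figures, and blending rule are all invented.

```python
# Last period's spend and the revenue the attribution study credited to
# each channel. (All numbers invented for illustration.)
budget = {"tv": 500_000.0, "digital": 300_000.0, "outdoor": 200_000.0}
attributed = {"tv": 1_500_000.0, "digital": 600_000.0, "outdoor": 300_000.0}

def reallocate(budget, attributed, step=0.2):
    """Blend the old budget shares with shares proportional to attributed
    return per dollar, keeping total spend constant. `step` controls how
    aggressively we trust the latest attribution numbers."""
    total = sum(budget.values())
    roi = {ch: attributed[ch] / budget[ch] for ch in budget}
    roi_total = sum(roi.values())
    return {
        ch: (1 - step) * budget[ch] + step * total * roi[ch] / roi_total
        for ch in budget
    }

print(reallocate(budget, attributed))
```

Each period, the advertiser would re-measure attribution and run the loop again: the iterate, re-allocate, and try-again cycle described above.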

In a review of his own and others’ work on this subject, McDonald lists the criticisms of the approach. There are five: lack of empirical evidence, the role of other variables, psychological considerations, proven exposure (did I watch the ad, or was the TV just on?), and exposure to unmeasured media. A whole-city approach would completely address three of those, and could potentially address the other two. And of course, one city may not be enough. Cultural factors could turn out to matter, and the study might need to be repeated in several countries. But that could come later. We’d still be much further along than we have ever been before.

What about the answers?

Then there is the question of what we actually learn. The dream has always been that, despite his pessimism, Wanamaker was actually right — that half the money is wasted. I suppose it’s not impossible that, upon undertaking this behemoth endeavor, we discover that yes, omg, we have been wasting a quarter of a trillion dollars and poof! That money is no longer wasted. I suspect the answers we find will be more complex. I suspect that far less is currently wasted. I suspect that, with their collective centuries of in-house research sophistication, companies like Procter & Gamble and GroupM have gotten pretty good with the limited datasets they have.

Furthermore, I suspect that in a world where we perfect our advertising allocation and attribution, that perfection will be fleeting. Consumers will sense the perfection, rebel, and choose other brands. The Heisenberg principle applied to advertising.

Societal implications

And, of course, there are the societal implications, those that relate to our current conundrums of fake news. What if it turns out that reality TV and Facebook ads are our most effective ad mediums? What if it turns out the news is a waste of ad dollars?

What if it turns out that economies of scale matter, and that all of our advertising should be spent on a single platform?

What if it turns out that television is best, by far, and the migration of ad dollars to the internet, and mobile, ends?

What if we don’t actually want to know?

But when has that ever stopped Silicon Valley?
