Back in 2013 I was backpacking through the UK. At the end of the trip, I happened to be at a hostel in Brighton. I didn’t carry a laptop with me, so I just used my phone whenever I needed to get online. It was the end of July and it was time to go back home (and with my birthday coming up, I wanted to celebrate it with my family).
So I used my phone to book a flight. Back then we used eDreams, and I couldn’t believe what I was seeing… The price of the flights had doubled! Which is normal in summer. But hey, they hadn’t just doubled, they had quadrupled! I couldn’t believe it. A 2x price I could understand, but 4x?
Come on, give me a f*cking break.
I used the computer at the hostel’s reception, and it turned out that prices hadn’t quadrupled. Yes, they had doubled. But if you used your phone to book, they’d double again. Back then they knew that if you were booking a flight from your phone, you were in big trouble. And they were damn right.
I was pissed off. And of course I didn’t use eDreams anymore.
This was one of my first conscious encounters with privacy inequality. I saw, at a small scale, the power of dehumanizing the people who stand to suffer the most from it.
As I often stress in my articles, advertising is a disease — a disease that thrives in the attention business. It’s the most obvious and least regulated aspect of one of the worst inequalities we’re going to experience. (Or rather, one we already experience.)
I’m a marketer by training (a good and honest one), so my goal with this article is to give you some perspective on what got us here, where we are, and where the industry is going. And what this means for the consumer, and the dangers of ignoring this new reality that’s been imposed on us without our consent.
What, then, got us here?
What is the nature of this disease?
How is it contributing to widening the privacy inequality gap?
Privacy invasion is old hat.
Back in the 1950s, major advertising firms hired anthropologists to “observe” consumers’ behavior. They spied on consumers and measured their reactions to sell more stuff — which led them to include babies in their ads. And if babies weren’t effective, doctors worked flawlessly. The truth is we’ve been tracked for decades. Phone providers have been invading our privacy for a really long time. Or consider the way banks have been collecting our data every time we use our credit cards. All that data, which might seem isolated and without context in many cases, gets out of hand when conglomerates like Acxiom collect all those little dots of data and put them in context.
Back in the 50s they had to rely on raw data, and they were limited in how they could capture it. Here we’re talking about stuff like: how tall you are, where you live, your interests, your political preferences… Everything about you.
Of course, their ability to mine data was limited. Either way, it was enough to reach the masses. The advantage they had back then was that there was no Internet, and therefore no long tail. So if they could get their ads onto the two or three main TV channels, they were in business.
The thing is… we were very dumb back then — at least we weren’t media savvy. So they could do and test whatever they wanted, and get away with it. Luckily, they didn’t have the tools to get too far.
Google changed the game.
In 1998, Google showed up. From the beginning you could tell they were in it for the long haul. Every decision they made contributed to building an asset.
That, of course, made investors anxious. Remember that Google was founded before the dot-com bubble, so when it burst, investors were in a hurry to make quick money and recover. They didn’t understand what Google was trying to achieve. But Google was doing something brilliant: taking over most of online advertising’s dollars.
In the last couple of decades we’ve not only gained more ways to capture raw data, but we’ve gone to the next level: building predictive power from that data. That’s how Google got so powerful.
Do you know how Google makes billions of dollars every month in profit?
They place little ads next to the search results, and they price those ads by the click, running an auction to set what each click costs.
This is unbelievably brilliant.
Let’s say you want to run some ads to sell sneakers. For every pair you sell you make 40 bucks in profit. So, here, how much would you be willing to pay for these ads? You don’t want to go further than $39.99. You’re willing to go to the point of almost not making any money so that your competitor doesn’t get the order.
So now you try to figure out how much a click is worth. You don’t want to pay $39.99, because getting a click doesn’t mean you’re gonna get an order. You do a little math: see how many clicks it takes, on average, to produce one order — your conversion rate (the fraction of clicks that turn into orders). And finally you discover that the maximum amount of money you’re willing to pay for a click is, let’s say, four bucks.
Now your competitor figures out she’s willing to pay three bucks per click. So what happens is that you win the auction, but you only pay just over $3 per click — a hair above the runner-up’s bid.
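The mechanics above can be sketched in a few lines. This is a hypothetical simplification of a second-price auction (Google’s real system also factors in quality scores and much more): your maximum bid comes from your profit per order and your conversion rate, and the winner pays just above the runner-up’s bid.

```python
def max_cpc(profit_per_order, clicks_per_order):
    """Most you can bid per click and still break even on an order."""
    return profit_per_order / clicks_per_order

def second_price_auction(bids, increment=0.01):
    """Highest bidder wins, paying one increment over the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = round(ranked[1][1] + increment, 2)
    return winner, price

# You: $40 profit per pair of sneakers, ~10 clicks per order -> $4.00 max bid.
your_bid = max_cpc(40.0, 10)
winner, price = second_price_auction({"you": your_bid, "competitor": 3.00})
# winner == "you", price == 3.01 -- the $0.99 gap between your $4 value
# and the $3.01 you pay is the surplus the platform can keep squeezing.
```

Note how the auction transfers the gap between what the click is worth to you and what your competitor bid: as competition heats up, that gap closes and the platform captures it.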
Okay. Who’s taking all the profit off the table? Google.
You get to keep paying more and more for those ads, because it’s worth it for you.
This is brilliant! They’re taking all the profit off the table — profit that once went to advertisers (or that clients simply kept as savings).
Now, do you know how they can make even more money? By getting advertisers to be willing to pay more for those ads — which seems obvious. And that’s where precise targeting makes all the difference.
This is how Google and Facebook have gotten so powerful. By building predictability around the data they gather.
There’s a lot of money on the table, so they’re willing to violate privacy to the extreme and milk the hell out of every piece of data they can from us.
This is fine as long as you’re playing Monopoly with your friends. But this is not okay when you’re playing monopoly in real life — with real people.
Consumers need to know how they’re being sold to. They should know the dangers of their digital footprint, and how it’s being used to sell them out.
Mark Bartholomew makes a good point in his book Adcreep:
“What is the psychological toll of living in a world where all of us have to assume constant supervision? Surveillance, regardless of its ultimate purpose, ultimately treats its targets as objects, dehumanizing them and corroding the trust needed for cooperative social enterprise. The same technologies developed by advertisers to surveil consumers are subsequently used by the government to track its citizens and by individuals to spy on each other.”
The next phase, though, is juicier and more profitable: the point where they can reach the ultimate level of predictability with great accuracy. Mining biodata.
Taking predictability to the next level: Biodata.
We’ve seen how ads can be used to change entire cultures and dictate the trajectory of society.
What happens with our digital footprint is that its value increases exponentially over time. There’s already enough data on you today; you don’t have to provide any more. Advertisers can influence your decision making with the data they have on you. Let me give an example of privacy inequality that doesn’t involve biometric data.
In the book Weapons of Math Destruction, Cathy O’Neil talks about how a university can target poor people and sell them the dream of a better life, while getting them into serious debt:
“While spending more than $50 million on Google ads alone, the University of Phoenix targeted poor people with the bait of upward mobility. Its come-on carried the underlying criticism that the struggling classes weren’t doing enough to improve their lives. And it worked. Between 2004 and 2014, for-profit enrollment tripled, and the industry now accounts for 11 percent of the country’s college and university students.”
It’s worth emphasizing this again: $50 million on Google ads alone!
This is far from the early promise of the Internet back in the day as an equalizing tool. In those days nobody knew who you were, but today you’re categorized, ranked and classified through patterns and preferences. And this is exactly the type of profiling that attracts predators looking to sell overpriced promises.
This scenario will get worse and worse when media platforms use biometric data to target people with greater accuracy. Conversion rates will go through the roof, and advertisers will be willing to pay more per click (or whatever form it takes in the future). And who’s gonna make more dough out of this? The platform that mines biodata and builds intelligence around it.
Well, it turns out no new technology is needed to set this in motion. Over the last couple of decades we’ve made some serious progress in understanding the brain (we’ve still got no clue, but we’re way better off than we were 20 years ago). And we’ve applied this knowledge to understanding why we behave the way we do — with a big emphasis on behavioral economics. (Read Dan Ariely’s books if you’re interested in this topic.)
We know that when people are about to buy something, their heart rate goes up. We know that a good amount of sleep makes people less anxious, which could lead them to buy more with the right nudge. We know a lot of things. And consider what wearable sensors can already detect:
- Sleep tracking and sleep apnea detection
- Pulse oximetry
- Respiration rate
- Blood pressure
- Sunburn/UV detection
- Parkinson’s disease diagnosis and monitoring
- Glucose monitoring
It’s worth noting that the Apple Watch already has sensors capable of detecting everything on this list.
But this is just the beginning. What will happen when our thoughts are out there in the open with brain-computer interfaces?
Today advertisers can do a lot of things. But tomorrow, when there’s mass adoption of devices that capture biometric data, advertisers’ creepiness will reach a whole new level.
We’ll get to a point where they know, precisely, what you’re about to think before you’re even aware of the thought itself. And when you have that kind of power, you can hack human emotions to lead individuals to a thought created by a corporation, and the individual will think she’s the one who came up with it.
“We do this to satisfy consumer preferences”
I can’t get my head around this. We know what’s going on. We know where this is going. We know the intentions of these new media oligopolies. Yet, we do nothing — or even worse, focus the debate on the wrong issue.
Everybody knows that online users don’t read the fine print. It’s humanly impossible. So why do we focus on notice-and-consent and privacy-friendly defaults, when we could create mandatory rules for advertisers, delimiting what they can and can’t do?
Because right now, what’s happening is mind-blowing. Facebook goes even further by paying users ages 13 to 35 (violating GDPR in the process) to install a VPN (virtual private network) that spies on them — giving Facebook access to raw network traffic it can decrypt and analyze, down to the smallest detail of what these people do with their phones.
The problem is that we won’t get out of this scenario easily. Even with laws like those imposed in Europe, there’s still a power imbalance that leads to manipulation and influence over decision making.
In the end, no matter the form it takes, when you play with surveillance, you do it at the expense of those being watched.
And when you build a buggy law, there are a thousand ways to get around it. And this keeps happening, and will keep happening, until we get serious.