Don’t know much about online advertising? Before reading this piece, read our previous essay, ‘Behavioural Advertising 101’.
You can’t watch a video or scroll through your favourite news publication today without seeing an ad or two. Nor, if you live in Europe, can you avoid pop-ups asking you to accept cookies. These interruptions are annoying (up to two thirds of us think so), but ultimately, they’re better than paying for everything we see and use. Aren’t they?
The online ad industry claims that personalised ads make it possible for users to browse the internet for free and for publishers to earn money for creating content. Which sounds like a win-win situation, doesn’t it?
Except it’s not. In fact, the current state of online advertising creates huge risks to our personal privacy, degrades the quality of the media, and is subject to serious fraud which funds organised crime.
Here’s a (non-exhaustive) list of 10 problems with online advertising:
1. The illusion of consent
Adtech companies claim that users freely give their consent to share data for advertising purposes because they want to have ‘a personalised experience’ online. But what these companies call ‘consent’ is often not consent under EU law.
The pop-ups we must now act on when we visit new websites are designed to deceive: we are ‘nudged’ towards clicking the huge green button saying ‘yes, I agree’, while the ‘no’ option is hidden a couple of clicks away, behind layers of legal jargon. Further, consent fatigue causes people to close the banner just to get rid of it as fast as possible. But guess what: in many cases clicking ‘X’ is also interpreted as consent. The law states that valid consent should be informed, but many companies don’t clearly explain how ad targeting works. And besides, given how complicated and opaque the system is, is it actually possible for users to understand and accept the consequences of giving consent?
2. Intrusive profiling
The ad industry has developed all sorts of methods that make it possible to track people across the web. Cross-site tracking enables ad companies to compile data on what people are reading or watching across a variety of websites. Cross-device tracking makes it possible to paint an even more comprehensive picture of a person and their many social roles, for example by combining data from a work computer and a personal smartphone. New, invisible ways of tracking, such as browser fingerprinting, aim to circumvent browsers’ growing restrictions on cookies. Data sharing is the norm — after an ad auction finishes, the companies who took part exchange identifiers with each other in a process known as ‘cookie syncing’. This means that next time, they will know that a user whom bbc.com recognises as ‘User ABC’ is in fact the same person as ‘User 123’ in their own database.
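To make the mechanism concrete, here is a minimal sketch of what cookie syncing achieves, using entirely hypothetical identifiers and data: once two companies link their separate IDs for the same visitor, each can join the other’s records to its own profile.

```python
# Hypothetical illustration of cookie syncing: two companies hold their
# own identifiers for the same visitor and exchange them after an
# auction, so each can join the other's data to its own profile.

# Each company's view of the visitor before syncing
publisher_side = {"User ABC": {"pages_read": ["bbc.com/health/article"]}}
ad_exchange_side = {"User 123": {"interests": ["running", "insurance"]}}

# The 'sync': a mapping table linking the two identifier spaces
sync_table = {"User ABC": "User 123"}

def merged_profile(pub_id):
    """Combine both companies' data once identifiers are linked."""
    exchange_id = sync_table[pub_id]
    return {**publisher_side[pub_id], **ad_exchange_side[exchange_id]}

print(merged_profile("User ABC"))
```

After the sync, the combined record ties reading history to interest segments — which is exactly why identifier exchange is so valuable to the companies involved.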
The idea behind all of these techniques is to create detailed online profiles of individual users in order to better target ads. These profiles contain everything the user has ever read online, their IP addresses, precise locations, device information, interactions with content — basically everything that can be collected about someone’s online activity. A lot of this information — relating to health, sexual orientation, and political or religious views — is sensitive, and under the GDPR can usually only be processed with explicit consent, yet it is often collected without being declared. As the British privacy watchdog, the ICO, established when investigating the adtech industry, data collection is excessive and mindless — companies just take everything they can without assessing what data is actually necessary. In fact, Doteveryone suggests that 84% of the data collected about us is never used. Comforting… until you realise that machine learning will soon give advertisers the capability to put this stockpiled data to use.
3. Massive breach of security
Once a website sends data about a user to an ad exchange, the user loses control over its further use by a potentially unlimited number of other actors. Intimate details about people are broadcast to hundreds, if not thousands, of companies, billions of times per day.
There are no real safeguards, as companies tend to over-rely on contracts as ‘guarantees’ of security. For example, Google’s advertising guidelines say that only companies that win a given auction may keep data to enrich user profiles, but these are contractual, not technical, measures. In fact, sources within the industry claim that some companies take part in ad auctions only to get access to people’s data, with no intention of winning. As the CEO of Tapad, a company that focuses on cross-device tracking, puts it: ‘In the ecosystem there is a general understanding that it’s in everyone’s advantage to share data’. This affects basically everyone who has ever used the Internet, and could arguably be classified as the largest data breach ever recorded.
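To show what this broadcast looks like, here is a simplified, hypothetical bid request, loosely modelled on the OpenRTB format used in real-time bidding (field names and values are illustrative, not taken from any real auction). Every auction participant receives the full record, whether or not it intends to win:

```python
# A simplified, hypothetical bid request, loosely modelled on OpenRTB.
# In real-time bidding, a record like this is broadcast to every company
# participating in the auction -- not only to the eventual winner.
bid_request = {
    "site": {"page": "https://example-news.site/health/depression-article"},
    "device": {
        "ip": "198.51.100.23",                 # user's IP address
        "geo": {"lat": 52.23, "lon": 21.01},   # precise location
        "ua": "Mozilla/5.0 (...)",             # input to device fingerprinting
    },
    "user": {"id": "User 123"},                # identifier synced across companies
}

# Every participant receives the full request and can store it,
# regardless of whether it bids to win.
participants = ["dsp-a", "dsp-b", "data-broker-c"]
received = {company: bid_request for company in participants}
print(len(received), "companies now hold this user's data")
```

The only thing stopping `data-broker-c` from keeping the data after losing the auction is a contractual clause, which is the over-reliance on contracts the section describes.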
4. No transparency
Not only do users have no access to their data, they also lose sight of it. To start with, it is very difficult for an average user to identify all the companies who might hold their data. Often, the key identifiers used by data brokers to single out users and target ads are not even revealed to the people they concern. It is a catch-22 that cannot be reconciled with GDPR requirements (in particular the principle of transparency).
Even when you manage to find the right company and the identifier, companies come up with a variety of excuses: either that cookies identify a device and not a user (an absurd argument in the age of personal devices), or that the categories used to create a particular digital profile are protected trade secrets. Even when companies do choose to reveal the data, the datasets are complicated and presented in a way that an average user could not understand, let alone use to identify potential abuses. Just look at what Privacy International got from one data broker — how would you make sense of this?
5. Potential for discrimination
Targeting systems are only as good as the data that fuels them, and algorithms are only as good as the people who design them — and human beings and human society are intrinsically biased. The nature of behavioural targeting means that adverts are ‘optimised’ for those most likely to want to see them. This is sensible when applied to buyers of power tools, for example, but falls down when placing job ads in a job market that under-indexes on women, people of colour, or other minorities. And it becomes deeply problematic when characteristics such as race or location are used to exclude people from certain services. Notably, even if the data is artificially balanced, social media algorithms will still seek to optimise the placement of those ads, meaning bias can be reinforced twice by the system.
A desire to ensure that advertising appears in ‘brand-safe’ environments also renders certain types of content ‘unmonetisable’, depriving publications of the revenue they need to stay viable. Brands create ‘blocklists’ of sites and keywords they’d rather not appear next to, which can be effective in ensuring that adverts don’t inadvertently fund hate speech or climate denial. An airline might block keywords such as ‘terror’ or ‘crash’, for example. However, keyword lists can, and frequently do, also block articles featuring words such as ‘lesbian’ or ‘Muslim’, leading to major funding problems for publications serving LGBTQ+ and Muslim communities. The British advertising body Outvertising estimates that 73% of safe LGBTQ+ content is excluded from funding in this way.
6. Broken by design and by default
The EU’s data protection law, the GDPR, introduced the concept of privacy by design and by default. This requires companies to take privacy into account when designing, implementing and operating any technology which processes personal data. High privacy standards should be offered to users by default, meaning they do not have to do anything (such as change settings) to be protected to the highest possible degree.
Targeted advertising couldn’t be further from these principles — in fact it is broken by design and by default.
For example? Companies do not respect users’ explicit wishes not to be tracked; Google’s advertising settings for publishers send user data to all third parties by default; companies that sell contextual ads cannot opt out of receiving people’s personal data through bid requests; and there is no easy way for users to withdraw consent or access data that was collected about them.
7. Supporting an unhealthy Internet…
Advertising is essentially about monetising attention. The more people can see or click on an ad, the more advertisers are willing to pay. This dynamic is largely responsible for the rise of clickbait and cheap sensationalism, ultimately deteriorating the quality of media and public discourse. Fully automated and opaque bidding systems create strong incentives for hoax publishers to participate in auctions. This way, money spent on advertising by the world’s biggest brands ends up supporting extremist and fake news content. In addition, it is estimated that advertisers lost over $30bn globally in 2019 because of websites that use bots to fabricate clicks and views.
8. …and a number of middlemen, instead of legitimate media
The ad industry’s ultimate argument to discourage regulators from imposing stricter rules is that advertising supports publishers, allowing them to pay for content, and makes it possible for everyone, rich or poor, to access news. However, it has become clear that only a fraction of the money that is spent on personalised ads goes to publishers. In what is known as the ‘adtech tax’, advertising middlemen capture from 55% (industry data) to 70% (as demonstrated by The Guardian) of every dollar spent on an ad.
It’s not even the case that behavioural ads are so profitable that this ‘justifies’ compromising privacy — new academic research contradicts industry data and shows that personalised ads bring publishers only 4% more revenue than contextual ads. Publishers may even be losing money once you take into account the costs of serving behavioural ads, e.g. GDPR compliance or infrastructure.
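A back-of-envelope calculation using the figures cited above (the 55–70% ‘adtech tax’ and the 4% revenue uplift) shows just how little of a behavioural-ad dollar a publisher actually keeps — before compliance and infrastructure costs are even subtracted:

```python
# Back-of-envelope arithmetic using the figures cited above:
# middlemen capture 55-70% of ad spend, and behavioural targeting
# yields only ~4% more publisher revenue than contextual ads.
ad_spend = 1.00  # one dollar spent by an advertiser

for adtech_tax in (0.55, 0.70):
    publisher_share = ad_spend * (1 - adtech_tax)
    print(f"tax {adtech_tax:.0%}: publisher keeps ${publisher_share:.2f}")

# The behavioural 'premium': if contextual ads would have earned the
# publisher revenue R, behavioural ads earn only about 1.04 * R --
# before subtracting GDPR compliance and infrastructure costs.
contextual_revenue = 0.45            # publisher share at a 55% tax
behavioural_uplift = contextual_revenue * 0.04
print(f"behavioural uplift: ${behavioural_uplift:.3f} per dollar of spend")
```

In other words, roughly two cents per dollar is the premium that is supposed to justify the whole surveillance apparatus.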
Publishers facing tough choices and waning revenue are forced to play by the rules set by bigger organisations. If they choose to reveal less information about their users, advertisers will simply buy elsewhere. And if publishers want to sell ads using Google’s infrastructure, they have to accept that Google will use its monopoly power to its own benefit: information about visitors to independent websites is used to make Google’s own profiles more detailed and to better target ads on Google’s own services, such as YouTube.
9. A carbon footprint too big to be ignored
This vast amount of (unnecessary) data needs to be processed and stored. And that requires energy. A lot of energy. A 2016 study put the carbon footprint of online advertising at 60 million metric tons of CO2, which amounts to 10% of total Internet infrastructure emissions, or as much as 60 million flights between London and New York. That number has inevitably grown since then and will continue to grow, particularly with the increasing use of machine learning, which is hugely energy-intensive. In a time of climate emergency, we cannot simply ignore this aspect of online advertising, especially when there are no arguments that would justify such mass collection and processing of data.
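A quick sanity check of the figures as cited (these numbers come from the 2016 study, not from any new calculation of mine) shows the comparison implies roughly one metric ton of CO2 per London–New York flight:

```python
# Sanity-checking the figures cited from the 2016 study:
# 60 million metric tons of CO2, said to equal 10% of total Internet
# infrastructure emissions and 60 million London-New York flights.
ad_emissions_mt = 60_000_000        # metric tons of CO2 from online ads
flights_equiv = 60_000_000          # equivalent London-New York flights

tons_per_flight = ad_emissions_mt / flights_equiv
print(f"{tons_per_flight:.1f} t CO2 per flight")

# If ads account for 10% of Internet infrastructure emissions,
# the implied total for the infrastructure as a whole is:
internet_total_mt = ad_emissions_mt / 0.10
print(f"implied Internet infrastructure total: {internet_total_mt / 1e6:.0f} million t")
```

The implied figures (about one ton per flight, 600 million tons for the Internet as a whole) are internally consistent, which is all this check can establish.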
10. Part of a larger problem
This picture would be incomplete without looking more broadly at the online landscape and data economy.
Online advertising in its current form upholds business models based on surveillance, built on the premise that people and intimate information about them can be treated like a commodity.
The language used by the ad industry is the best illustration of that. Here, people’s attention (‘impressions’) is sold at auction, people are categorised into ‘segments’ which — as one company boasts — can be moulded from its ‘plasticine-like’ 92 million cookies, and ‘infinite data from infinite devices’ stands for intimate details about our lives.
Even though there are thousands of adtech companies, online advertising remains a duopoly of Google and Facebook. By some estimates, the two advertising giants control 84% of the global digital ad market — these ads are mostly sold on their own services (such as Instagram or YouTube), but Google additionally regulates and operates ad exchanges on external websites. Even if third-party tracking or ad auctions were outlawed, Google and Facebook would survive thanks to the amount of data they gather about people through their own platforms. This doesn’t mean we should turn a blind eye to privacy violations happening on ad exchanges, but it does mean this conversation must be embedded in a broader discussion about our online future.
It’s time to talk about alternatives
Real-time bidding was born over a decade ago, in 2009, almost a decade before the GDPR came into force. Bob Hoffman, an advertising veteran and author of many books on the state of the industry, calls the decade that followed ‘advertising’s decade of delusion’. It is obvious that this system was not designed to protect privacy and human rights, but to exploit our data and influence our decisions. It is seriously broken, on many levels, and half-measures or cosmetic fixes can only address the symptoms; they will not suffice to deal with the systemic problems.
In 2020, we’re at the tipping point. The advertising industry is under investigation by a dozen EU data protection authorities, and concerns are voiced more and more loudly even within the industry. It has never been more urgent to talk about alternatives that respect privacy and human rights, and contribute to a healthier media ecosystem. As civil society and regular Internet users we should be part of this discussion. We have too much to lose.
If you have concrete ideas on the future of online advertising or are a company developing privacy-friendly solutions, please get in touch! (karolina.iwanska [at] panoptykon.org or via Twitter)
This work will be followed by recommendations for the industry and policymakers in the coming months, as part of my Mozilla Policy Fellowship.
I will be speaking about this at the CPDP conference in Brussels during a panel ‘Is ethical adtech possible? Navigating GDPR enforcement challenges in real-time bidding’, taking place on 24 January 2020 at 11:45.