Assessing the impacts of the Privacy Sandbox piece by piece #1: Bring the noise

Pl Mrcy · Published in Criteo R&D Blog · 6 min read · Sep 27, 2021



TL;DR:

  • Criteo’s “privacy-preserving ML competition”, hosted at AdKDD, is now over. It gathered more than 150 teams from many companies, who worked to train ML models on datasets inspired by aggregate reporting proposals from Chrome’s Privacy Sandbox, which introduce concepts such as incomplete access to data, noise insertion, and time-delayed feedback.
  • All well-performing proposals critically require some granular event-level data for training. This data was still available in the challenge for accessibility reasons but is not available within Chrome’s current APIs.
  • The first results show that the proposed framework would still have at least a double-digit impact on advertiser spend and performance, and therefore publisher revenue, despite the usage of granular data.
  • Only a very specific aspect of one of the multiple APIs was tested. Comparing the Privacy Sandbox to a car, the challenge amounted to testing a concept version of the steering wheel in isolation from the rest. These results shine a light on a narrow aspect of the overall proposal; whilst this is a good start, many other parts, and the system as a whole, remain to be tested.
  • This post discusses the impact of the Privacy Sandbox for all marketers and is not specific to Criteo’s business outlook.

Introduction

The Privacy Sandbox claims to create a viable framework for advertising across the open web that respects the users’ privacy while enabling marketers to continue providing ad-funded access to digital properties in a web without third-party cookies. Several APIs covering different aspects of the digital advertising process compose the sandbox. While the reporting APIs have received less attention from the media and at the W3C, we think they are instrumental pieces for the whole edifice to stand.

Chrome proposed introducing a revolution on the reporting side of advertising, based on a mathematical framework called differential privacy. This framework gives mathematical plausible-deniability guarantees by obfuscating data and adding numerical noise, instead of providing advertisers and publishers access to the granular information about every display on which they currently rely to value and fund people’s access to sites. Differential privacy is supposed to preserve the utility of the data, meaning that marketers should still be able to use it for their current purposes.
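To make the noise-insertion idea concrete, here is a minimal sketch of the classic Laplace mechanism commonly used in differential privacy. The function name, the per-campaign counts, and the parameter values are illustrative assumptions, not the actual mechanism specified by the Privacy Sandbox APIs:

```python
import numpy as np

def laplace_noise_counts(true_counts, sensitivity=1.0, epsilon=1.0, rng=None):
    """Add Laplace noise with scale sensitivity/epsilon to each aggregate count."""
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon
    return true_counts + rng.laplace(loc=0.0, scale=scale, size=len(true_counts))

# Hypothetical per-campaign click counts, reported only in aggregate.
clicks = np.array([120, 4, 0, 57])
noisy = laplace_noise_counts(clicks, epsilon=0.5)  # smaller epsilon, more noise
```

Lower values of epsilon mean stronger privacy but noisier aggregates; quantifying that utility trade-off for model training is exactly what the challenge set out to do.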

To assess the impact of such a change on advertiser and publisher utility, Criteo launched a public Machine Learning challenge in the context of AdKDD 2021. More specifically, the goal was to appraise the impact of such data on the training of relevant machine learning models for ad placement, and to raise awareness of the topic. We proposed to explore methods to learn click and sales prediction models on a dataset donated by Criteo, inspired by Chrome’s reporting proposals (and with guidance from their designers). We only considered the narrow use case of optimization in isolation; this challenge does not try to capture the other restrictions and complexities introduced by the Privacy Sandbox proposals.

Participation was high, yielding hundreds of submissions with innovative solutions from more than 150 talented teams. This proves the wide interest in the topic and the general concern about optimization in this future, constrained environment.

Results and how it translates into business impact

There were two parts to the challenge: predicting clicks and predicting sales. In both cases, the models were evaluated on the difference between a model learnt with the currently available granular data (the “oracle”) and a model learnt with the data available for the challenge, measured using a technical metric called log-likelihood (LLH). Please note that the “oracle” model (a.k.a. skyline) is not the perfect model, nor is it Criteo’s model in production (it includes far fewer features than our real production model, for instance). It is simply a good model learnt on granular data.
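As a rough illustration of this kind of evaluation, here is a small sketch of an average Bernoulli log-likelihood and a relative gap between a challenge model and an oracle model. The function names, the toy predictions, and the normalization convention are assumptions for illustration; the challenge’s exact metric definition may differ:

```python
import numpy as np

def avg_log_likelihood(y_true, p_pred, eps=1e-12):
    """Average Bernoulli log-likelihood of predicted click probabilities."""
    p = np.clip(p_pred, eps, 1 - eps)
    return np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def relative_llh_gap(y_true, p_model, p_oracle):
    """Relative LLH degradation of a model vs the oracle (negative means worse)."""
    llh_m = avg_log_likelihood(y_true, p_model)
    llh_o = avg_log_likelihood(y_true, p_oracle)
    return (llh_m - llh_o) / abs(llh_o)

# Toy labels and predictions: the "challenge" model is less confident.
y = np.array([1, 0, 1, 0])
p_oracle = np.array([0.9, 0.1, 0.8, 0.2])
p_challenge = np.array([0.7, 0.3, 0.6, 0.4])
gap = relative_llh_gap(y, p_challenge, p_oracle)  # negative: challenge model is worse
```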

For both clicks and sales, the winners managed to get this metric to 2.5% below the oracle model. We estimate that this -2.5% approximately translates into a -20% decrease in advertiser spend at constant ROI, a key business metric. This estimate is based on historical data, assumes iso-competition (it does not take into account the macro changes that will happen across the industry in reaction to the new set of constraints), and varies from at least -10% up to -30% impact.

Another key result of the challenge is that, for the three winning submissions, the performance impairment measured on small advertisers’ clicks is twice as bad as on big advertisers’, and it similarly impacts smaller publishers more than their larger rivals. This means the proposals might increase distortions between small and big players, on both the advertising and publishing sides.

These results must be put into perspective in several regards:

Extra granular datasets not available in Chrome’s APIs

The dataset proposed for the challenge only very imperfectly represents what the data would look like in the Privacy Sandbox. Indeed, some features available in the dataset can only be pre-computed at the user level and will not be available in production in the Privacy Sandbox. Much more importantly, the training dataset contains a small share (1 row in 1,000) of clear, granular traffic as side information. This granular data was instrumental for all participants to reach the reported level of performance. No such granular data is available in the current specifications; we only provided it to keep the challenge accessible, without expecting such extensive usage.

The challenge results are thus an inflated upper bound on what the results would be with the real Privacy Sandbox reporting.
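To illustrate how small that granular side sample is, here is a hypothetical sketch of selecting roughly 1 row in 1,000 for event-level release, with the remaining rows contributing only to aggregate reports. The function name, share, and seed are illustrative assumptions:

```python
import numpy as np

def granular_sample_mask(n_rows, granular_share=0.001, seed=42):
    """Mark which rows are released as clear, event-level data; the remaining
    rows would only contribute to noisy aggregate reports."""
    rng = np.random.default_rng(seed)
    return rng.random(n_rows) < granular_share

mask = granular_sample_mask(1_000_000)
# Roughly 1,000 of the 1,000,000 rows end up in the clear granular sample.
```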

This impact only represents a tiny piece of the Privacy Sandbox

The Privacy Sandbox is a complex framework that adds many constraints to the advertising process; this challenge addresses only one of them: differentially private reporting. Thus, the results coming out of the challenge are not to be understood as estimates of the overall impact of the Privacy Sandbox on performance. Furthermore, even within this particular aspect of the Privacy Sandbox, the real problem that advertisers will face was simplified for the sake of transforming it into an academic exercise.

A non-exhaustive list of constraints on performance added by the Privacy Sandbox:

All these constraints must be tested independently to understand their individual impact, but we won’t be able to assess their combined impact without testing them all together. The sum of the individual impacts will underestimate the total impact.

Conclusion

The results obtained by the best teams during this open exercise show that the impact of differentially private reporting on the ability to learn accurate Machine Learning models will be very significant. A business impact will directly follow, leading to a decrease in revenue not only for advertisers but also for publishers. These figures are the best that the 150+ participating teams, from academia and the ad-tech industry, could get out of the challenge, bringing their expertise and innovative solutions for everyone to see. We will publish more technical and detailed results in posts to come.

We also want to continue this way of collectively measuring the impact of the various aspects of the Privacy Sandbox. It is important to have clear and reproducible data on each part of the Privacy Sandbox, and to use the results to challenge or validate them. Thus, we may organize other challenges of this nature in the future.
