Making Online Advertising Less Obnoxious with Frequency Management

Rolland He
Quantcast Engineering
6 min read · Jun 28, 2019


According to a consumer survey published by HubSpot in 2016, “83% of respondents agree that not all ads are bad, but they want to filter out the really obnoxious ones.” At Quantcast, we know that when digital advertising is obnoxious, everyone loses — marketers, publishers, and consumers. So we’re constantly trying to make ads more useful and less annoying.

Lately, we’ve been tackling one category of obnoxious ad: the repeated banner ad (you know, that same ad that seems to appear on every single site you visit). We wanted to help our advertisers set a desired frequency for their campaigns, making them frequent enough to be memorable, but not so frequent as to waste budget or drive away potential customers. Even though it seems straightforward — just track the number of ads that have been shown to each user, right? — frequency management presents a number of complexities in a real-time bidding environment.

Goal

We solve a multi-objective optimization problem when we make bids on ad opportunities (this happens millions of times a second), and for most of the goals we’re trying to optimize, we’re aiming at a single number, like our advertiser’s target budget or audience composition. For frequency management, the natural counterpart would be an average frequency: # of ads delivered (AKA impressions) / # of users who saw ads for the campaign.

But the desired average could be maintained even if a few users get bombarded with ads while others only see a single ad. Our goal was to have most users see the ads enough times to get a memorable impression of the product, but not so many times that they become annoyed. So we needed a way to control for both the average frequency and the frequency distribution.
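To see why controlling only the average isn’t enough, here’s a toy illustration (with made-up impression logs and hypothetical cookie IDs, not real campaign data): two campaigns can report the same average frequency while delivering very different experiences to individual users.

```python
from collections import Counter

# Hypothetical impression logs: cookie ID -> number of times we served the ad.
campaign_a = Counter({"u1": 5, "u2": 5, "u3": 5, "u4": 5})    # even delivery
campaign_b = Counter({"u1": 17, "u2": 1, "u3": 1, "u4": 1})   # one user bombarded

for name, log in [("A", campaign_a), ("B", campaign_b)]:
    avg = sum(log.values()) / len(log)  # impressions / unique cookies
    print(f"campaign {name}: average frequency = {avg:.1f}, "
          f"per-user counts = {sorted(log.values())}")
```

Both campaigns average five impressions per user, but only campaign A comes close to the bell-shaped distribution we’re after.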

Finally, consider the difference between a user seeing an ad once a day for five days versus once a minute for five minutes. Even though the total frequency is identical, a change in the cadence (the time interval between repeat impressions) could mean the difference between a good and a bad experience. Therefore, our solution also needed to maintain the right cadence.

Implementation

Whether or not we show an ad on a given page depends on whether we win the online auction for that ad opportunity. In the backend, we generate billions of potential bids per second, trying to make the optimal one that will push us closer to our advertisers’ various (and often competing) campaign goals. Introducing a new goal of average frequency required updating and re-tuning this complex system; additionally, we needed to implement some new methods of control for the frequency distribution and the cadence.

Parts of our bidding system use proportional-integral-derivative (PID) control, a type of closed-loop feedback controller that looks at the error between where we are and where we want to be, and sums a proportional, an integral, and a derivative term to update our bidding behavior in a way that keeps us on track toward the advertisers’ goals. When we introduced a new campaign goal targeting an average frequency (impressions / cookies), we updated our PID control system to adjust our bidding behavior so that we favor ad opportunities that help us hit that desired average. After some tuning, our PID system was able to home in on the target average and maintain it over a month-long campaign.
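To make the structure of that feedback loop concrete, here’s a minimal sketch of a discrete PID controller in Python. The class, the gain values, and the idea of applying the output to a bid multiplier are illustrative assumptions for this post, not our production implementation.

```python
class PIDController:
    """Minimal discrete PID loop (illustrative sketch, not production code)."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target: float, observed: float, dt: float = 1.0) -> float:
        """Return an adjustment based on the error between target and observation."""
        error = target - observed
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Sum the proportional, integral, and derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: nudge a bid multiplier toward a target average frequency
# as daily observations come in (all numbers are made up).
pid = PIDController(kp=0.05, ki=0.01, kd=0.01)
bid_multiplier = 1.0
target_avg_frequency = 8.0  # desired impressions per cookie over the campaign
for observed in [2.0, 4.5, 6.8, 8.5, 8.1]:
    bid_multiplier *= 1.0 + pid.update(target_avg_frequency, observed)
    print(f"observed avg = {observed:.1f}, bid multiplier -> {bid_multiplier:.2f}")
```

In practice the controller output would feed into whichever bidding parameter moves delivery toward the goal; the point here is just the shape of the feedback loop.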

But as we mentioned before, setting an average frequency wasn’t enough to achieve our goals; we also had to make sure the frequency distribution resembled a bell curve. So in addition to keeping the average frequency under control, we implemented a parallel control loop that closely monitors the frequency distribution and adjusts the maximum number of impressions we can show any individual cookie. The basic mechanism: if we’re undershooting the target frequency, we raise the frequency cap, and if we’re overshooting, we lower it. We also placed a hard ceiling on that cap, a firm limit on how many repeat ads can ever be shown to a single user.
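Here’s a simplified sketch of what such a parallel loop could look like. The step sizes, limits, and function name are hypothetical, and the real controller watches the full distribution rather than just a single average.

```python
def adjust_frequency_cap(current_cap: int,
                         observed_avg: float,
                         target_avg: float,
                         hard_limit: int = 20,
                         floor: int = 1) -> int:
    """Nudge the per-cookie frequency cap toward the target (illustrative sketch)."""
    if observed_avg < target_avg:
        current_cap += 1   # undershooting: allow a few more repeats per cookie
    elif observed_avg > target_avg:
        current_cap -= 1   # overshooting: rein in repeats
    # The cap can never exceed a firm hard limit (or drop below one impression).
    return max(floor, min(hard_limit, current_cap))


# Hypothetical usage over a few control intervals.
cap = 10
for observed in [4.2, 5.0, 6.1, 7.4, 8.3]:
    cap = adjust_frequency_cap(cap, observed, target_avg=8.0)
    print(f"observed avg = {observed:.1f}, frequency cap -> {cap}")
```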

Finally, to serve ads at the appropriate cadence, we introduced some logic to prevent bombarding a single cookie with multiple impressions in a short span of time. We apply a factor that devalues bids for cookies we’ve served to recently (within the past several hours), and this devaluation decays exponentially over time.
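A minimal sketch of that devaluation factor might look like the following; the half-life and the function name are assumptions for illustration, not our actual parameters.

```python
def recency_multiplier(hours_since_last_impression: float,
                       half_life_hours: float = 6.0) -> float:
    """Multiplier applied to a bid for a recently served cookie (illustrative sketch).

    The penalty is largest right after an impression and halves every
    `half_life_hours`, so the multiplier climbs back toward 1.0 over time.
    """
    penalty = 0.5 ** (hours_since_last_impression / half_life_hours)
    return 1.0 - penalty


for hours in [0.5, 1, 3, 6, 12, 24]:
    print(f"{hours:>4} h since last impression -> bid value x {recency_multiplier(hours):.2f}")
```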

Results

After implementing our new frequency goal in our multi-goal optimization system, we were able to achieve a much better frequency distribution than before, as shown in this campaign:

Comparison of our frequency distributions before and after we implemented our frequency controller

In this example, we’ve greatly mitigated the repeated banner ad problem. Before, one third of the total impressions in the campaign were shown to users who saw more than 50 impressions (shown by the gray bars); additionally, a nontrivial number of users (cookies) were served the same ad more than 20 times (shown by the red bars). After implementing frequency control, we see these extremes mostly disappear. Another positive outcome is that frequency management should make ads more effective for marketers. Before, 45% of users would only see the ad once; now, many more users see the ad enough times for it to have an impact.

Challenges

While frequency management has the potential to be a big win for consumers, marketers, and publishers, there are some inherent difficulties to solving it entirely.

For one, our system identifies a unique user through an anonymized cookie. But cookies are a highly imperfect representation of unique users because of their notorious churn rate; they constantly get deleted and replaced. Cookie churn is an unavoidable challenge affecting the ad tech industry as a whole, but it’s particularly problematic for frequency management, where the count of unique users is a key metric.

Additionally, implementing frequency management started to push the limits of our PID control system. PID control is robust and fast, but it was designed for simple single-input/single-output (SISO) systems. The more variables we add, and the more correlations that spring up among them, the less accurate and stable PID becomes. While implementing frequency management and other new goals, we’ve seen PID struggle to converge on the optimal solution even after extensive tuning. As such, we’ve turned to other methods of control, like Model Predictive Control (MPC).

MPC is a term encompassing a large number of different ideas; broadly, it means using historical data to build a model that predicts how a process will perform given a set of input signals, and re-optimizing those inputs at each time step as new information arrives. Compared to PID, MPC can handle nonlinear dynamics in the control problem and can reach equilibrium faster. On the other hand, it is much more computationally and memory intensive, more laborious to implement, and requires training and maintaining a potentially complex model.
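As a rough illustration of the receding-horizon idea behind MPC, here’s a toy sketch: it assumes a made-up linear model of how a control input (say, a bid multiplier adjustment) moves delivered frequency, plans a whole sequence of moves over a short horizon, applies only the first one, and re-plans at the next step. It uses scipy for the optimization and is not our actual controller.

```python
import numpy as np
from scipy.optimize import minimize


def mpc_step(current_freq: float,
             target_freq: float,
             gain: float = 0.8,
             horizon: int = 5,
             reg: float = 0.1) -> float:
    """Plan a control sequence over the horizon and return only its first move."""

    def cost(u_seq: np.ndarray) -> float:
        freq, total = current_freq, 0.0
        for u in u_seq:
            freq = freq + gain * u              # toy model: predicted next state
            total += (freq - target_freq) ** 2  # penalize tracking error
            total += reg * u ** 2               # penalize aggressive control moves
        return total

    result = minimize(cost, x0=np.zeros(horizon), method="L-BFGS-B")
    return float(result.x[0])


# Hypothetical usage: re-plan at every step as new observations arrive.
freq = 2.0
for _ in range(5):
    u = mpc_step(freq, target_freq=8.0)
    freq += 0.8 * u + np.random.normal(scale=0.2)  # "real" process, with noise
    print(f"control move = {u:.2f}, delivered frequency = {freq:.2f}")
```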

___________________________________________________________________

In tackling frequency management, we’ve made a dent in the number of obnoxious ads online, improving the outcomes for advertisers, publishers, and end users. Frequency management is a complex problem that has had implications for our entire control system, and we aren’t done solving it. Want to help make online advertising more useful and less annoying? Check out some of Quantcast’s open roles here.
