How to exclude non-performing ad inventory for better targeting

Niveditha P
MiQ Tech and Analytics
6 min read · Jan 3, 2023

Divyaprabha M, Data Scientist I, MiQ & Niveditha P, Data Scientist I, MiQ

Real-time bidding, the auction-based form of programmatic buying, is the fastest-growing area of digital advertising. It allows advertisers to access any ad space connected to the internet, also known as inventory, through programmatic auctions. On a single day, an advertiser may take part in millions of such auctions, spending thousands of dollars across multiple ad slots. To make the most of that spend, ads need to be displayed to relevant audiences.

The ideal solution in this scenario is to exclude non-performing inventories so that advertisers can focus on their target audiences, which are more likely to convert. The problem is that digital ad inventory is virtually limitless and dynamic, which makes identifying currently non-performing inventories challenging. One solution is to filter inventory by features, e.g. day of the week, time of day, site domain, or device type. This lets you exclude from your inventory selection the feature values whose audiences are least likely to convert. However, given the number of ads delivered every day, it's impossible for humans to make these decisions at the required scale. This is why we decided on a data-driven, automated approach to feature exclusions.

Goal Setting

We wanted to identify all the feature values that were delivering (spending) the most but contributing the least to the KPI, and recommend exclusions based on the campaign’s delivery and target goals.

For example, if a specific day of the week (the feature), say Sunday (the feature value), is spending a lot without positively contributing to the KPI, it will be excluded from targeting.

Adding to our toolbox

Our automated exclusion tool is one of the newest of the many tools that we have built to improve ad selection and ultimately drive efficiency for our advertisers at MiQ. It goes beyond simply excluding the worst offending features and instead uses a smart feedback control mechanism to learn from live campaigns to automatically and periodically update recommended exclusions.

Let’s first understand the dataset that feeds into the exclusion logic, before exploring the solution.

Datasets that feed exclusion logic

We primarily use aggregated data from the log-level feed containing details about all the auctions that MiQ has won. The final consolidated report has two parts: contextual feature combinations and metrics. Selected contextual features include site domain, city, time of day, etc. Metrics like impressions, clicks, conversions, CTR and CPM allow us to understand each feature combination’s delivery and performance.

Here’s how the consolidated sample dataset looks:
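Since the sample table itself isn't reproduced here, the snippet below sketches a purely illustrative version in pandas. Every column name and value is hypothetical; the point is only the shape of the report: one row per contextual feature combination, with delivery and performance metrics alongside.

```python
# Illustrative only: the schema and numbers are hypothetical, not MiQ's actual feed.
import pandas as pd

sample = pd.DataFrame(
    {
        # contextual feature combination
        "site_domain": ["news.example.com", "games.example.net", "blog.example.org"],
        "city":        ["London",           "New York",          "Bengaluru"],
        "time_of_day": ["morning",          "evening",           "night"],
        # delivery and performance metrics
        "impressions": [12_400, 8_900, 350],
        "clicks":      [31, 4, 2],
        "conversions": [3, 0, 1],
        "ctr":         [0.25, 0.04, 0.57],  # clicks per 100 impressions
        "cpm":         [2.10, 3.40, 1.80],  # cost per 1,000 impressions, in USD
    }
)
print(sample)
```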

The data science behind MiQ’s Line Item Exclusion

The first step was building an exhaustive list of feature values eligible for exclusion from all the contextual features. We named this list the ‘exclusion pool’.

A feature value enters the pool when its performance falls below the performance threshold while its delivery sits above the delivery threshold. For example, with a CTR threshold of 0.3 and an impression threshold of 10, any domain with a CTR under 0.3 and more than 10 impressions will be flagged.
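To make this concrete, here is a minimal sketch of how the pool could be built with pandas, assuming the aggregated report from the previous section and the example thresholds above. The function name, defaults, and the use of CTR as the performance metric are assumptions for illustration, not MiQ's production code.

```python
import pandas as pd

def build_exclusion_pool(report: pd.DataFrame,
                         feature: str,
                         performance_threshold: float = 0.3,  # flag values with CTR below this...
                         delivery_threshold: int = 10         # ...and impressions above this
                         ) -> pd.DataFrame:
    """Aggregate metrics per feature value and flag low performers with meaningful delivery."""
    agg = (
        report.groupby(feature, as_index=False)
              .agg(impressions=("impressions", "sum"), clicks=("clicks", "sum"))
    )
    agg["ctr"] = 100 * agg["clicks"] / agg["impressions"]
    flagged = agg[(agg["ctr"] < performance_threshold) &
                  (agg["impressions"] > delivery_threshold)]
    flagged = flagged.rename(columns={feature: "feature_value"})
    return flagged.assign(feature=feature)

# Repeat for every contextual feature and stack the results into one pool.
features = ["site_domain", "city", "time_of_day"]
exclusion_pool = pd.concat(
    [build_exclusion_pool(sample, f) for f in features],
    ignore_index=True,
)
```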

At this stage, the exclusion pool contains thousands of feature values. Removing every value in the pool would cause a sharp drop in delivery. This is something we want to avoid, as it would shrink the campaign's audience reach, which in turn would hurt performance itself. So, through the following steps, we apply a set of filters that automatically pick the final exclusions from the pool we've just created.

The first round of filtering selects, for each targeting strategy, the feature with the maximum calculated uplift on the KPIs.

The next set of filters then applies risk factor thresholds: limits on delivery below which the campaign would no longer be on track to meet its budget targets.
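As a rough sketch of these two filtering rounds, the snippet below assumes each candidate exclusion has already been scored with an expected KPI uplift and with the share of delivery the campaign would retain after the exclusion; both columns, the scoring behind them, and the threshold value are hypothetical.

```python
import pandas as pd

def select_exclusions(candidates: pd.DataFrame,
                      risk_factor_threshold: float = 0.8) -> pd.DataFrame:
    """Pick at most one exclusion per targeting strategy, then drop any exclusion
    that would push delivery below the risk factor threshold."""
    # Round 1: per strategy, keep the candidate with the largest expected KPI uplift.
    best = (
        candidates.sort_values("expected_kpi_uplift", ascending=False)
                  .groupby("strategy_id")
                  .head(1)
    )
    # Round 2: keep only candidates whose projected retained delivery stays above
    # the risk factor threshold (e.g. at least 80% of planned delivery).
    return best[best["retained_delivery_share"] >= risk_factor_threshold]

# Hypothetical candidate table: one row per (strategy, feature value) pair.
candidates = pd.DataFrame({
    "strategy_id":             [101, 101, 202],
    "feature":                 ["site_domain", "time_of_day", "city"],
    "feature_value":           ["games.example.net", "night", "New York"],
    "expected_kpi_uplift":     [0.12, 0.05, 0.09],
    "retained_delivery_share": [0.91, 0.75, 0.86],
})
recommended = select_exclusions(candidates)
```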

Maintaining long-term efficiency

Once the first application is complete, we need to refresh the recommended exclusions regularly. In a worst-case scenario, multiple iterative exclusions with the same risk factor threshold could exhaust all the targetable inventory and leave the campaign unable to meet its delivery requirements. To address this, we developed a feedback control mechanism similar to the early stopping methodology used in machine learning. The idea is to stop applying exclusions and backtrack to the previous best-performing state if the monitored KPI drops below the minimum threshold or stops improving. Otherwise, we continue to apply exclusions, gradually reducing the risk factor threshold on each run until we can no longer apply an exclusion without losing more delivery than the campaign can afford. This way we constantly assess the effectiveness of previous runs and make sure that the delivery and performance of the campaigns stay within the expected range.
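Below is a rough sketch of that feedback loop, written in the spirit of early stopping. The apply_exclusions and monitor_kpi callables stand in for the real activation and reporting steps, and all names and threshold values are assumptions; in practice the loop runs across scheduled refreshes of live campaigns rather than in a single process.

```python
def run_exclusion_cycles(apply_exclusions, monitor_kpi,
                         min_kpi: float,
                         initial_risk_threshold: float = 0.9,
                         step: float = 0.05,
                         min_risk_threshold: float = 0.6):
    """Keep applying exclusions while the KPI improves; otherwise backtrack."""
    best_kpi = monitor_kpi()   # KPI before any new exclusions in this cycle
    best_state = None          # exclusion list of the best-performing run so far
    risk_threshold = initial_risk_threshold

    while risk_threshold >= min_risk_threshold:
        state = apply_exclusions(risk_threshold)  # push a new exclusion list live
        kpi = monitor_kpi()                       # observe the campaign afterwards

        if kpi < min_kpi or kpi <= best_kpi:
            # KPI fell below the floor or stopped improving:
            # stop and roll back to the previous best-performing state.
            return best_state
        best_kpi, best_state = kpi, state
        risk_threshold -= step  # gradually allow a larger delivery trade-off

    return best_state
```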

Results and what’s next

When testing this approach on 50 live campaigns, we observed:

  • An average 10% reduction in CPM across all campaigns.
  • An average 22% increase in CTR, and a 10% increase in VCR (video completion rate).

Eliminating these non-performing inventories at regular intervals and keeping the ad space clean has enabled us to cut unnecessary spend. This has allowed our traders to explore new inventories or spend more on impressions that are more likely to hit our clients' KPIs. In the current version, we define ad inventories using a single contextual feature. In the future, we will enhance the solution to factor in combinations of features. This will let us target audiences more granularly by excluding the worst-performing feature combinations, so we can control the drop in delivery as well as improve the KPIs.

Considering the amount of data and inventory available to us, the precision and focus we bring to data science ensures our advertisers are getting the most from their budgets. Additionally, the unparalleled view of inventory and data that MiQ's partner agnosticism gives us provides the best possible insight into line item performance. While this is only the start, we're constantly updating and refining our solutions to improve our systems and target the right audience at the right time.

Niveditha and Divya are data scientists working in MiQ’s Bengaluru office. Outside of work, Niveditha enjoys listening to audiobooks and visiting new cities to sample local cuisines. Divya is a Potterhead and proud of it! She also loves reading and sketching portraits.
