Displaying Billions of Ads Per Week on The Open Web
Behind The Scenes
The goal of this article is to shed light on what the Format feature team does at Teads, and on our challenges and responsibilities. The team’s mission revolves around the ad experience: everything that happens when you read an article on a website that carries our tag and you see one of our ads. This involves:
- finding a place for the ad on the page,
- requesting an ad tailored for you in this context,
- and checking if you actually watch it.
Ads are delivered on millions of contexts, which are a combination of thousands of different websites on tens of thousands of different devices, and either directly in the browser (Safari, Chrome, Edge…) or in a Web view in an app.
Ads must work seamlessly regardless of the context: our goal is that you never notice an ad loading. We want to display ads, not spinners or blank spaces, and we certainly cannot afford to break the site our script is integrated on.
We call “Teads Format” our piece of client-side technology that runs advertising creatives inside editorial content.
This Framework includes several components:
- A generic and simple tag that is embedded into our partners’ websites.
- A script that is called by the tag to detect if and where we can create a placement to run an ad on a given page.
- A player, responsible for loading and playing the ad in the identified placement as well as firing tracking events.
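To make the first component more concrete, here is a minimal sketch of what such a generic tag loader could look like. The domain, path, and parameter names are invented for illustration and are not Teads’ actual endpoints:

```typescript
// Hypothetical loader sketch: the generic tag builds the script URL for a
// publisher and injects it asynchronously into the page.
function buildTagUrl(publisherId: string): string {
  // Invented CDN domain and query parameter, for illustration only.
  return `https://cdn.example-ads.com/format.js?pid=${encodeURIComponent(publisherId)}`;
}

function injectFormatScript(doc: Document, publisherId: string): HTMLScriptElement {
  const script = doc.createElement("script");
  script.async = true; // load without ever blocking the publisher's page rendering
  script.src = buildTagUrl(publisherId);
  doc.head.appendChild(script);
  return script;
}
```

Loading the script asynchronously is what keeps a tag like this from delaying the publisher’s page rendering, which is essential for a third-party asset.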
To constantly adapt this framework to browser changes like ITP 2.0, Heavy Ad Interventions, or the removal of third-party cookies, on top of our own product innovations, we are always on the lookout for new recruits.
If you’re a developer curious about the diverse challenges the Web has to offer beyond a React app running on Chrome, contact me!
In this article, we will dive into what it means to work with us by taking a hypothetical but concrete use case and walking you through the whole feature development process.
For the sake of the example, imagine as a requirement that we would like to add a new button to close the ad, and to let the Publisher decide whether it should be enabled on their pages.
To properly do this at our scale we have to go through three phases and their respective challenges:
- Build: Actual implementation of the feature on our stack
- Execution: Making sure it will work as expected
- Impact Analysis: Measuring the impact and verifying alignment with the initial product and technical expectations
Step 1: Include this logic in our Format
This code is developed in TypeScript, and one of its specificities is that, for isolation purposes, it does not contain any third-party libraries: every piece of code in our asset is 100% produced in-house and tested by Teads engineers.
That is not by choice but out of necessity: we have already encountered issues with websites that included libraries overriding the standard APIs of common browser objects.
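As an illustration of the kind of defense this forces on us, here is a sketch of a heuristic check that a native API has not been replaced by code on the host page. Native functions stringify to a body containing "[native code]" in all major browsers, though this remains a heuristic rather than a guarantee:

```typescript
// Heuristic check that a function is still the browser's native implementation
// and has not been overridden by a library on the host page.
function looksNative(fn: unknown): boolean {
  return (
    typeof fn === "function" &&
    // Native functions serialize to "function name() { [native code] }".
    /\[native code\]/.test(Function.prototype.toString.call(fn))
  );
}
```

A script could run such checks at startup and fall back to safer code paths when a critical API appears to have been tampered with.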
This new feature has to precisely follow the SSP server’s configuration and act accordingly in a reliable manner in all possible contexts (more on that later).
Step 2: Retrieve the option during delivery
Following up on our feature development, we need to be able to use this new information (should I show a close button or not?) when our script is executed on the publishers’ pages. This requires updating cache components in our SSP (Supply Side Platform) to let this information flow to our Format script. Based on this information, the Format can adapt when it receives an ad to be displayed.
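Here is a sketch of how the Format could consume this option once the SSP forwards it; the response shape and field names below are hypothetical:

```typescript
// Hypothetical shape of the delivery response once the SSP forwards the new
// option; the field names are invented for this example.
interface AdResponse {
  creativeUrl: string;
  settings?: { closeButtonEnabled?: boolean };
}

// Only show the button when the flag is explicitly true.
function shouldShowCloseButton(response: AdResponse): boolean {
  return response.settings?.closeButtonEnabled === true;
}
```

Defaulting to hiding the button when the flag is absent means an incomplete cache propagation can never change existing behavior, which matters when the option travels through several back-end components.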
As a newcomer, you would get support from owning teams. For instance, if someone is not familiar with the Scala programming language which is omnipresent in our back-end systems, we will set up training sessions and do pair programming to get up to speed.
On a side note, each PR is extensively tested following state-of-the-art practices (unit, integration and functional tests).
Step 3: Add this feature to our back-office interface for Publishers
Now that this minimal feature works, we can have a look at how to give our publishers access to it.
We have a dedicated web application, Teads for Publishers (TFP), that is used by our publisher partners to define how we are integrated on their pages and to let them monitor ad delivery. The next step for our new feature would be to update the TFP UI to add this close button option and let the user activate or deactivate it.
Now that we are able to run this feature end-to-end, and that it is automatically tested for regressions on the most prominent contexts using functional testing on a cloud-based test platform, we need to ensure that it works fine in the long tail of contexts we are addressing. As a reminder, our script is executed as a third party on our publisher partners’ pages: we have to make sure that we neither overflow our placement nor impact the rest of the page, and that the page does not impact us either.
Our execution environment is defined by the following:
- A device, either desktop or mobile, and the associated operating system.
- A browser, despite the convergence in the browser engine field, we still have to deal with a variety of versions and a fast-paced release timeline. If a new browser version changes the way the DOM or the CSS is handled, it can break the UX.
- A website, today our technology is live on thousands of sites, and the integration differs from one site to the other. We also support specific web publishing frameworks like AMP or sandboxing APIs like SafeFrame, a managed API-enabled iframe that opens a line of communication between the publisher page and the ad it contains.
- An internet connection with varying latency and bandwidth. Many readers browse articles while commuting and can have erratic connections, we need to support these cases as well.
On any given day, when taking into account the various versions of these components we are looking at 90 million possible combinations. In that context, we have to be knowledgeable about the main differences between platforms.
Not every new browser version brings breaking changes; however, they frequently introduce new features that can be of interest to us. To make sure that our script is always able to run, we have to carefully follow the browsers’ developments.
One of the biggest challenges we face is that we usually cannot use the same API everywhere as it might not be supported right from the start in all browsers. On top of that, even when the same API or feature is widely implemented, there is still a risk that it will not behave the same in all situations. Hence, we have to measure it precisely.
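A typical pattern in this situation is feature detection with a fallback. The simplified sketch below prefers IntersectionObserver when available and falls back to a coarser scroll-based estimate on engines that lack it:

```typescript
type VisibilityCallback = (visibleRatio: number) => void;

// Watch how much of an element is in the viewport, using the best API available.
// Returns a cleanup function. This is a simplified sketch, not production code.
function observeVisibility(el: Element, cb: VisibilityCallback): () => void {
  if (typeof IntersectionObserver === "function") {
    const observer = new IntersectionObserver(
      (entries) => entries.forEach((entry) => cb(entry.intersectionRatio)),
      { threshold: [0, 0.5, 1] }
    );
    observer.observe(el);
    return () => observer.disconnect();
  }
  // Fallback: recompute on scroll from bounding rectangles (coarser, but works
  // on engines that predate IntersectionObserver).
  const onScroll = () => cb(estimateRatio(el.getBoundingClientRect(), window.innerHeight));
  window.addEventListener("scroll", onScroll, { passive: true });
  return () => window.removeEventListener("scroll", onScroll);
}

// Fraction of the element's height currently inside the viewport, in [0, 1].
function estimateRatio(
  rect: { top: number; bottom: number; height: number },
  viewportHeight: number
): number {
  const visible = Math.min(rect.bottom, viewportHeight) - Math.max(rect.top, 0);
  return rect.height > 0 ? Math.max(0, Math.min(1, visible / rect.height)) : 0;
}
```

The two code paths must report comparable numbers; otherwise the metrics built on top of them would silently diverge between browser populations, which is exactly the kind of inconsistency we have to measure for.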
Impact Analysis Challenge
Our Format is also one of the main providers of Analytics data. We are responsible for critical activity events (progress events, etc.) that have to be triggered consistently in all contexts. This leads us to our Impact Analysis challenge and to how important A/B testing is for us.
Now that we have made sure that our new feature is executed correctly, we have to assess its impact on a set of business metrics like the completion or visibility rates.
Indeed, we cannot simply put this in production for the two billion readers we see every month without proper testing and without knowing the exact impacts of the feature.
In our example, we will want to analyze if users are actually using this new close button, although by design they can simply skip the ad by scrolling down the page.
We will gradually deploy this new feature to a small fraction of our audience and run A/B tests to properly measure the impact on about 20 KPIs. We may observe that we need to change the way the feature is implemented, and iterate in a Build-Measure-Learn feedback loop until the A/B test proves to be aligned with our initial impact expectations.
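Deterministic bucketing is one common way to split the audience for such a test. The sketch below hashes a stable identifier into [0, 100) with FNV-1a (an arbitrary hash choice for the example) and enables the feature below a rollout percentage:

```typescript
// Map a stable identifier to a bucket in [0, 100) using the FNV-1a hash.
function bucketOf(id: string): number {
  let hash = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < id.length; i++) {
    hash ^= id.charCodeAt(i);
    hash = Math.imul(hash, 16777619); // FNV-1a 32-bit prime
  }
  return (hash >>> 0) % 100;
}

// A reader is in the test group when their bucket falls under the rollout size.
function isInTestGroup(id: string, rolloutPercent: number): boolean {
  return bucketOf(id) < rolloutPercent;
}
```

Because the bucket is derived from a stable identifier, a given reader consistently sees the same variant across page views, which is what makes the A/B comparison sound.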
Most features require the creation of a dedicated dashboard to perform this analysis, and the setup of alerts if need be. In that case, we need to be able to explore the data, for example using our cloud data warehouse solution. Great, we have now released this first feature to two billion users, and it’s only the beginning of its life!
Other challenges worth sharing
We have discussed a hypothetical feature addition, but on top of this diverse set of responsibilities, we are also in charge of creating new products, web apps, and tools from scratch.
For example, we are investing heavily in a feedback loop that scans creatives for certain properties, like response size, request count, or duration, and that can pause the delivery of ads that do not follow our guidelines, in order to ensure quality at scale.
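The guideline check itself could be as simple as the sketch below; the thresholds are invented for the example and are not Teads’ real limits:

```typescript
// Measurements collected for a creative by the scanning feedback loop.
interface CreativeStats {
  responseSizeKb: number;
  requestCount: number;
  loadDurationMs: number;
}

// Flag a creative that exceeds any of the (hypothetical) guideline limits,
// so its delivery can be paused.
function violatesGuidelines(stats: CreativeStats): boolean {
  return (
    stats.responseSizeKb > 4096 || // oversized payload
    stats.requestCount > 100 || // too many network requests
    stats.loadDurationMs > 15000 // too slow to load
  );
}
```

Thresholds of this kind also echo what browsers themselves enforce, for instance Chrome’s Heavy Ad Intervention mentioned earlier.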
Being able to respond quickly in a fast-paced environment also requires the appropriate strategy. Among other things, we have long embraced a “Fast Lane” in our team’s organization: it handles small requests, features, or bugs that need to be solved quickly. The main goal is to protect teammates working on strategic roadmap items from context switching.
As previously mentioned, our first line of defense against production issues is a wide range of automated testing, from unit tests to visual screenshot comparisons. We have also automated various performance impact measurements, which keeps us constantly aware of a given code change’s impact on performance, from asset size to Core Web Vitals scores.
The Format team offers a wide range of technical challenges from back-end to front-end that we need to solve at scale. More often than not, we cannot find ready-to-use solutions on the internet which leaves room for creativity as well.
If you are interested in working on these stimulating topics, we are constantly looking for new recruits to join our team.
Acknowledgments. I’d like to sincerely thank Benjamin Davy as well as the whole Format team for their meaningful contribution and careful review of this article and above all for being an awesome team to work and go through a pandemic with.