Why “Real-Time Bidding” is a Big Bust

Technical learnings from past wasted RTB development

Rob Leathern
Oct 20, 2015

Real-time bidding (RTB) is an advertising technology that has been around since 2009. My former company (sold in 2013) had a very very small team of engineers that built one of the early real-time bidding systems. In fact, for a long time our RTB effort was just one brilliant engineer who also happened to be one of my best friends.

Update: I failed to include the following statistic when I originally published this essay, which helps make my point. Partially because of the overly-complex nature of this advertising technology, the majority (55%) of ‘programmatic’ advertiser revenue is flowing to the ad tech intermediaries and NOT the owners of the inventory! Nuts. [From the IAB, 7/2015]

I’ve known Jamie McCrindle since we became friends at Pretoria Boys’ High School in South Africa. He’s one of the best software engineers I know. My company (founded with David Li in 2008) already had our hands full building a self-service UI that integrated with Yahoo’s Right Media Exchange and Google AdWords. When I found out about RTB and the engineering challenges it entailed, I knew we needed extra help and got Jamie to help us build this as a part-time consultant — working from London to interface with our San Francisco team at some very odd hours. Eventually he joined our fledgling startup full time (still remote!).

Early in 2009, I was trying to figure out how much building an RTB system would cost and published this. In that article, I think I was right when I said:

RTB could cynically be seen as a way for ad exchanges/hubs to outsource their decisioning and infrastructure costs to others.

I probably overstated the per-impression costs in that article, but Jamie and I soon learned a lot more about costs (direct and opportunity costs) as we started to build.

Little standardization

The proliferation of ad exchanges and the lack of standardization made building a normalized real-time bidding engine difficult. In theory, OpenRTB is meant to solve that, but it’s still not used by many of the larger ad exchanges (Google, Facebook, etc.) and it didn’t exist at the time we were writing our bidding engine. Getting set up on each new ad exchange would take weeks if not months of back-and-forth with their support team. The further away the support people were from the engineering team, the longer it would take. The Yahoo/Right Media Exchange was by far the worst offender in terms of hobbling their support team.
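To make the lack of standardization concrete: every new exchange meant another one-off translation layer between their wire format and whatever internal model the bidder used. A minimal sketch of that kind of adapter layer might look like the following (the types and field names are illustrative, not our actual code):

```java
// Hypothetical sketch of an exchange-adapter layer: each exchange spoke its own
// wire format, so every integration had to be translated into one internal model.
import java.util.Optional;

// Our internal, exchange-agnostic view of a bid request.
record BidRequest(String auctionId, String userId, String pageUrl,
                  int widthPx, int heightPx, double bidFloorCpm) {}

// Our internal bid; the adapter turns it back into whatever the exchange expects.
record BidResponse(String auctionId, double bidCpm, String adMarkup) {}

// One of these per exchange: parse their request, format their response.
interface ExchangeAdapter {
    Optional<BidRequest> parse(byte[] rawRequest); // exchange wire format -> internal model
    byte[] format(BidResponse response);           // internal model -> exchange wire format
    byte[] noBid(String auctionId);                // cheapest possible "pass" reply
}
```

Multiply that by every exchange’s quirks, undocumented fields, and support back-and-forth, and the integration cost adds up quickly.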

Handling a lot of traffic

At one point we were handling 4 billion real-time bidding requests per day, with traffic peaking at hundreds of thousands of requests per second. The round trip for any bid request had to be under 120 milliseconds; in practice we aimed for an average of about 20 milliseconds from the request hitting our system to returning a response. All of this generated several terabytes of data per day, which often included user IDs, IP addresses, page URLs and more.
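Back-of-the-envelope, that daily volume already implies a punishing steady-state rate even before the peaks:

$$\frac{4\times10^{9}\ \text{requests/day}}{86{,}400\ \text{s/day}} \approx 46{,}000\ \text{requests/s on average}$$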

That kind of load is a great way to test both hardware and software. We started out on Linode using Apache as a reverse proxy to a bidder written in Java on the Jetty servlet engine. Apache fell over quite quickly, so we replaced it with Nginx, which fared a little better for a while but also added enough latency for us to replace it with HAProxy. Eventually, even the small amount of latency that HAProxy added couldn’t be justified, so we ended up having the ad exchanges hit our bidder directly. Jetty was the next to fall, supplanted by Netty, a lower-level Java networking library. Finally, the operating system itself decided it was being hit by a denial-of-service attack and just started dropping packets.

Linode wasn’t providing its ‘Node Balancers’ back then, so we decided to move over to AWS and the Elastic Load Balancer to distribute our load between multiple redundant bidders. Moving to AWS was something of an unusual step back then, as most of the demand-side platforms (DSPs) were choosing to deploy on their own bare-metal servers instead. It’s hard to know which would have been the better choice: what we lost in raw performance and hosting costs, we gained back in flexibility and lower operations overhead. With a little coaxing and a ticket to AWS support to ‘warm up the ELB’, the Elastic Load Balancer did manage to handle all the traffic we were receiving.
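The setup we converged on was deliberately minimal: a bare Netty HTTP endpoint sitting directly behind the load balancer, answering with a no-bid as quickly as possible whenever there was nothing worth doing. Something in that spirit might look like this (a sketch with made-up names, not our production code):

```java
// Minimal sketch of a Netty-based bid endpoint (hypothetical names, not our production code).
// The whole point is to get a response -- even just a no-bid -- back on the wire quickly.
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.*;

public final class BidServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup(); // one event loop per core by default
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override protected void initChannel(SocketChannel ch) {
                        ch.pipeline()
                          .addLast(new HttpServerCodec())
                          .addLast(new HttpObjectAggregator(64 * 1024))
                          .addLast(new BidHandler());
                    }
                });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}

// Replies 204 (no bid) immediately; real bidding logic would go here, with a strict time budget.
final class BidHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        FullHttpResponse noBid = new DefaultFullHttpResponse(
            HttpVersion.HTTP_1_1, HttpResponseStatus.NO_CONTENT, Unpooled.EMPTY_BUFFER);
        // Keep the connection open: exchanges reuse persistent connections at this volume.
        ctx.writeAndFlush(noBid);
    }
}
```

An empty 204 is the conventional cheap way to pass on an auction, which matters when most requests aren’t worth bidding on.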

The tragedy was that despite paying to handle that many requests, we and many of the other DSPs often ended up bidding on only a relatively small percentage of the traffic.

Having to manage our own spend caps

A huge problem with real-time bidding was that the exchange operators didn’t provide any budget capping or spend limits. Given how many impressions were flowing through the system and the complexity of distributed budget management, it was entirely possible for a DSP to have an expensive Knight Capital-type glitch. The incentives unfortunately mean that exchange operators aren’t particularly motivated to solve that problem, so when Facebook started FBX with budget controls in place, it came as something of a surprise.
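Enforcing caps on the DSP side boils down to an atomic check-and-reserve per campaign before every bid, across all bidder instances at once. One way to sketch that (using Redis and a small Lua script for atomicity; the key names, units, and approach are illustrative assumptions, not the mechanism we actually shipped):

```java
// Sketch of DSP-side spend capping: atomically reserve budget before bidding.
// Illustrative only -- the campaign keys, micro-dollar units, and Redis/Lua approach
// are assumptions, not a description of our actual production system.
import redis.clients.jedis.Jedis;
import java.util.List;

public final class SpendCap {
    // Atomically add `cost` to the campaign's spend unless it would exceed the cap.
    private static final String RESERVE_LUA =
        "local spent = tonumber(redis.call('GET', KEYS[1]) or '0') " +
        "local cost = tonumber(ARGV[1]) " +
        "local cap = tonumber(ARGV[2]) " +
        "if spent + cost > cap then return 0 end " +
        "redis.call('INCRBY', KEYS[1], ARGV[1]) " +
        "return 1";

    private final Jedis redis;

    public SpendCap(Jedis redis) { this.redis = redis; }

    /** Returns true if we may bid `costMicros` on this campaign without blowing its daily cap. */
    public boolean tryReserve(String campaignId, long costMicros, long dailyCapMicros) {
        Object ok = redis.eval(RESERVE_LUA,
                List.of("spend:" + campaignId),
                List.of(Long.toString(costMicros), Long.toString(dailyCapMicros)));
        return Long.valueOf(1L).equals(ok);
    }
}
```

A real system also has to reset or expire these counters daily and reconcile them against the exchange’s billed numbers; the point is simply that this bookkeeping lands on the bidder, not the exchange.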

Tracking a lot of users

For RTB to allow you to retarget users or use other data to find users to target, you have to synchronize the big database of user cookie identifiers you maintain on your servers with the user identifiers that each of the real-time bidding exchanges manages on their end.

This is a big reason why (as a consumer) you may see dozens of different providers launching pixel calls in your browser at any given time when you’re on a publisher website. In addition to the normal delivery of ads, each RTB ad call is potentially an opportunity for lots of other companies to match their user identifiers to multiple ad exchanges. In fact, a company called LiveRamp turned this into a big business: for $0.02 to $0.05 per thousand impressions/users, they would call your user-matching pixels in other people’s ad impressions (correlating user data is a big business: they were subsequently acquired by Acxiom in 2014 for $310 million). It’s another reason you may have no idea where and how often your digital identity is being shared.
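Mechanically, a cookie sync boils down to a match table: when an exchange’s sync pixel fires in the browser, you record their identifier for that browser against yours, and look it up again at bid time. A rough sketch (the key layout, TTL, and client are illustrative assumptions):

```java
// Sketch of the server side of a cookie sync: when an exchange redirects the user's
// browser to our match pixel, we record "their ID for this browser" against "our ID".
// Key layout, TTL, and the Jedis client are assumptions, not our actual schema.
import redis.clients.jedis.Jedis;

public final class CookieSync {
    private final Jedis redis;

    public CookieSync(Jedis redis) { this.redis = redis; }

    /** Called when the exchange's sync pixel fires, e.g. /sync?exchange=X&xuid=...&ouruid=... */
    public void recordMatch(String exchange, String exchangeUserId, String ourUserId) {
        // One mapping per (exchange, exchange-side ID); expire after ~30 days of inactivity.
        String key = "match:" + exchange + ":" + exchangeUserId;
        redis.setex(key, 30 * 24 * 3600, ourUserId);
    }

    /** At bid time: turn the exchange's user ID back into ours (or null if never synced). */
    public String lookup(String exchange, String exchangeUserId) {
        return redis.get("match:" + exchange + ":" + exchangeUserId);
    }
}
```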

We ended up tracking roughly 400 million different cookie IDs (primarily in the US and UK). We started with MongoDB, which was fairly fashionable at the time, but found, like Foursquare did, that it crashed hard when it ran out of memory. It turns out the trick to doing real-time bidding affordably is managing degradation gracefully. For the bidder, this meant that for auctions we knew we couldn’t process fast enough, we just sent back a no-bid as fast as possible to keep latency low. For the data store, graceful degradation meant expiring ‘old’ items just a little faster when memory got tight. MongoDB did not degrade gracefully. Fortunately, it turns out Redis does. That’s not to say Redis didn’t have its quirks: we managed to get an audience with antirez, the author of Redis, as a result of a bug that wiped out the entire database when we wrote to it too fast. Outside of that, Redis generally proved to be a fast and resilient data store for real-time bidding.
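In case “expiring old items a little faster” sounds hand-wavy, the idea is roughly this: shorten the TTL you attach to new writes as memory fills up, so stale profiles fall out before the store does. The thresholds and INFO parsing below are illustrative assumptions (and Redis’s own maxmemory eviction policies can get you much of the way there):

```java
// Sketch of graceful degradation on the data-store side: shorten the TTL attached to
// cookie records as Redis memory fills up, so old profiles expire before writes fail.
// Thresholds, key names, and INFO parsing are illustrative assumptions.
import redis.clients.jedis.Jedis;

public final class AdaptiveTtlStore {
    private static final int BASE_TTL_SECONDS = 30 * 24 * 3600; // ~30 days when memory is plentiful

    private final Jedis redis;
    private final long maxMemoryBytes; // the budget we want Redis to stay under

    public AdaptiveTtlStore(Jedis redis, long maxMemoryBytes) {
        this.redis = redis;
        this.maxMemoryBytes = maxMemoryBytes;
    }

    public void saveProfile(String cookieId, String profileJson) {
        redis.setex("profile:" + cookieId, currentTtlSeconds(), profileJson);
    }

    // The tighter memory gets, the faster new writes are set to expire.
    private int currentTtlSeconds() {
        double used = (double) usedMemoryBytes() / maxMemoryBytes;
        if (used < 0.70) return BASE_TTL_SECONDS;     // plenty of headroom: keep ~30 days
        if (used < 0.90) return BASE_TTL_SECONDS / 4; // getting tight: roughly a week
        return 24 * 3600;                             // nearly full: keep only the last day
    }

    // Parse used_memory out of `INFO memory`.
    private long usedMemoryBytes() {
        for (String line : redis.info("memory").split("\r?\n")) {
            if (line.startsWith("used_memory:")) {
                return Long.parseLong(line.substring("used_memory:".length()).trim());
            }
        }
        return 0L;
    }
}
```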

But of course, none of this was necessary in the old ad exchange world, where all the user data was maintained by the ad exchange in their cookie, and you merely placed “orders” in their system.

Making a decision to stop doing RTB

We’d added two other senior developers to Jamie’s team to work on RTB integrations and maintenance. We built out an interface and did data integrations with a variety of companies including NeuStar and BlueKai/Datalogix. And yet, we simply didn’t have the scale of customer buying demand to cover the infrastructure costs of listening to billions of ad impressions per day.

Further problems with fraud, brand safety, not knowing where the ad slots were on the page, and even (crazily) the possibility of bidding against ourselves led us to conclude that RTB was not worth the cost of listening to the bid streams across all the major ad exchanges.

We ultimately decided that simplifying what we were doing and focusing on a smaller number of advertising API partners who had data and inventory bundled together (like Facebook and Twitter), instead of building complex infrastructure to manage and bid on exchange traffic, was a far better way to scale. And scale we did as these companies grew their online/mobile ad businesses — we went from zero to well over $100,000,000 per year (not run rate!) of social media ads in under three years.

Can we please go back to direct publisher-advertiser relationships?

I’ve become convinced by all the fraud and waste I’ve seen over the last few years in the digital ad business that we may have to go back to the way things were — before ad exchanges, ATDs (agency trading desks), SSPs (supply-side platforms), DSPs, and ad networks. This is what the online display world looks like now (except add to this diagram a few anti-fraud providers, viewability trackers, and who knows what else!?):

Source: IAB UK

Fraudulent impressions and malware arise from the multiple handoffs between all these intermediaries. Oh, to return to a simpler time…

Source: IAB UK (https://www.youtube.com/watch?v=1C0n_9DOlwE)


Rob Leathern

Entrepreneur and product leader, prev at Google and Facebook: security, privacy, ads & integrity