Calculating Ad Performance Metrics in Real Time

Authors: Sergey Podlazov, Rahul Srivastava

Zulily is a flash sales company. We post a product on the site, and poof… it’s gone in 72 hours. Online ads for those products come and go just as fast, which doesn’t leave us much time to manually evaluate the performance of the ads and take corrective actions if needed. To optimize our ad spend, we need to know in real time how each ad is doing, and this is exactly what we engineered.

While we track multiple metrics to measure the impact of an ad, I am going to focus on one that provides a good representation of the system architecture. This is an engineering blog after all!

The metric in question is Cost per Total Activation, or CpTA for short. The formula for the metric is simple: divide the total cost of the ad by the number of customer activations. We call the numerator in this formula “spend” and refer to the denominator as an “activation”. For example, if an ad costs Zulily $100 between midnight and 15:45 PST on January 31 and results in 20 activations, the CpTA for this ad as of 15:45 PST is $100/20 = $5.
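Expressed as a quick Python sketch (the function and variable names are ours, purely for illustration):

```python
def cost_per_total_activation(spend_dollars: float, activations: int) -> float:
    """CpTA: total ad spend divided by the number of customer activations."""
    if activations == 0:
        return float("inf")  # no activations yet, so the cost per activation is undefined
    return spend_dollars / activations

# The example from the text: $100 of spend and 20 activations as of 15:45 PST.
print(cost_per_total_activation(100.0, 20))  # 5.0
```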

Here’s how Zulily collects this metric in real time. For the sake of simplicity, I will skip the archiving processes that are sprinkled on top of the architecture below.

The source of the spend for the metric is an advertiser API, e.g. Facebook. We’ve implemented a Spend Producer (in reference to the Producer-Consumer model) that queries the API every 15 minutes for live ads and pushes the spend into MongoDB. Each spend record has a tracking code that uniquely identifies the ad.
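A stripped-down version of that poll loop might look roughly like the sketch below; the advertiser client, collection, and field names are assumptions for illustration, not our production code:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

spend_collection = MongoClient("mongodb://localhost:27017")["ads"]["spend"]

def poll_spend(advertiser_client):
    """Fetch spend for live ads and upsert one record per tracking code."""
    for ad in advertiser_client.get_live_ads():  # hypothetical wrapper around e.g. the Facebook API
        spend_collection.update_one(
            {"tracking_code": ad["tracking_code"]},
            {"$set": {"spend_usd": ad["spend_usd"],
                      "as_of": datetime.now(timezone.utc)}},
            upsert=True,
        )

# A scheduler (cron, a simple sleep loop, etc.) runs poll_spend every 15 minutes.
```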

The source for the activations is a Kafka stream of purchase orders that customers place with Zulily. We consume these orders and push them into an AWS Kinesis stream, which gives us the ability to process and archive the orders without putting extra strain on Kafka. It’s important to note that relevant orders also carry the ad’s tracking code, just like the spend records. That’s the link that glues spend and activations together.
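The bridge itself is a small consumer. A rough sketch with kafka-python and boto3 (topic, stream, and field names are placeholders) could look like this:

```python
import json

import boto3
from kafka import KafkaConsumer

kinesis = boto3.client("kinesis", region_name="us-west-2")
consumer = KafkaConsumer("purchase-orders", bootstrap_servers=["kafka:9092"])

for message in consumer:
    order = json.loads(message.value)
    # Partition by tracking code so all orders for one ad land on the same shard.
    kinesis.put_record(
        StreamName="purchase-orders-raw",
        Data=json.dumps(order).encode("utf-8"),
        PartitionKey=order.get("tracking_code", "unknown"),
    )
```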

The Activation Evaluator application examines each purchase and determines whether the purchase is an activation. To do that, it looks up the customer’s previous purchase in a MongoDB collection, keyed by the customer ID on the purchase order. If there is no previous purchase, or the most recent one is older than X days, the purchase counts as an activation. The Activation Evaluator then updates the customer record with the date of the new purchase. To make sure we don’t drop any data if the Activation Evaluator runs into issues, we don’t move the checkpoint in the Kinesis stream until the write to Mongo is confirmed.
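In simplified Python, the evaluation step boils down to something like the sketch below; the “X days” window, collection, and field names are made up for illustration:

```python
from datetime import timedelta
from pymongo import MongoClient

ACTIVATION_WINDOW = timedelta(days=90)  # stands in for "X days"; the real value differs

customers = MongoClient("mongodb://localhost:27017")["ads"]["customers"]

def is_activation(order: dict) -> bool:
    """True if the customer has no purchase within the activation window."""
    purchase_time = order["purchased_at"]  # assumed to already be a datetime
    record = customers.find_one({"customer_id": order["customer_id"]})
    activated = (
        record is None
        or purchase_time - record["last_purchase_at"] > ACTIVATION_WINDOW
    )
    # Write the new purchase date first; the Kinesis checkpoint only advances
    # after this write is confirmed, so a failure just means re-processing.
    customers.update_one(
        {"customer_id": order["customer_id"]},
        {"$set": {"last_purchase_at": purchase_time}},
        upsert=True,
    )
    return activated
```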

The Activation Evaluator sends evaluated purchases into another Kinesis stream. Chaining Kinesis streams is a pretty common pattern for AWS applications, as it allows for separation of concerns and makes the whole system more resilient to failures of individual components.

The Activation Calculator reads the evaluated purchases from the second Kinesis stream and captures them in Mongo. We index the data by tracking code and timestamp, and voila, a simple count() will return the number of activations for a specified period.
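With an index on (tracking code, timestamp), the read side really is that small. A hedged pymongo sketch (collection and field names are illustrative; count_documents plays the role of the count() above):

```python
from datetime import datetime
from pymongo import ASCENDING, MongoClient

activations = MongoClient("mongodb://localhost:27017")["ads"]["activations"]
activations.create_index([("tracking_code", ASCENDING), ("purchased_at", ASCENDING)])

def activation_count(tracking_code: str, start: datetime, end: datetime) -> int:
    """Number of activations for an ad within a time window."""
    return activations.count_documents({
        "tracking_code": tracking_code,
        "is_activation": True,
        "purchased_at": {"$gte": start, "$lt": end},
    })
```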

The last step in the process is to take the spend and divide it by the number of activations. Done.

With this architecture, Zulily measures a key advertising performance metric every 15 minutes and uses it to pause poorly performing ads. The metric also serves as an input for various Machine Learning models, but more on those in a future blog post… Stay tuned!!

Originally published at https://zulily-tech.com on February 1, 2018.
