How We Achieved A Significant Business Impact by Harnessing AutoML — Part I

Assaf Klein
Outbrain Engineering
4 min read · Oct 22, 2020


In the past year, we have automated the Outbrain machine learning ecosystem to support our key machine learning operations, specifically click-through rate (CTR) prediction and conversion rate (CVR) prediction. By leveraging these tools, we were able to dramatically improve our key business KPIs with a relatively small team.

Photo by Craig Sybert on Unsplash

Machine Learning Automation

Producing a successful enterprise-scale machine learning product is a huge challenge and requires specialized skill sets. Typically, these projects require data scientists to craft the model, data engineers to establish data pipelines, and software engineers to maintain scalable and reliable serving in production. In most cases, a high-performance machine learning module is essential to achieving business goals. Machine learning development is inherently different from traditional software engineering projects: it is much more iterative and less predictable. The most challenging aspects of ML development are gathering the data and effectively iterating through different models until business performance needs are met. These challenges become even trickier when you consider the typical industry settings:

  • Data Freshness — high-performing systems require ML models to be constantly updated with fresh data
  • Constant Improvement — model performance is expected to improve monotonically over time, so there is an ongoing effort to improve modeling techniques, features, and more
  • A/B-Testing — the final and most reliable validation of a new model is to perform an A/B test in production
  • High SLA — the model serving layer should be scalable and capable of serving hundreds of millions of requests a day at low latency

We realized that automating the process of crafting models and acquiring reliable business performance metrics is the key to building a high-quality ML system. More specifically, in less than a year of leveraging the Outbrain AutoML suite, we improved our key business KPI by a few tens of percent.

Some reasons why AutoML is so impactful:

  1. Improved Accuracy — AutoML removes many manual operations from the process of model crafting. As a result, the task of continuously evolving our algorithms becomes that much easier.
  2. Simplified Deployment — AutoML makes model deployment and A/B-testing very easy.
  3. Democratization — AutoML provides an abstraction over several common machine learning operations, allowing people without a specialized ML background to perform model optimization tasks.

Outbrain AutoML

There are two major components to Outbrain’s AutoML infrastructure: offline and online. The offline part is all about dozens of machines constantly exploring new and better models. This component is called AutoML Search. Conversely, the online part is a series of tools that allow a new and potentially better model (found in the offline part) to be evaluated in a production A/B-test. If the candidate model yields better business KPIs in the A/B-test, it can quickly replace the main production model.
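To make the hand-off between the two components more concrete, here is a minimal sketch of a candidate model's lifecycle. The class and state names below are illustrative assumptions, not Outbrain's actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ModelState(Enum):
    """Hypothetical lifecycle states of a candidate model."""
    SEARCHING = auto()   # offline: AutoML Search is still exploring
    CANDIDATE = auto()   # offline: best model found, awaiting review
    AB_TEST = auto()     # online: serving a slice of production traffic
    PRODUCTION = auto()  # online: replaced the main production model
    REJECTED = auto()    # the A/B-test did not improve the business KPI


@dataclass
class CandidateModel:
    """Illustrative record tying offline search output to online evaluation."""
    model_id: str
    offline_metric: float                      # e.g. log-loss on a hold-out set
    state: ModelState = ModelState.CANDIDATE

    def promote(self, kpi_uplift: float, min_uplift: float = 0.0) -> None:
        # If the A/B-test beat the production model on the business KPI,
        # the candidate replaces it; otherwise we fail fast and discard it.
        self.state = (ModelState.PRODUCTION
                      if kpi_uplift > min_uplift
                      else ModelState.REJECTED)


candidate = CandidateModel(model_id="ctr_candidate_42", offline_metric=0.1432)
candidate.promote(kpi_uplift=0.012)  # e.g. +1.2% on the business KPI
print(candidate.state)               # ModelState.PRODUCTION
```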

At the heart of the offline component there is a high-performance, highly parallelized search engine. The search engine's goal is to automatically iterate over numerous potential models and look for the best-performing one. Data scientists constantly launch search tasks to test various hypotheses they have. These tasks can serve many purposes, such as trying new or different feature sets, exploring new feature formulations, tuning hyperparameters, and more. The process of hypothesis testing is automated, allowing the data scientist to test many hypotheses simultaneously in a short time.
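As an illustration, a search task could be described roughly as in the snippet below. The SearchTask structure, its field names, and the toy CTR hypothesis are assumptions made for the sake of the example, not the real interface.

```python
from dataclasses import dataclass
from itertools import product
from typing import Dict, List


@dataclass
class SearchTask:
    """Hypothetical description of one hypothesis a data scientist wants to test."""
    name: str
    feature_sets: List[List[str]]        # alternative feature groups to compare
    hyperparams: Dict[str, List[float]]  # grid of hyperparameter values to sweep


def expand_candidates(task: SearchTask):
    """Enumerate every (feature set, hyperparameter combination) the engine would score."""
    keys = list(task.hyperparams)
    for features in task.feature_sets:
        for values in product(*(task.hyperparams[k] for k in keys)):
            yield features, dict(zip(keys, values))


# A toy CTR hypothesis: does adding device_type help, and at which learning rate?
task = SearchTask(
    name="ctr_add_device_type",
    feature_sets=[["publisher_id", "hour_of_day"],
                  ["publisher_id", "hour_of_day", "device_type"]],
    hyperparams={"learning_rate": [0.01, 0.05], "l2": [1e-4, 1e-3]},
)

for features, params in expand_candidates(task):
    # In the real engine each candidate is trained and scored in parallel
    # on many machines; here we only enumerate the search space.
    print(features, params)
```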

Our online production system serves 1B requests a day, so our SLA is very strict. Any model used for serving is constantly updated with new data. One can imagine that without proper tooling and infrastructure, A/B-testing a new model could be a very complex task. The AutoML suite allows the data scientist to follow a few easy steps to safely deploy a new model to production and launch an A/B-test on it. Typically, at any given time, we have 3–5 simultaneous A/B-tests running in production. In many cases, an A/B-test does not improve the business KPI, and the idea is to fail fast.
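In spirit, those few steps could boil down to something like the snippet below. The ABTestConfig fields and the launch_ab_test helper are hypothetical, meant only to illustrate the kind of guardrails such tooling can provide, not Outbrain's actual deployment API.

```python
from dataclasses import dataclass


@dataclass
class ABTestConfig:
    """Illustrative A/B-test definition for a newly found candidate model."""
    candidate_model_id: str
    baseline_model_id: str
    traffic_fraction: float = 0.05      # start with a small slice to fail fast and cheaply
    primary_kpi: str = "revenue_per_impression"
    max_duration_days: int = 14


def launch_ab_test(cfg: ABTestConfig) -> None:
    """Sketch of the safety checks a deployment tool might run before splitting traffic."""
    assert 0.0 < cfg.traffic_fraction <= 0.5, "keep the candidate on a minority of traffic"
    assert cfg.candidate_model_id != cfg.baseline_model_id
    # In a real system this would register the test with the serving layer,
    # which keeps both models continuously updated with fresh data.
    print(f"A/B-test started: {cfg.candidate_model_id} vs {cfg.baseline_model_id} "
          f"on {cfg.traffic_fraction:.0%} of traffic, KPI={cfg.primary_kpi}")


launch_ab_test(ABTestConfig(candidate_model_id="ctr_2020_10_22_auto",
                            baseline_model_id="ctr_prod_current"))
```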

Example Use-case

Let’s look at an example use-case: a data scientist would like to test the best way to utilize an ad’s creative features, such as the textual title and the attached ad image, in a CTR prediction model. There are many ways to formulate title-related features and even more ways to encode image-related signals. All the data scientist needs to do is set up an AutoML search session, specifying which types of feature encodings to explore. Potentially thousands of encodings are evaluated, and the best-performing formulation is found.
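For instance, the search space for such a session could be expressed as the cross product of a few title encodings and a few image encodings. The encoding names below are plausible placeholders for illustration, not the specific options Outbrain evaluates.

```python
from itertools import product

# Hypothetical search space for the ad-creative use-case: every combination of a
# title encoding and an image encoding becomes one candidate model to evaluate.
title_encodings = ["bag_of_words", "char_ngrams", "pretrained_text_embedding"]
image_encodings = ["color_histogram", "cnn_embedding", "cnn_embedding_pca_64"]

search_space = [
    {"title_encoding": t, "image_encoding": i}
    for t, i in product(title_encodings, image_encodings)
]

print(f"{len(search_space)} candidate encodings to evaluate")
# The AutoML search engine would train and score a CTR model for each candidate
# (in parallel, across many machines) and report the best-performing formulation.
```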

Let’s assume that after searching for 10 hours, 5,000 models were evaluated and a new, potentially better model was found. The data scientist can now manually review this model and decide to launch an A/B-test in order to measure the business KPI — in our case, revenue-per-impression for CTR prediction. If the A/B-test shows an improvement in the business KPI, the new model can easily replace the existing one through a simple configuration change.
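That "simple configuration change" could be as small as re-pointing the serving layer at the new model id, as in this illustrative snippet. The config keys and model ids are made up for the example.

```python
import json

# Illustrative serving configuration before the switch.
serving_config = {
    "ctr_model": "ctr_prod_2020_09",   # current production model
    "cvr_model": "cvr_prod_2020_08",
}

# After the A/B-test shows a revenue-per-impression uplift, promoting the winner
# is a one-line change that the serving layer picks up on its next reload.
serving_config["ctr_model"] = "ctr_automl_2020_10_22"

print(json.dumps(serving_config, indent=2))
```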

Conclusion

In this post, we introduced the key concepts of the Outbrain AutoML system. The Outbrain AutoML suite has proved helpful in overcoming the inherent challenges of a high-scale, business-critical machine learning production system. AutoML helped us efficiently push forward our production model performance and the downstream business KPIs. In a later post, we will share more architectural details and specific use-cases of the Outbrain AutoML system.
