This Unique Test Enabled Us to Automate PPC Campaigns

Automating PPC user acquisition is a powerful tool, but testing that automation posed a real challenge. Here is how Natural Intelligence found a comprehensive solution

Matar Bar Sheshet
Natural Intelligence
5 min read · Sep 9, 2020


The digital world allows us to examine the performance of an online product in a precise and quantifiable way. One of the most popular methods for monitoring performance is A/B testing (or split testing). The idea is to compare two versions of the same product (whether it’s a pop-up banner, a CTA button, a header image, or the copy of a sub-header) and to track and document user behavior down to the smallest detail.

One of the use cases for A/B testing at Natural Intelligence is measuring how our pay-per-click (PPC) campaigns perform when managed by an advanced algorithm rather than by hand. Our main challenge was creating a suitable testing environment: one that would let us examine such a major change to our campaigns at a large scale, without interfering with the ongoing manual campaigns.

Growing by automating

Before getting into how we tackled this problem, here’s a little background:

Natural Intelligence (NI) is a global leader in comparison marketplaces. Some of the traffic to our sites originates from high-intent search activity. In fact, NI is one of Google’s top 50 clients and spends hundreds of millions of dollars per year on PPC user-acquisition campaigns for hundreds of its comparison websites.

Our marketing department includes dozens of talented campaign managers focused solely on running and optimizing these campaigns. They know every little detail about their campaigns and use their specialized know-how and long-proven methodology to optimize them.

Yet we realized that the best way to grow our business and our campaigns exponentially, while maintaining this high standard of optimization, was to automate. That way, each campaign manager could manage several campaigns simultaneously.

On top of that, automating PPC optimization has many additional benefits: a computer can take many more factors into account when optimizing campaigns, and can do it far more frequently than a human could.

So we decided to develop such an automation ourselves: one that would combine advanced machine learning methods with our vast experience and specialized know-how. Now all that was left was to test how automation affects our PPC campaigns compared to manual optimization.

The problem with the existing methods

Bidding is a major part of PPC work: deciding how much to bid on each keyword we advertise in order to reach maximum impact at minimum cost. Our automation aims to do exactly that: set the most cost-effective bid for each keyword.
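
To make “cost-effective” concrete, here is a toy sketch of what bid selection can look like. It is illustrative only, not NI’s actual algorithm: the click-volume curve, conversion rates, and conversion values are all made-up assumptions.

```python
# Toy bid optimizer: for each keyword, pick the bid that maximizes
# expected profit = expected_clicks(bid) * (conv_rate * conv_value - bid).
# All numbers below are illustrative assumptions, not NI's real model.

def expected_clicks(bid: float, max_clicks: float = 100.0, half_point: float = 1.0) -> float:
    """Assumed diminishing-returns curve: clicks rise with the bid and saturate."""
    return max_clicks * bid / (bid + half_point)

def best_bid(conv_rate: float, conv_value: float) -> float:
    """Grid-search the bid (in $0.10 steps) that maximizes expected profit."""
    candidates = [round(0.1 * i, 2) for i in range(1, 51)]  # $0.10 .. $5.00
    return max(candidates, key=lambda b: expected_clicks(b) * (conv_rate * conv_value - b))

# Hypothetical (conversion rate, conversion value) per keyword.
keywords = {"shoes": (0.05, 40.0), "boots": (0.04, 60.0),
            "sandals": (0.06, 30.0), "high heels": (0.03, 80.0)}
for kw, (rate, value) in keywords.items():
    print(f"{kw}: bid ${best_bid(rate, value):.2f}")
```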

Let’s say we want to test our automation on a four-keyword campaign: shoes, boots, sandals, and high heels. The two more traditional ways to run such a test are (1) splitting traffic or (2) splitting keywords.

With the first method, we randomly split users into a test group and a control group. This method generates trustworthy, accurate results. However, splitting traffic for a large number of PPC campaigns is slow and cumbersome, which prevents us from doing it at scale. It also affects active campaigns, disrupting our campaign managers’ work.
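
A common way to implement such a user split (a sketch of the general technique, not NI’s internal tooling) is to hash each user’s ID into a bucket, so that assignment is effectively random yet stable across sessions:

```python
import hashlib

def assign_group(user_id: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign a user to 'test' or 'control'.

    Hashing the ID (instead of calling random.random()) guarantees the
    same user always lands in the same group on every visit.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return "test" if bucket < test_fraction * 10_000 else "control"

print(assign_group("user-42"))  # prints the same group on every run
```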

The second method involves splitting the keywords into two groups. For instance, shoes and boots would be the test group, while sandals and high heels would serve as the control group. With this method, it is much easier to test at a large scale while causing minimal disruption to our campaigns.
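
The split itself is trivial to implement; a minimal sketch (the seed is arbitrary):

```python
import random

def split_keywords(keywords, seed=7):
    """Randomly split keywords into two halves: (group_a, group_b)."""
    shuffled = keywords[:]
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

group_a, group_b = split_keywords(["shoes", "boots", "sandals", "high heels"])
```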

However, we should take the results of the second method with a grain of salt. Because the control and test groups contain different keywords, there is background noise that interferes with the test results. For example, if we run the test during the summer, the results will naturally favor sandals, while a winter test would favor boots. In both cases, we have no way of knowing to what extent the automation is responsible for the results.

The A/B swap method

We needed a way to test PPC automation at a massive scale and get trustworthy results, while causing very little disruption to our ongoing campaigns.

Our solution was a simple tweak to the keyword-splitting method described above: instead of just splitting, we split and then swap the keyword groups.

Here’s how the ‘A/B Swap’ method works. At the beginning of the test, we split the keywords into two groups, just as in the second method. Let’s say we put shoes and boots in the test group and sandals and high heels in the control group.

For a day, we let the algorithm and the campaign managers make whatever changes they need to optimize their respective campaigns.

Then, we switch the groups so that shoes and boots become the control group and sandals and high heels become the test group. We swap the two groups every day until the end of the test.
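
The whole schedule is easy to express in code. A minimal sketch, assuming the roles simply alternate with the parity of the day:

```python
def groups_for_day(group_a, group_b, day: int):
    """Return (test_group, control_group) for a given day of the experiment.

    The two keyword groups trade roles daily: on even days group_a is
    managed by the algorithm (test), on odd days group_b is.
    """
    return (group_a, group_b) if day % 2 == 0 else (group_b, group_a)

group_a, group_b = ["shoes", "boots"], ["sandals", "high heels"]
for day in range(4):
    test, control = groups_for_day(group_a, group_b, day)
    print(f"day {day}: test={test} control={control}")
```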

Like the simple keyword-splitting method, A/B Swap lets us run the test at a large scale with minimal disruption to the manual work. Better yet, by swapping the groups each day, we filter out the background noise and get a result we can actually trust. Finally, eliminating the noise brings another benefit: we can complete experiments 50–75% faster than with the simple keyword-splitting method.
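
Why does swapping cancel the noise? Over any pair of consecutive days, each keyword group spends one day under the algorithm and one day under manual management, so group-specific effects (such as a summer preference for sandals) hit both sides of the comparison equally and drop out. A toy simulation with made-up numbers illustrates the idea:

```python
import random

random.seed(0)
DAYS, LIFT = 30, 5.0           # assumed true effect of automation: +5 conversions/day
BASE = {"a": 50.0, "b": 70.0}  # group 'b' is seasonally stronger (sandals in summer)

def conversions(group: str, treated: bool) -> float:
    """One day of conversions for a group, with noise and an optional lift."""
    return BASE[group] + (LIFT if treated else 0.0) + random.gauss(0, 2)

# Fixed split: group 'a' is always the test group, so the estimate
# absorbs the -20 seasonal gap between the groups.
fixed = sum(conversions("a", True) - conversions("b", False) for _ in range(DAYS)) / DAYS

# A/B Swap: the groups alternate roles daily, so the seasonal gap cancels.
swap = sum(
    conversions("a", True) - conversions("b", False) if d % 2 == 0
    else conversions("b", True) - conversions("a", False)
    for d in range(DAYS)
) / DAYS

print(f"fixed split estimate: {fixed:+.1f}  (true lift is +5.0)")
print(f"A/B swap estimate:    {swap:+.1f}  (true lift is +5.0)")
```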

Having a PPC automation algorithm is a big breakthrough for Natural Intelligence, because it allows us to grow our campaigns exponentially. But a great hypothesis (in this case, that our algorithm would outperform manual PPC management) only has value if it can be proved or disproved through testing. The A/B Swap testing platform proved to do just that.

Matar Bar Sheshet is a data product manager leading the PPC automation track at Natural Intelligence. Together with a team of data scientists and software engineers, they developed both the automation algorithms and the testing platform described in this blog post.
