Improving a bidding platform with machine learning techniques

Akurey · Jul 17, 2019

Real-time bidding (RTB) is a methodology used today by advertising companies to speed up the way ads flow from advertisers to publishers, which are usually the websites where these ads are shown (think of the banners on the side of any website you visit). The process is typically automated: publishers auction off a specific ad slot with a defined set of features, such as website, size, location, and other similar metadata, and advertisers bid to win that slot, trying to maximize the number of conversions they get. A conversion, in this case, happens when a user clicks on the impression (the ad shown to them); when that happens, we say the bid was a converted bid. This whole process takes place while the user is loading a web page.

Image obtained from Internet Marketing Team.

Given the massive amount of data stored throughout the process, RTB is a perfect scenario for applying machine learning algorithms to optimize different metrics, depending on the advertiser's needs.

The project’s main objective was to improve the metrics used by traders, the people who manage a campaign by setting parameters that change the bids offered. These metrics include click-through rate (CTR), the ratio of users who click on an ad to the number of impressions served in the campaign, and cost per acquisition (CPA), the average cost of a conversion, among others.
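To make these metrics concrete, here is a minimal sketch of how CTR and CPA can be computed from campaign totals; the function and variable names are illustrative, not taken from the client's system.

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: share of impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

def cost_per_acquisition(total_spend: float, conversions: int) -> float:
    """CPA: average spend per conversion."""
    return total_spend / conversions if conversions else float("inf")

# Example: 165 clicks out of 100,000 impressions -> CTR of 0.165%
print(f"CTR: {click_through_rate(165, 100_000):.3%}")
print(f"CPA: ${cost_per_acquisition(5_000.0, 250):.2f}")
```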

At first, we had various data columns such as the datetime of the bid, the geolocation where the ad was shown, the operating system of the device, the browser used by the end user, the format of the ad, and so on. Using the geolocation of the bid, we could associate it with weather conditions (a set of variables such as pressure, humidity levels, etc.). In addition, we pulled logs from another data source where the client stored which bids had converted. Once we had the data, the first stage of the whole process was preprocessing, and this step took quite a long time given the volume involved: just a few months of data exceeded 600 TB. Before trying any models, we had to choose which columns influenced whether a bid converted, handle the categorical columns separately, and drop those that added too much information (more than 30 categories) as well as those that added too little (fewer than three categories). A rough sketch of this filtering step follows below.
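As an illustration of that column-selection step, the pandas sketch below drops categorical columns whose cardinality falls outside the 3–30 range mentioned above; the file path and the idea of a local CSV are hypothetical (the real data lived in the client's systems).

```python
import pandas as pd

# Hypothetical extract of bid logs; the real data was far too large for a local file.
df = pd.read_csv("bid_logs_sample.csv")

categorical_cols = df.select_dtypes(include=["object", "category"]).columns

for col in categorical_cols:
    n_categories = df[col].nunique()
    # Drop columns that add too much information (>30 categories)
    # or too little (<3 categories), as described above.
    if n_categories > 30 or n_categories < 3:
        df = df.drop(columns=[col])

# One-hot encode the categorical columns that survived the filter.
df = pd.get_dummies(df, columns=[c for c in categorical_cols if c in df.columns])
```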

We decided to use the AWS SageMaker platform, which was quite new at the beginning of the project. Having all our computing power in the cloud, fully accessible to every member of the team, and sharing our findings and implementation scripts through synchronized Jupyter notebooks made it the best call. The platform offered straightforward use of EC2 instances specialized for machine learning via SageMaker's libraries and API, connecting directly with models implemented in popular, community-backed frameworks such as TensorFlow, scikit-learn, and MXNet, or letting us implement our own model without worrying about versioning or deployment issues. This solved the problem of deploying the model into a production environment: we used AWS resources and simply wired everything to the endpoints that connect to the model's I/O.
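The outline below is a minimal sketch of how a training job and endpoint can be wired together with the SageMaker Python SDK; the script name, S3 paths, IAM role, and instance types are placeholders, and the exact estimator arguments depend on the SDK version.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = SKLearn(
    entry_point="train_ctr_model.py",   # hypothetical training script
    role=role,
    instance_type="ml.m5.xlarge",
    framework_version="1.0-1",
    sagemaker_session=session,
)

# Launch training against preprocessed data stored in S3 (placeholder bucket).
estimator.fit({"train": "s3://example-bucket/rtb/train/"})

# Deploy the trained model behind a real-time endpoint for bid scoring.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```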

Initially, we came across diverse problems: the data was too large to load into an RDS database and had to be queried with AWS Athena, the sample data sets were unbalanced (a problem solved once we obtained the full training data from the production database), among others. This being my first involvement in an industry-level machine learning project, watching and taking part in the solution of these kinds of initial problems was a very enriching experience and gave me knowledge I can apply to similar problems in future projects.
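For reference, this is roughly how a dataset too large for RDS can be queried through Athena with boto3; the database, table, column names, and output bucket are made up for illustration.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database and table; the real schema belonged to the client.
response = athena.start_query_execution(
    QueryString="""
        SELECT campaign_id, browser, os, geo, converted
        FROM bid_logs
        WHERE bid_date BETWEEN date '2018-01-01' AND date '2018-03-31'
    """,
    QueryExecutionContext={"Database": "rtb_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```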

Once the data was ready, we started testing different models. The client uses values called bid factors, coefficients associated with features such as city or datetime (e.g., located in Texas, user connecting at midnight) that traders modify based on their professional judgment. These bid factors feed a roughly linear model that lets our client's DSP decide where to place the bids of a given campaign. Knowing that this was a binary classification problem (will a bid convert or not?), a generalized linear model such as logistic regression was the natural choice; in this case, the goal was to maximize the CTR of a given campaign. Explaining how logistic regression works is beyond the scope of this article, but if you're interested, you can check out other online resources.
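The article does not include the actual model code, so here is a minimal sketch of a logistic-regression-style binary classifier in TensorFlow/Keras, trained on one-hot-encoded bid features to predict whether a bid converts; the array shapes and dummy data are purely illustrative.

```python
import numpy as np
import tensorflow as tf

# X: one-hot-encoded bid features, y: 1 if the bid converted, 0 otherwise (dummy data).
num_features = 120
X = np.random.rand(10_000, num_features).astype("float32")
y = np.random.randint(0, 2, size=(10_000, 1)).astype("float32")

# A single dense unit with a sigmoid activation is exactly logistic regression.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(num_features,)),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=256, validation_split=0.2)

# Predicted conversion probability for a new bid request.
p_convert = model.predict(X[:1])
```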

We also tested models that had shown good results in other studies, such as factorization machines and field-aware factorization machines, but after comparing them with the logistic regression model, they did not achieve better performance. In fact, the difference in accuracy against our logistic regression classifier was about 8%.

Once the model was trained, we had to obtain the bid adjustments needed to change the bid factors in the production database. We took the weights of our TensorFlow model and, by applying a mathematical transformation, produced the new bid adjustments used to update the bid factors of our campaign.
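The article does not spell out that transformation, but one plausible reading is that each logistic-regression weight is exponentiated into a multiplicative adjustment (an odds-ratio-style factor) for the feature it belongs to. The sketch below follows that assumption, reusing the `model` from the training sketch above and hypothetical feature names.

```python
import numpy as np

# Weights of the trained logistic-regression layer (one per one-hot feature).
weights, bias = model.layers[0].get_weights()

feature_names = [f"feature_{i}" for i in range(len(weights))]  # hypothetical names

# Assumption: exponentiating a weight yields a multiplicative bid adjustment,
# since exp(w) is the odds multiplier associated with that feature being active.
bid_adjustments = {
    name: float(np.exp(w)) for name, w in zip(feature_names, weights.ravel())
}

# These adjustments would then be written back to the bid factors in production.
```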

Testing the tool

The tool was tested in a campaign run by a sports-coaching company focused on triathlon training. Two separate campaigns were created: one using the standard methods our client's DSP uses to decide where to bid, and the other using our classifier. Both were left to run freely through the first quarter of the year (Q1 2018), and at the end, the campaign managed by our tool displayed better results.

The campaign we used as our baseline obtained a final CTR of 0.081%, compared to a CTR of 0.165% for the campaign optimized by our tool (0.165 / 0.081 ≈ 2.04), an improvement of more than 100%. The following chart shows the metrics obtained:

The next stage of the project was to turn the tool into a feature available to traders, giving them an option to optimize the performance of their campaigns. Personally, I'm glad the project obtained successful results and happy about the value added to the client's process.

Artificial intelligence is a tool that should be applied in many other fields to automate processes previously done by people, allowing them to focus on more important, non-rudimentary tasks and making industry advance faster. Technology's main purpose is to improve people's quality of life, and automation is one path to achieving this. This new era will let us explore new ways of creating knowledge and bring innovation to the world, augmenting our capacities every day with the help of technology.

Akurey
We are talented engineers, artists, and leaders who create digital solutions and deliver technology projects with passion and quality. linktr.ee/akurey