# I used an Algorithm to set floor prices; here is what happened

Oct 24, 2020

Cason Wight

# Background

As an intern on the Data Science team at Xandr, I have had quite the learning experience this summer. My team builds products that help publishers participate in online advertising auctions.

The central idea of my project is reducing bid shading. In a first-price auction, the bidder with the highest bid wins and pays that bid. When bidders start winning in this style of auction, they reduce (shade) their bids so that they keep winning auctions without paying their full valuations.

The only way a price floor can affect a first-price auction is by exceeding the otherwise-winning bid, cancelling that auction. A floor price could therefore reduce bid shading by cancelling enough auctions to force bidders to raise their bids if they want to keep winning. Instead of shading less, however, bidders could move to other buying platforms or not react at all, in which case the floor simply cancels some auctions and loses revenue.

My role in this problem consisted of three main steps:

1. Construct an initial algorithm for setting floor prices

2. Conduct an experiment on live auctions with this algorithm

3. Simulate a bidding environment to validate future algorithms

# Initial Algorithm

For an informative experiment, floor prices must be set at “reasonable” levels. These price floors must be noticeable to the bidders, but not too extreme.

According to economic theory, the revenue-maximizing floor price depends on the distribution of bids. Even though this theoretical best floor price relies on many assumptions, we used it as the floor-price algorithm to test bidder response. I built an algorithm that approximates the bidding distribution in order to calculate this value.

The algorithm reads in each hour's bids, estimates an optimal floor price, and saves that price to be used during the same hour on the following day. I was given a few pre-approved publishers for the experiment, so I first checked how this algorithm might affect their auctions. The projected revenue impact was small enough that we received permission to conduct a live experiment. The chart below shows that, for these placements, the optimal floor price is typically close to the median winning bid price, so using this floor-price algorithm would initially cancel about half of the auctions. That may seem extreme, but the floor cancels the lowest-revenue auctions, not auctions at random, so the initial revenue loss should only be around 10–15%.
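The hourly estimation step can be sketched roughly as follows. This is a minimal illustration, not the production algorithm: it treats the floor like a posted price and grid-searches for the candidate floor that maximizes floor × P(winning bid ≥ floor) over the empirical distribution of winning bids. The function name and parameters are my own for this sketch.

```python
import numpy as np

def optimal_floor(winning_bids, grid_size=200):
    """Grid-search the floor r maximizing r * P(winning bid >= r),
    a posted-price simplification of the optimal-reserve problem."""
    bids = np.asarray(winning_bids, dtype=float)
    candidates = np.linspace(bids.min(), bids.max(), grid_size)
    # Survival function: fraction of winning bids at or above each candidate floor
    survival = (bids[None, :] >= candidates[:, None]).mean(axis=1)
    expected_revenue = candidates * survival
    return candidates[np.argmax(expected_revenue)]
```

For bids spread roughly uniformly, this picks a floor near the median winning bid, matching the pattern described above: expected revenue r(1 − F(r)) peaks around the middle of the distribution.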

# Live Experiment

This algorithm was used to set floor prices on two placements. The experiment also looked at four similar placements (all six from the same seller), with no changes to floor price. The six tags were all on publishers in different European (mostly Nordic) countries. Denmark and Finland were given the floor-price change “treatment”. Sweden, Norway, Belgium, and the Netherlands were the “control” placements. The experiment ran for one week.

Once the experiment began, the floor prices immediately took effect. The bidding distributions led the floor price algorithm to set much higher floor prices for Denmark than for Finland (cancelling roughly 80% of auctions as opposed to 40%). Over the course of the experiment, the floor prices gradually increased for both experimental placements.

An initial statistical analysis of the experiment results reveals that average revenue per impression for the experimental tags dropped by about 5%, less than the 10–15% we would expect if bidders did not respond at all. There are many reasons bidders may not have raised their bids.
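One simple way to frame this comparison is a difference-in-differences on revenue per impression: the treatment group's relative change minus the control group's relative change. The numbers below are purely illustrative placeholders, not the experiment's actual data.

```python
def pct_change(before, after):
    """Relative change from a before-period average to an after-period average."""
    return (after - before) / before

def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """Treatment-group relative change minus control-group relative change."""
    return pct_change(treat_before, treat_after) - pct_change(ctrl_before, ctrl_after)

# Hypothetical per-impression revenue averages (illustrative only):
# treatment fell from 1.00 to 0.95 while control held flat at 1.00.
effect = diff_in_diff(1.00, 0.95, 1.00, 1.00)  # ≈ -0.05, i.e. a 5% drop
```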

One possible reason is substitution to other publishers. Substitution occurs when a buyer replaces one good with another of similar value, and it is especially common when the first good's price increases, as in our experiment. During the experiment, the fill rate for Sweden (a control placement) spiked, which could indicate a substitution effect; however, the average winning bid price did not increase for any of the control placements during the experiment (as shown by the plot below).

# Bidding Simulation

The next step of this project is a more intelligent floor-price algorithm. One candidate is reinforcement learning, in which an agent interacts with an environment and learns which actions maximize a reward. A reinforcement learning model could explore floor prices and learn which ones maximize revenue after experiencing bidders' reactions.
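As a toy stand-in for that idea, an epsilon-greedy bandit over a discrete set of candidate floors captures the explore/exploit loop: mostly pick the floor with the best observed average revenue, occasionally try another one. This is my own simplified sketch, not the algorithm used in the project; the revenue model (the floor acts as the price paid when it clears) is an assumption for illustration.

```python
import random

class EpsilonGreedyFloor:
    """Epsilon-greedy bandit over a discrete set of candidate floor prices."""

    def __init__(self, floors, epsilon=0.1):
        self.floors = floors
        self.epsilon = epsilon
        self.counts = [0] * len(floors)      # pulls per candidate floor
        self.values = [0.0] * len(floors)    # running mean revenue per floor

    def choose(self):
        # Explore a random floor with probability epsilon, else exploit the best.
        if random.random() < self.epsilon:
            return random.randrange(len(self.floors))
        return max(range(len(self.floors)), key=lambda i: self.values[i])

    def update(self, i, revenue):
        # Incremental update of the running mean revenue for floor i.
        self.counts[i] += 1
        self.values[i] += (revenue - self.values[i]) / self.counts[i]
```

In a loop over simulated auctions, the agent quickly settles on the floor with the best revenue trade-off while still sampling the alternatives occasionally.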

Before a reinforcement learning algorithm can be implemented in live auctions, a proof of concept is necessary. In this case, a simulation that replicates bidder behavior works as a reasonable environment. In the simulation, each bidder has a set valuation for each auction and shades its bid. The shading behavior shifts depending on how much of its budget goal the bidder manages to spend in the auctions.
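The core loop might look something like the sketch below: each bidder bids a shaded fraction of its valuation, the highest bid wins if it clears the floor, and every bidder nudges its shading factor up or down depending on whether it is behind or ahead of its per-round spend goal. All valuations, goals, and step sizes here are made-up illustrative numbers, not the project's actual simulation parameters.

```python
def simulate(n_rounds=1000, floor=0.0):
    """Toy first-price auction simulation: bidders shade fixed valuations
    and adjust shading based on progress toward a spend goal."""
    bidders = [
        {"valuation": 1.0, "shade": 0.5, "spend": 0.0, "goal": 0.3},
        {"valuation": 0.8, "shade": 0.5, "spend": 0.0, "goal": 0.2},
    ]
    revenue = 0.0
    for t in range(1, n_rounds + 1):
        # First-price auction: highest shaded bid wins and pays its own bid.
        price, winner = max(
            ((b["valuation"] * b["shade"], b) for b in bidders),
            key=lambda x: x[0],
        )
        if price >= floor:  # the floor cancels auctions that bid below it
            winner["spend"] += price
            revenue += price
        for b in bidders:
            # Pacing: behind the spend goal -> shade less (bid higher); ahead -> shade more.
            pace = b["spend"] / t
            if pace < b["goal"]:
                b["shade"] = min(1.0, b["shade"] + 0.01)
            else:
                b["shade"] = max(0.1, b["shade"] - 0.01)
    return revenue / n_rounds
```

Even this toy version shows the basic trade-off: with no floor the bidders pace toward their spend goals, while a floor above every possible bid cancels all auctions and earns nothing.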

This simulation should converge to an equilibrium bidding behavior in a simple setting with no changes in floor price. Once that convergence works correctly, a reinforcement learning method for setting floor prices can be introduced. Together, the simulation and the reinforcement learning floors will give insight into whether a changing floor price can reduce bid shading and increase seller revenue. Based on the results of this simulation, a live experiment with reinforcement learning for price floors can be conducted. That experiment may eventually increase revenue for sellers, but it is still possible that floors only decrease revenue for publishers. Floor prices in first-price auctions remain a difficult problem.

Cason is a master’s student, studying Statistics at BYU. He enjoys visualizing data, playing tennis, and white-water rafting.
