Using AI and Machine Learning to Overcome Position Bias within Adobe Stock Search
Co-written with Judy Massuda
It can be both a key benefit and a substantial challenge in digital retail: the sheer volume of products that can be made available to a potential customer at any given time. In a world of seemingly never-ending choice, search relevance and ranking play an important part in highlighting the diversity of products available and, ultimately, getting customers to select something that suits their needs. Customers rarely scroll deep into the results or go beyond the first page to find what they are looking for. This is even more important for us at Adobe Stock, our online content marketplace of over 160 million assets, as our unique business requires us to think about relevance and ranking in a slightly different way than the typical e-commerce business.
To begin with, our business model involves offering subscriptions. These subscribing customers return often to execute the same search queries with different end goals in mind, so the content displayed at the top of the results should not always stay the same. In addition, we sell digital rather than physical goods, and customers rarely use downloaded content as-is. Instead, many are hoping to find inspiration in pieces of content that they can adapt for their final project needs. Finally, in the world of stock content, unlike in traditional retail, hot items never sell out. Uploaded and approved content is usually there for the long haul, which makes it even more important to ensure diverse results are displayed to customers.
While investigating the historical top results per query on Adobe Stock, we realized they had little variance. We saw that the same bestselling images always stayed on top. This lack of dynamism can hurt the experience for both sides of our marketplace. Our end-use customers need different, high quality, trending images over time. On the contributors’ side, if their newly uploaded images are not able to surface in the top results, it becomes harder for them to monetize their work.
For both of our marketplace customers, it was therefore important to improve the diversity of results generated by the search ranking algorithm.
Position bias analysis
We started with an analysis on engagement-by-position to understand why we weren’t seeing much variety in our historical results. We used download-through-rate (DTR) as the comparison metric, which is number of downloads normalized by the number of impressions at each position. On the Adobe Stock website, we display 100 images per page, and the engagement at each position amongst the first few pages is shown in the following graph.
As you can see from the graph, the DTR decreases exponentially by position. In fact, images in the first position received more than ten times as many downloads as images in the thirtieth position on the first page, and about half of all downloads happened in the top third of the first page. Note that the periodic spikes in the graph are caused in part by pagination.
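The DTR computation described above is straightforward to sketch. The log format below (position, downloaded-or-not pairs) is a hypothetical simplification for illustration, not our actual schema:

```python
from collections import defaultdict

def dtr_by_position(events):
    """Compute download-through-rate (DTR) per result position.

    events: iterable of (position, downloaded) pairs, where `downloaded`
    is True if that impression led to a download. Hypothetical log
    format for illustration only.
    """
    impressions = defaultdict(int)
    downloads = defaultdict(int)
    for position, downloaded in events:
        impressions[position] += 1
        if downloaded:
            downloads[position] += 1
    # DTR = downloads normalized by impressions at each position.
    return {pos: downloads[pos] / impressions[pos] for pos in impressions}

# Toy log: position 1 converts at 2/3, position 30 at 1/3.
log = [(1, True), (1, False), (1, True),
       (30, False), (30, False), (30, True)]
print(dtr_by_position(log))
```

In production this aggregation would run over the full impression/download logs rather than an in-memory list, but the normalization is the same.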
This exponential decay can be explained by one of two reasons:
1. Position bias, meaning that customers are more likely to download images in the top positions than images in the lower positions, regardless of the image quality and relevance.
2. The images in the top positions are superior to the images at lower positions in terms of relevance and quality.
We believed that this exponential decay was mainly caused by position bias, as we found that the relevance and quality didn’t vary much in the top results. To confirm this hypothesis, we implemented a version of the Standard EM algorithm described in the paper “Position Bias Estimation for Unbiased Learning to Rank in Personal Search” from Google. Given a query, this machine learning algorithm determines the change in engagement when showing the same image at different positions. We ran it on purchase and impression log data over a one year period. The propensity by position learned by the EM algorithm from customer purchase behavior is shown in the following graph. As we can see, if we show the same image at lower positions for the same query, the engagement drops significantly.
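The Standard EM algorithm from the Google paper models a click (in our case, a purchase) as examination times relevance: P(click) = P(examined at position k) × P(relevant). A minimal sketch of that iteration follows; the log format and hyperparameters are illustrative assumptions, and the paper should be consulted for the full treatment (including regression-based variants):

```python
from collections import defaultdict

def estimate_position_bias(logs, positions, n_iters=50):
    """Standard EM for the examination model P(click) = theta_k * gamma_qd.

    logs: list of (query, item, position, clicked) tuples (hypothetical
    log format). Returns theta, the examination propensity per position.
    """
    theta = {k: 0.5 for k in positions}   # examination propensity per position
    gamma = defaultdict(lambda: 0.5)      # relevance per (query, item) pair

    for _ in range(n_iters):
        theta_num, theta_den = defaultdict(float), defaultdict(float)
        gamma_num, gamma_den = defaultdict(float), defaultdict(float)
        for q, d, k, c in logs:
            t, g = theta[k], gamma[(q, d)]
            if c:
                # A click implies the item was examined and relevant.
                e, r = 1.0, 1.0
            else:
                # E-step: posterior of examination / relevance given no click.
                denom = max(1.0 - t * g, 1e-12)
                e = t * (1.0 - g) / denom
                r = g * (1.0 - t) / denom
            theta_num[k] += e; theta_den[k] += 1.0
            gamma_num[(q, d)] += r; gamma_den[(q, d)] += 1.0
        # M-step: re-estimate parameters from the posteriors.
        theta = {k: theta_num[k] / theta_den[k] for k in theta_den}
        for key in gamma_den:
            gamma[key] = gamma_num[key] / gamma_den[key]
    return theta
```

Run on synthetic logs where the same item converts at 80% in position 1 but only 40% in position 2, the learned theta separates the positional effect from the item's (shared) relevance, which is exactly the decomposition we needed.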
As shown in the graph, we found a huge position bias on the Stock website. Our customers tend to download images at the top of the results with limited scrolling, similar to the position bias seen in web search, where users are more likely to click on top results. However, we weren't expecting this exponential position bias, because people can process images much faster than web page descriptions. In addition, as stated in this recent research paper, in e-commerce environments people usually scroll to and click on items in much lower positions on the page than has been seen for web search. Other similarities we found with web search were that our customers are more likely to download the 100th (last) image than the 95th image on the first page, and if they go to the second page, they are more likely to purchase top results on the second page than bottom results on the first page.
Therefore, we decided to try adapting ranking methods used for web search for our use case. Once we calculated the position bias, we removed it from the training data and trained an unbiased ranking model. Similar to the approach described in the paper “Unbiased Learning-to-Rank with Biased Feedback”, we used a pairwise learning-to-rank method and incorporated Inverse Propensity Weighting w in the pairwise (p stands for positive, n for negative) margin loss:
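A plausible form of this loss, following the IPW formulation in Joachims et al. (an assumption on our part; the exact expression shown in the original figure may differ in detail):

```latex
L = \sum_{(p,\,n)} \frac{1}{w_p} \, \max\Bigl(0,\; m - \bigl(f(x_p) - f(x_n)\bigr)\Bigr)
```

where $f$ is the ranking model's scoring function, $m$ is the margin, and $w_p$ is the examination propensity at the position where the positive (downloaded) item $p$ was shown. Dividing by $w_p$ up-weights downloads that happened despite appearing at a low-propensity position, counteracting the bias in the logged feedback.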
We also added a new ranking feature called the unbiased DTR in the ranker. The idea was to normalize the number of purchases not only by number of impressions, but also by the position of each impression. For example, if an image is shown at first position, we would count one impression, while if it is shown at the thirtieth position, we only count it as one fifth of an impression because the propensity to purchase at the first position is five times larger than the propensity to purchase at the thirtieth position.
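The unbiased DTR feature can be sketched as follows, with the propensity table and record format as illustrative assumptions:

```python
def unbiased_dtr(records, propensity):
    """Unbiased download-through-rate for one image.

    Each impression is weighted by the examination propensity of its
    position (normalized so position 1 has propensity 1.0), so an
    impression at a rarely-examined position counts for less.

    records: iterable of (position, downloaded) pairs (hypothetical
    log format). propensity: dict mapping position -> propensity.
    """
    weighted_impressions = 0.0
    downloads = 0
    for position, downloaded in records:
        weighted_impressions += propensity[position]
        downloads += int(downloaded)
    return downloads / weighted_impressions if weighted_impressions else 0.0

# Per the example in the text: propensity at position 30 is one fifth
# of position 1, so an impression there counts as 0.2 of an impression.
prop = {1: 1.0, 30: 0.2}
```

With this weighting, an image that earns a download from position 30 scores far higher than one that earns the same download from position 1, since the former converted despite rarely being examined.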
This ranker, developed using machine learning methods, helped us increase the diversity of our results compared to the previous ranker in production: on the first page, half of the results from the new ranker differ from the old one. Moreover, the site feels a lot fresher and more dynamic. One interesting update we noticed was the reflection of one of our predicted 2019 visual trends, Creative Democracy, in the updated search results. With more diversity in the results, we see more bright pops of color and variation in assets than we saw before (see below):
With our analysis, we hope to move the needle in terms of improving the diversity and trendiness of the images you'll see when using Adobe Stock. We know there's increased pressure out there to quickly find and use original, effective images in e-commerce campaigns, and for Adobe Stock's incredible contributors, your work deserves to be seen and purchased by those who are looking for it. Stay tuned for more updates on our work.
Xuanhui Wang, Nadav Golbandi, Michael Bendersky, Donald Metzler, Marc Najork. 2018. Position Bias Estimation for Unbiased Learning to Rank in Personal Search. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46485.pdf
Thorsten Joachims, Adith Swaminathan, Tobias Schnabel. 2018. Unbiased Learning-to-Rank with Biased Feedback. https://arxiv.org/abs/1608.04468