Building a Diverse Model Ensemble for Fashion Session-Based Recommendation for the RecSys 2022 Challenge

Radek Osmulski
Published in NVIDIA Merlin
Sep 20, 2022

A team from NVIDIA took part in the recent ACM RecSys 2022 Challenge. The team consisted of several NVIDIA Kaggle Grandmasters and experienced RecSys Researchers and Engineers from the NVIDIA Merlin team. The team placed 3rd.

In this blog post, we provide an overview of the solution. We also highlight techniques that can improve the performance of real-life Recommender Systems.

Overview of the problem

The RecSys Challenge 2022, organized by Dressipi, focused on fashion session-based recommendations. The task was to predict the item a customer would buy, given the sequence of items they had clicked earlier in the session.

Source: https://www.recsyschallenge.com/2022/
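To make the setup concrete, here is a minimal sketch of what such session data might look like. The column names and values are illustrative assumptions, not the actual Dressipi schema.

```python
# Illustrative sketch of session-based data (column names are assumptions,
# not the actual Dressipi schema).
import pandas as pd

views = pd.DataFrame({
    "session_id": [1, 1, 1, 2, 2],
    "item_id":    [101, 205, 307, 412, 101],  # items clicked, in order
    "timestamp":  pd.to_datetime([
        "2021-05-01 10:00", "2021-05-01 10:02", "2021-05-01 10:05",
        "2021-05-02 18:30", "2021-05-02 18:31",
    ]),
})
purchases = pd.DataFrame({"session_id": [1, 2], "item_id": [307, 555]})

# Task: for each test session, rank candidate items so that the purchased
# item lands as high as possible in the recommendation list.
```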

Key elements of the solution

The solution combines three distinct stages.

The first stage consists of 17 models. The focus of this stage is to build a strong foundation for the ensemble. The team optimized both for individual model performance and model diversity.

The second stage consists of three models trained on the outputs of the base models and a subset of features.

The third and final stage is a weighted average of the second-stage models.
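As a rough illustration of that final blending step, here is a minimal sketch of a weighted average over second-stage scores. The model names and weights are made up for the example and are not the team's actual configuration.

```python
# Minimal sketch of the stage-3 weighted average (model names and weights
# are illustrative, not the team's actual configuration).
import numpy as np

def stage3_blend(stage2_scores, weights):
    """Weighted average of second-stage scores per (session, candidate item)."""
    total = sum(weights.values())
    return sum(w * stage2_scores[name] for name, w in weights.items()) / total

n_sessions, n_candidates = 1000, 100
stage2_scores = {name: np.random.rand(n_sessions, n_candidates)
                 for name in ["model_a", "model_b", "model_c"]}
final_scores = stage3_blend(stage2_scores,
                            {"model_a": 0.4, "model_b": 0.4, "model_c": 0.2})
recommendations = np.argsort(-final_scores, axis=1)  # items ranked per session
```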

The crucial detail here is the validation scheme. The team used the last month of training data as the holdout set. To prevent leakage, the second-stage models were trained with a cross-validation scheme on the first stage's holdout set.

The third stage used CV folds from stage 2.
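A leakage-free stacking setup along these lines could look like the sketch below; the `GradientBoostingClassifier` is simply a stand-in for whichever second-stage models were actually used, and `X_holdout` represents base-model scores plus features on the stage-1 holdout set.

```python
# Sketch of training a stage-2 model with out-of-fold predictions on the
# holdout set, so stage 3 never sees scores a model produced on its own
# training rows. The estimator is a stand-in, not the team's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

def fit_stage2_with_oof(X_holdout, y_holdout, n_splits=5):
    """X_holdout: base-model outputs + features on the stage-1 holdout set."""
    oof_scores = np.zeros(len(X_holdout))
    models = []
    for train_idx, valid_idx in KFold(n_splits, shuffle=True,
                                      random_state=0).split(X_holdout):
        model = GradientBoostingClassifier()
        model.fit(X_holdout[train_idx], y_holdout[train_idx])
        oof_scores[valid_idx] = model.predict_proba(X_holdout[valid_idx])[:, 1]
        models.append(model)
    # the out-of-fold scores are what the third stage can safely blend on
    return models, oof_scores
```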

Of note is the diversity of models in the first stage. Some of these models, like Transformers, need sophisticated data preprocessing and training techniques. Others, such as Conditional Popularity, are lightweight statistical models.
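For intuition, a "conditional popularity" style model can be as simple as counting, for each viewed item, which items tend to be purchased in sessions containing it. The sketch below is our reading of the idea, not the team's exact implementation.

```python
# Rough sketch of a conditional-popularity-style recommender: score candidates
# by how often they were purchased in sessions that also contained the viewed
# item. An assumption about the general idea, not the team's implementation.
from collections import defaultdict

def fit_conditional_popularity(sessions):
    """sessions: iterable of (viewed_items, purchased_item) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for viewed_items, purchased in sessions:
        for item in viewed_items:
            counts[item][purchased] += 1
    return counts

def recommend(counts, viewed_items, k=10):
    scores = defaultdict(float)
    for item in viewed_items:
        for candidate, c in counts.get(item, {}).items():
            scores[candidate] += c
    return sorted(scores, key=scores.get, reverse=True)[:k]

train = [([101, 205], 307), ([101, 412], 307), ([205, 412], 555)]
counts = fit_conditional_popularity(train)
print(recommend(counts, viewed_items=[101, 205]))  # -> [307, 555]
```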

In a business setting, training and serving that many models might be very expensive from an operational standpoint. Still, in any ensemble, the majority of the value comes from the second or third model that we add.

Adding just one or two lightweight models can lead to strengthened recommendations!

Tabular Data Augmentation

On one hand, obtaining more data is a sure way to improve our Machine Learning solution. On the other, collecting new examples is often very costly or impossible in the short run. We often have to make do with what we have already collected.

Tabular Data Augmentation is challenging because it is often not clear how to alter our data. We want to create more examples, but we want to keep them plausible. In other words, we want our synthetic examples to capture the experience of our users.

For this challenge, the NVIDIA team grew their dataset by a stunning 5–10x! The team observed that viewed items co-occur in sessions very often with each other and with the purchased items. This realization led to the creation of variations of existing sessions by truncating or shuffling viewed items, swapping the purchased item with a viewed item (which then becomes the target), and using other approaches, as illustrated in the figure below.

It is important to note, though, that only the training set was augmented; predictions were made on the unmodified test data.
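Here is a hedged sketch of those augmentation ideas in code; the team's exact recipe, probabilities, and bookkeeping may well differ.

```python
# Sketch of session augmentation: truncate or shuffle the viewed items, or
# promote a viewed item to the target. Illustrative only; the team's exact
# recipe may differ. Applied to training sessions only, never to the test set.
import random

def augment_session(viewed_items, purchased_item, rng=None):
    rng = rng or random.Random(0)
    augmented = []
    # 1) truncate: keep only a random prefix of the viewed items
    if len(viewed_items) > 2:
        cut = rng.randint(2, len(viewed_items) - 1)
        augmented.append((viewed_items[:cut], purchased_item))
    # 2) shuffle: reorder the viewed items, keep the same target
    shuffled = viewed_items[:]
    rng.shuffle(shuffled)
    augmented.append((shuffled, purchased_item))
    # 3) swap: a viewed item becomes the target, the purchase joins the views
    swap_idx = rng.randrange(len(viewed_items))
    new_views = (viewed_items[:swap_idx] + viewed_items[swap_idx + 1:]
                 + [purchased_item])
    augmented.append((new_views, viewed_items[swap_idx]))
    return augmented

for views, target in augment_session([101, 205, 307, 412], 555):
    print(views, "->", target)
```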

Tabular Data Augmentation is very dataset-specific. You can apply similar image augmentation techniques to different datasets, but with tabular data that is not the case. We have to be more creative.

The line of reasoning that the NVIDIA team followed can strengthen our toolbox! Growing our train set by 5–10x can go a very long way with modern Machine Learning techniques.

Summary

It does not make sense to adopt all the techniques used in a competition. But many can serve as an inspiration for building more performant machine learning systems.

Competitions can be great testing grounds and we would be remiss to not use them as a source of battle-tested information!

For additional details on the solution, please consult the paper authored by the participants, which you can find here.

Thank you very much for reading!

Team

The solution was a collaboration of the team that participated in the RecSys 2022 Challenge: Benedikt Schifferer, Chris Deotte, Gabriel de Souza P. Moreira, Gilberto Titericz, JiWei Liu, Kazuki Onodera, Ronay Ak, and Sara Rabhi.
