Rossmann Store Sales, Winner’s Interview: 2nd place, Nima Shahbazi

Kaggle Team · Published in Kaggle Blog
6 min read · Feb 3, 2016

Rossmann operates over 3,000 drug stores in 7 European countries. In their first Kaggle competition, Rossmann Store Sales, this drug store giant challenged Kagglers to forecast 6 weeks of daily sales for 1,115 stores located across Germany. The competition attracted 3,738 data scientists, making it our second most popular competition by participants ever.

Nima Shahbazi took second place in the competition, using his background in data mining to gain an edge. By fully exploring and understanding the dataset, Nima was able to engineer features that many participants overlooked. In fact, one valuable feature was developed from selected data that many Kagglers had removed from their training set entirely.

The Basics

What was your background prior to entering this challenge?

I am a PhD candidate at the Lassonde School of Engineering at York University under the supervision of Prof. Jarek Gryz and co-supervision of Prof. Aijun An. My main research area is data mining and machine learning. I love problem solving and challenging myself to find the best model for regression and classification/clustering problems. Before entering Kaggle competitions, I used to work on data analysis problems; the most important one was the time series prediction in FOREX market for which I have designed many algorithmic trading strategies.

How did you get started competing on Kaggle?

About 9 months ago, one of my friends told me that the IEEE International Conference on Data Mining (ICDM) had launched an interesting data mining contest. In that competition, participants were tasked with identifying a set of user connections across different devices without using common user handle information such as name, email, phone number, etc. Moreover, participants were asked to figure out the likelihood that a set of different IDs from different domains belong to the same user, and at what performance level. I got very excited and started working on that problem. In the end, I ranked 7th in that competition and learned lots of new approaches for machine learning and data mining problems. I realized that in order to be successful in this field you should challenge yourself with real-world problems. Although theoretical knowledge is a must, without experience on real-world problems you will not be able to succeed.

What made you decide to enter this competition?

After the ICDM contest I realized that some Kaggle competitions are referenced in NIPS papers. That made me more motivated to join other competitions. I found the Rossmann challenge very interesting, since a sales forecast is useful for any company. I really wanted to learn the latest data mining approaches for solving these problems, as in the Kaggle community some participants share their approaches at the end of a competition. I entered to be involved in the competition and to give myself a chance to win. In the end, I was able to design a model that was consistent on both the public and private leaderboards.

Let’s Get Technical

What was your most important insight into the data? What preprocessing and supervised learning methods did you use?

I spent a great amount of time digging through the data. The competition evaluation page mentioned that any "day" and "store" with 0 sales is ignored when scoring the test or train set. But the train data does contain many store-days with sales equal to zero (why bother to include them!). At first it might seem that we can simply remove the rows with sales equal to zero (which are the stores that are closed on that specific day); indeed, all the scripts shared in the Kaggle community removed the zero-sales rows before building a model. I thought there must be something hidden in these zero-sales days, so I did not remove them and started extracting knowledge from them. I was surprised to find a relation between consecutive closures (zero sales) and unexpected sales just before or after the store reopens. For example, see Figure 1 for store number 1039. The zero-sales (store closed) days are highlighted in red, and the corresponding sales before and after those days are shown with green bars.

If I had removed the zero-sales rows, my model would not have been able to learn that specific pattern of unusual sales. So I went through the data, found those zero-sales days, and created a dummy variable called MyRowHoliday to capture the information around these unusual sales. This dummy variable assigns positive integers to the days before and after a consecutive closure, and "-1" to other days, as shown in Figure 2.
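A rough Python sketch of this kind of feature follows. The exact encoding and the size of the window around a closure are illustrative assumptions here, not the competition code:

```python
def row_holiday_feature(sales, window=3):
    """Sketch of a MyRowHoliday-style feature: mark the days within
    `window` days before or after a stretch of zero-sales (closed) days
    with a positive integer (distance to the closure), and -1 elsewhere.
    The window size and the value on closed days are assumptions."""
    closed = [s == 0 for s in sales]
    n = len(sales)
    feat = [-1] * n
    for i in range(n):
        if closed[i]:
            continue  # closed days themselves keep -1 in this sketch
        # distance to the nearest closed day, looking back and ahead
        dists = [abs(i - j) for j in range(n) if closed[j]]
        if dists and min(dists) <= window:
            feat[i] = min(dists)
    return feat

# Toy example: the store is closed on days 3-4, so the nearby
# open days get flagged with their distance to the closure.
print(row_holiday_feature([10, 12, 11, 0, 0, 30, 25, 12], window=2))
# -> [-1, 2, 1, -1, -1, 1, 2, -1]
```

A tree-based learner like xgboost can then pick up the unusual pre- and post-closure sales spikes from this distance encoding.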

I created 5 more dummy features (refurbishment, MystateHoliday, MySchoolHoliday, MyPromo and Promo2Active) that map these categorical variables to a larger range of numbers. For example, in the original data the days that have promotions are marked with 1, otherwise 0. I found that this information is not sufficient for the learners, because the beginning and the end of a promotion each have their own effect on sales. Like other teams, I used extreme gradient boosting (xgboost) as the learning method.
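One simple way to give a learner the beginning and end of a promotion, rather than just a flat 0/1 flag, is to replace the flag with a counter that grows through each promo run. This is an illustrative sketch of the idea, not the exact MyPromo encoding:

```python
def days_into_run(flags):
    """Replace a 0/1 promo flag with a counter that grows through each
    promo run (1 on the first promo day, 2 on the second, ...), so a
    model can distinguish the start of a promotion from its end.
    Returns 0 on non-promo days. Illustrative encoding only."""
    out, run = [], 0
    for f in flags:
        run = run + 1 if f == 1 else 0
        out.append(run)
    return out

# A 3-day promo followed by a 2-day promo:
print(days_into_run([0, 1, 1, 1, 0, 1, 1]))
# -> [0, 1, 2, 3, 0, 1, 2]
```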

Were you surprised by any of your findings?

Yes, I was always surprised when one of my findings showed better performance on the leaderboard. The most important one was when I added four more features for time series analysis: simple moving averages (MA_Fast, MA_Slow, MA_Customer_Fast and MA_Customer_Slow) of sales and customers over different time windows. The moving averages were split by important features like store number, day of week and promotion. The model I built on top of the moving averages blended well with my previous model and moved my rank to 3rd in the last week of the competition.
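In pandas, such grouped moving averages can be sketched as below. The column names, window lengths, and the one-day shift (to keep the current day's sales out of its own feature) are my assumptions for illustration:

```python
import pandas as pd

def add_moving_averages(df, fast=7, slow=28):
    """Sketch of MA_Fast / MA_Slow-style features: rolling means of past
    sales computed separately per (Store, DayOfWeek, Promo) group, shifted
    by one observation so the current target never leaks into its own
    feature. Window lengths are illustrative assumptions."""
    g = df.sort_values("Date").groupby(["Store", "DayOfWeek", "Promo"])["Sales"]
    df["MA_Fast"] = g.transform(lambda s: s.shift(1).rolling(fast, min_periods=1).mean())
    df["MA_Slow"] = g.transform(lambda s: s.shift(1).rolling(slow, min_periods=1).mean())
    return df
```

The same pattern applies to the customer-based averages, with the Customers column in place of Sales.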

Which tools did you use?

For preprocessing and exploratory data analysis I used R; I usually write the code in both R and Python.

How did you spend your time on this competition?

I spent more than 70% of my time on feature engineering, and the remaining 30% on feature selection, model ensembling, and model tuning.

What was the run time for both training and prediction of your winning solution?

I did not use a super-fast machine. For this competition I used a machine with an 8-core CPU and 16GB of RAM. I ran the models in parallel on my laptop and my own desktop computer. When I woke up I started coding and digging into the data, and while I was sleeping I let the computers run my algorithm for model tuning. The winning solution consists of 15 models, which took more than 25 hours to train and predict.

Words of Wisdom

What have you taken away from this competition?

First of all, the money. To be honest, I had been thinking about the prize since I joined the competition. And now I am confident in my ability to win a prize in a worldwide competition with more than 3,300 teams. Plus, I learn a lot anytime I go through the Kaggle forums or scripts.

Do you have any advice for those just getting started in data science?

First of all, make sure you understand the math behind regression and classification and the way a model learns. The most important thing to learn is how learning methods use regularization to avoid overfitting. Second, fully understand the principles of cross validation, and which type of cross validation fits your problem. Finally, do not spend lots of time tuning the model. Instead, spend your time extracting features and understanding the data. The more you play with the data, the more interesting insights you will find.

Just for Fun

If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?

I recently saw a competition on a specific cancer with the goal of helping to prevent it by identifying at-risk populations. I would really like to see more cancer-related data mining competitions, and specifically any research related to cholangiocarcinoma. The rates for this kind of cancer have been rising worldwide over the past several decades [1], and besides that, one of my relatives is suffering from it.

What is your dream job?

A dream job for me is a job in which I can work for myself.

Bio

Nima Shahbazi

is a second-year PhD student in the Data Mining and Database Group at York University. He previously worked in big data analytics, specifically on the Forex market. His current research interests include mining data streams, big data analytics and deep learning.

[1] Patel T. "Worldwide trends in mortality from biliary tract malignancies". BMC Cancer 2:10.

Originally published at blog.kaggle.com on February 3, 2016.
