Yuanchen He on finishing third in the Melbourne University competition

Kaggle Team
Published in Kaggle Blog
Mar 1, 2011

I am Yuanchen He, a senior engineer at McAfee Labs. I have been working on large-scale data analysis and classification modeling for network security problems.

Method

Many thanks to Kaggle for setting up this competition, and congratulations to the winners! I enjoyed it and learned a lot from working on this challenging data and from reading the winners’ posts. I am sorry I did not find the time last week to write this report.

The data came with many categorical features with a high number of values. At the very beginning, I removed useless features (with weka.filters.unsupervised.attribute.RemoveUseless -M 99.0) and removed the features with almost 100% missing values. After that, I transformed the categorical features into groups of binary features, each indicating yes or no for a specific value. I also generated 4 quarter features and 12 month features from startdate, and generated binary indicator features for missing values. The binary features, date-based features, indicator features, and the other numerical features, after simply filling missing values with the mean, were fed into the R randomForest classifier for RFE. With that I got 94.9x on the leaderboard. I kept tuning along this line, but the accuracy could not be improved further. Then I started to suspect there was some information loss during the feature transformation and feature selection process.
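The preprocessing described above can be sketched roughly as follows in R. This is my own reconstruction, not the original pipeline: the column name startdate comes from the post, while the function names and the use of randomForest importance as a simple stand-in for the full RFE loop are assumptions.

```r
# Rough sketch (not the original code): missing-value indicators, month/quarter
# features from startdate, mean imputation, one-hot encoding of categoricals,
# and a randomForest importance ranking as a proxy for RFE.
library(randomForest)

prepare_features <- function(df, date_col = "startdate") {
  # Binary indicator features for missing values
  for (col in names(df)) {
    if (any(is.na(df[[col]]))) {
      df[[paste0(col, "_missing")]] <- as.integer(is.na(df[[col]]))
    }
  }

  # 12 month values and 4 quarter values derived from the start date
  d <- as.Date(df[[date_col]])
  df$start_month   <- factor(format(d, "%m"))
  df$start_quarter <- factor(quarters(d))
  df[[date_col]] <- NULL

  # Mean-impute the numeric columns
  for (col in names(df)[sapply(df, is.numeric)]) {
    df[[col]][is.na(df[[col]])] <- mean(df[[col]], na.rm = TRUE)
  }

  # Keep missing categorical values as their own level, then expand each
  # categorical feature into one binary (0/1) column per value
  is_cat <- sapply(df, is.factor)
  df[is_cat] <- lapply(df[is_cat], addNA)
  model.matrix(~ . - 1, data = df)
}

# Rank features by randomForest importance (y must be a factor for classification)
rank_features <- function(x, y) {
  rf <- randomForest(x, y, ntree = 200, importance = TRUE)
  sort(importance(rf)[, "MeanDecreaseGini"], decreasing = TRUE)
}
```

An RFE loop would then repeatedly retrain on the remaining features after dropping the lowest-ranked ones, stopping when performance no longer improves.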

So I tried to build classifiers directly on the categorical features without transforming them into binary features. A simple frequency-based pre-filtering was applied: for each raw categorical feature, all values that appeared in fewer than 10 instances were merged into a single common value “-1”. However, R randomForest cannot accept a categorical feature with more than 32 values, so I had to split each categorical feature into “sub-features”, each with no more than 32 values. To split the values across sub-features, I first sorted them by information gain; the top 31 values were assigned to sub-feature 1, the next 31 values to sub-feature 2, and so on. With this feature transformation strategy I got 94.6x on the leaderboard.
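A sketch of this splitting strategy is below. It is my interpretation rather than the original code: the function names, the binary-target information gain, and the use of an “other” catch-all as the 32nd level of each sub-feature are assumptions.

```r
# Sketch: merge rare categorical values into "-1", rank the remaining values
# by information gain against a binary target, and split them into sub-features
# of at most 32 levels so that randomForest will accept them.

entropy <- function(y) {
  if (length(y) == 0) return(0)
  p <- table(y) / length(y)
  p <- p[p > 0]
  -sum(p * log2(p))
}

info_gain_per_value <- function(x, y) {
  # Information gain of the binary split "x == v" for each value v
  h <- entropy(y)
  sapply(levels(x), function(v) {
    in_v <- x == v
    h - mean(in_v) * entropy(y[in_v]) - mean(!in_v) * entropy(y[!in_v])
  })
}

split_categorical <- function(x, y, min_count = 10, max_levels = 32) {
  x <- droplevels(factor(x))
  # Merge values seen in fewer than min_count instances into "-1"
  rare <- names(which(table(x) < min_count))
  levels(x)[levels(x) %in% rare] <- "-1"
  # Rank the values by information gain, then chunk them into groups of 31,
  # leaving one slot per sub-feature for an "other" catch-all level
  ranked <- names(sort(info_gain_per_value(x, y), decreasing = TRUE))
  chunks <- split(ranked, ceiling(seq_along(ranked) / (max_levels - 1)))
  lapply(chunks, function(vals) {
    factor(ifelse(x %in% vals, as.character(x), "other"))
  })
}
```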

The next thing I tried was simply combining the top features from the two approaches above. The randomForest classifiers on the combined feature sets improved the leaderboard ROC to 95.1x-95.3x, depending on the instances used for training. The best classifiers were trained only on instances after 0606, only on instances after 0612, and only on instances after 0706. Finally, I observed that the predictions from these classifiers were different enough that it was worth taking a majority vote over them, which gave my best leaderboard AUC of 95.555; this generalized to the other 75% of test instances with a final AUC of 96.1051.
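The final ensembling step could look something like the sketch below. The cutoff values are taken from the post as opaque strings and assumed comparable with startdate; the data frames (train, test), the target column and its positive level “1”, the probability threshold, and the averaged score are all hypothetical.

```r
# Sketch (my reconstruction): train a randomForest on the combined feature set
# restricted to different date cutoffs, then combine the predictions.
library(randomForest)

cutoffs <- c("0606", "0612", "0706")  # cutoff values mentioned in the post

models <- lapply(cutoffs, function(cut) {
  sub <- train[train$startdate > cut, ]  # keep only instances after the cutoff
  randomForest(target ~ ., data = sub, ntree = 500)
})

# Per-model probability of the positive class on the test set
# (assumes the target factor has a level named "1")
probs <- sapply(models, function(m) predict(m, test, type = "prob")[, "1"])

# Majority vote over the thresholded class labels, as described in the post;
# the averaged probability is kept as a ranking score for AUC
vote  <- rowMeans(probs > 0.5) > 0.5
score <- rowMeans(probs)
```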

Originally published at blog.kaggle.com on March 1, 2011.
