Chucking everything into a Random Forest: Ben Hamner on Winning The Air Quality Prediction Hackathon

Kaggle Team · Kaggle Blog · May 1, 2012

We catch up with Ben Hamner, a data scientist at Kaggle, after he won Kaggle’s Air Quality Prediction Hackathon. As a Kaggle employee, he is ineligible for prizes.

What was your background prior to entering this challenge?

I graduated from Duke University in 2010 with a bachelor's in biomedical engineering, electrical and computer engineering, and mathematics. For the next year, I applied machine learning to improve non-invasive brain-computer interfaces as a Whitaker Fellow at EPFL. On the side, I participated in or won a number of machine learning competitions. Since November 2011, I have designed and structured a variety of competitions as a Kaggle data scientist.

What made you decide to enter?

I was hanging out at Splunk (one of the SF venues hosting the hackathon). Anthony asked me some questions about extracting features from the data, which prompted me to open it up and look at it in the afternoon.

What preprocessing and supervised learning methods did you use?

I took the lagging N components of the full time series as features (N=8 for the winning submission, chosen arbitrarily), and treated each of the 10 prediction times and 39 pollutant measures as targets. I then trained 390 Random Forests over the entire training data, one for each predicted offset time-pollutant combination. The Random Forest parameters were selected so that the models would be quick to train. The code for creating the winning model is available here.
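
To make that pipeline concrete, here is a minimal Python sketch of the setup (the winning code was MATLAB, so the array names, shapes, and forest parameters below are illustrative assumptions, not the actual implementation):

```python
# Self-contained sketch of the lagged-features + per-target Random Forest setup,
# using synthetic data. Names, shapes, and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

N_LAGS = 8          # lagging components per series (N=8 in the winning submission)
N_OFFSETS = 10      # prediction offset times
N_POLLUTANTS = 39   # pollutant measures
N_SAMPLES = 200     # synthetic training rows, purely for illustration

# Features: the lagging N values of every pollutant series, flattened per row.
X = rng.normal(size=(N_SAMPLES, N_LAGS * N_POLLUTANTS))
# Targets: one value per (prediction offset, pollutant) combination.
y = rng.normal(size=(N_SAMPLES, N_OFFSETS, N_POLLUTANTS))

# One quick-to-train forest per offset-pollutant pair: 10 * 39 = 390 models.
models = {}
for offset in range(N_OFFSETS):
    for pollutant in range(N_POLLUTANTS):
        rf = RandomForestRegressor(n_estimators=10, n_jobs=-1, random_state=0)
        rf.fit(X, y[:, offset, pollutant])
        models[(offset, pollutant)] = rf

# Prediction assembles one output per trained model.
row = X[:1]
preds = np.array(
    [[models[(o, p)].predict(row)[0] for p in range(N_POLLUTANTS)]
     for o in range(N_OFFSETS)]
)  # shape: (N_OFFSETS, N_POLLUTANTS)
```

Splitting the problem this way means each forest only has to learn a single offset-pollutant mapping, which keeps the individual models small and fast to train.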

Some straightforward approaches to improving this model include:

  • Optimizing the parameters for model performance rather than training time.
  • Directly optimizing for the error metric (mean absolute error) instead of RMSE.
  • Using a data-driven approach to select the number of previous time points to include (see the sketch after this list).
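
As a sketch of that last idea, one could pick the lag window N by cross-validated mean absolute error instead of fixing N=8 arbitrarily; the synthetic series, the build_lagged_xy helper, and the candidate windows below are all hypothetical:

```python
# Hypothetical lag-window selection by cross-validated MAE on a synthetic series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
series = rng.normal(size=1000)  # stand-in for one pollutant time series

def build_lagged_xy(series, n_lags):
    """Use n_lags consecutive values as features and the next value as target."""
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

scores = {}
for n_lags in (2, 4, 8, 16, 32):
    X, y = build_lagged_xy(series, n_lags)
    rf = RandomForestRegressor(n_estimators=20, random_state=0)
    # Score directly on the competition metric (MAE), with time-ordered splits.
    mae = -cross_val_score(
        rf, X, y, cv=TimeSeriesSplit(n_splits=5),
        scoring="neg_mean_absolute_error",
    ).mean()
    scores[n_lags] = mae

best_n = min(scores, key=scores.get)
print(f"best lag window by cross-validated MAE: {best_n}")
```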

What was your most important insight into the data?

I don’t believe I had any specific insights on the data — I barely looked at it before training the model.

Were you surprised by any of your insights?

I was surprised that domain insight wasn’t necessary to win the hackathon. Key insights have been crucial in many of our longer-running competitions.

Which tools did you use?

Once I decided to fiddle with the data, I asked David (a fellow Kaggle data scientist) to pick a random number between one and three. He picked two, and I used MATLAB. (If he had said one, I would have used R; if three, Python.)

What have you taken away from this competition?

Taking all the features and chucking them into a Random Forest works surprisingly well on a variety of real-world problems; this is demonstrated more empirically in this paper. I'm very interested in domains such as computer vision and natural language processing where this doesn't hold true, or where the problem can't be formulated simply in the standard supervised machine learning framework.

What did you think of the 24-hour hackathon format?

It was a lot of fun! I especially enjoyed seeing Kagglers in venues all over the world collaborating and competing on this problem. I’m curious to see how much better the results would be if we ran this as a standard competition over a couple months, and whether the work in the first day would comprise the majority of the improvement over the benchmark.

Originally published at blog.kaggle.com on May 1, 2012.
