NIPS 2016 experience and highlights
It really was a crazy week for me, as a first-timer both at NIPS and in Barcelona. The impressions of the conference and its atmosphere that I had gathered from colleagues and friends were fuzzy, so I decided to set aside all my elaborate plans and spreadsheets of publications I hoped to read and question authors about, and just dive into the experience.
The first shock was the sheer number of people at the conference, even though I had known a few weeks in advance that attendance had skyrocketed compared to the previous year. It was somewhat surreal to meet almost my entire Twitter feed in person and to have the chance to question the authors of this year’s notable publications directly.
However, the format of the conference had some downsides with such an enormous number of attendees. Unlike other conferences, where a dedicated area keeps posters available for the entire event, this year’s NIPS rotated posters daily because of the huge number of accepted publications. As a result, the poster area was very crowded and loud, and it was nearly impossible to familiarise oneself with more than a handful of works. For me this took the odd form of walking past all the posters, briefly reading titles and abstracts, recognising some works I had read on arXiv earlier in the year, and stopping to read only a few in their entirety: the time needed to discuss a poster with its author made checking all of them impossible.
Overall, the experience was overwhelming in a good way, like arriving hungry at an all-you-can-eat buffet with such a variety of dishes that it is impossible to try even 10% of them. With a more systematic approach I could probably have gained much deeper insights into some areas. Alas, this year’s NIPS is over, but I hope to focus better next time.
There were quite a few highlights. The main one was the shift of attention from natural language processing to reinforcement learning. The rest form quite a list of interesting works, both from the main track and from the RL and large-scale computer vision workshops that I attended.
- Great tutorials on variational inference and generative adversarial networks on the first day of the conference: somewhat general, but insightful lectures describing state-of-the-art approaches and models.
- Interesting work on Deep Learning for Predicting Human Strategic Behavior, which aims to predict the distribution of human choices in games like rock-paper-scissors.
- Really impressive, “I’m going to try out these ideas in my current research projects” works on recurrent neural networks: Using Fast Weights to Attend to the Recent Past and Phased LSTM. The two papers present different approaches to curing the “forgetfulness” of plain LSTM networks.
- A fun presentation from Boston Dynamics, with a sad question from the audience about the lack of ML in their robot control algorithms.
- Learning What and Where to Draw, with an impressive presentation of a new image generation model.
- Symposium on recurrent neural networks with great talks from Jason Weston, Ilya Sutskever and Nando de Freitas.
- Very useful talks, How to Train a GAN and Nuts and Bolts of Deep RL Research, full of practical tips and tricks.
- A whole range of image-to-3D-shape GAN models at the 3D Deep Learning workshop, including the impressive Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling.
- The Large Scale Computer Vision Systems workshop. Great talks on the TorontoCity Benchmark dataset, where road and building segmentation from satellite imagery reached accuracy close to that of OSM; on ImageNet for transfer learning, with a range of experiments on pretraining models on subsets of ImageNet data and the resulting change in performance on selected tasks; and on the super-resolution paper from Twitter.
All in all, this NIPS was incredible, with lots of genuinely insightful talks, conversations and papers that helped me better understand the hot topics of today’s Deep Learning and formalise the ideas that I want to research.