Some general takeaways from #NIPS2016

Igor Carron
Dec 13, 2016


I am back home, still recovering from the information overload of NIPS2016.

A few people have already written great pieces with insights into what happened there (linked further below). Here are a few things that struck me:

  • With the astounding success of Deep Learning algorithms, other scientific communities have essentially yielded to these tools in a matter of two or three years. I felt the main question at the meeting was: which field will be next? Since the Machine Learning/Deep Learning community was able to elevate itself thanks to high-quality datasets, from MNIST all the way to ImageNet, it is only fair to watch where this goes with the few datasets released during the conference, including Universe from OpenAI. Control systems and simulators (forward problems in science) seem to be the next target.
  • The recent developments in deep learning have come about in large part because most algorithm implementations have been made available by their authors. This is new, and it is probably the reason older findings did not resonate with the community (and the reason for the rift between some figures in the field). A paper is just a paper; it becomes an idea worth building on when you don’t spend all your time re-coding that paper.
  • The touching tribute to David MacKay brought home that we are not as unidimensional as we think we are.
  • There are certain sub-communities within NIPS that still do not seem to have high-quality datasets. I fear they will remain in the back seat for a little while longer. As in compressive sensing before phase transitions were found, any published paper is really just the meeting of a random dataset with a particular algorithm, with no sure way to tell how that algorithm fits in with the rest. High-quality datasets, much like phase transitions, act as acid tests (a short recap of the phase-transition idea follows this list).
  • I am always dumbfounded to find out that people read Nuit Blanche. I know the stats, but that does not take away the genuine element of surprise. Wow, and thank you!
  • Energy issues were bubbling up a little in different areas, stemming from training large hyperparameter searches and learning-to-learn models, but also from the question of how to extract information from the brain.
  • The meeting was big. Upon coming back home, I had a few “What? You were there too?” moments.
  • I bet someone that it would take more than 20 years to come up with a theoretical understanding of some of the recipes used in current ML/DL. It took longer than that for L_1 and sparsity.
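
For readers who did not follow compressive sensing, here is a minimal, hedged recap of the phase-transition idea referenced in the list above; the notation and the Donoho-Tanner framing are my additions, not part of the original post.

```latex
% Minimal sketch of the compressive sensing phase transition
% (Donoho-Tanner); notation is illustrative, not from the post.
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
Recover a $k$-sparse $x \in \mathbb{R}^n$ from $m$ Gaussian measurements
$y = Ax$ by $\ell_1$ minimization:
\[
  \hat{x} = \arg\min_{x} \|x\|_1 \quad \text{subject to } Ax = y .
\]
With undersampling ratio $\delta = m/n$ and sparsity ratio $\rho = k/m$,
recovery succeeds with high probability when $\rho < \rho^*(\delta)$
(the Donoho-Tanner curve) and fails above it. That sharp boundary gave
the field a common yardstick for comparing algorithms, the ``acid test''
role that high-quality datasets now play in machine learning.
\end{document}
```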

Here are some insightful takeaways from others:

Tomasz pointed out some of the trends on Twitter.

Jack Clark’s newsletters from before, during and after NIPS.

Paul Mineiro’s Machined Learnings: NIPS 2016 Reflections

and Jeremy Karnowski and Ross Fadely of Insight Artificial Intelligence.

During the meeting, on Twitter, the Post-facto Fake News Challenge was launched.


Igor Carron

CEO of LightOn, co-organizer of the Paris Machine Learning Meetups #MLParis. Runs the Nuit Blanche blog.