Highlights of NeurIPS 2018

Published in The Hive · Dec 23, 2018

By Mohan Reddy, CTO of The Hive

The biggest machine learning conference, NeurIPS 2018, was held in Montreal, Canada from December 2–8, 2018.

NeurIPS 2018 was one of the biggest gatherings of AI researchers, and this year's tickets sold out in less than 12 minutes. After a huge backlash over the old acronym, the conference board agreed to change the official acronym to NeurIPS. Many changes were also made this year to create a welcoming environment for women and other underrepresented groups. The broader themes of the conference were Accountability and Algorithmic Bias, Trust, Robustness and, most importantly, Diversity.

It was the biggest ML conference so far, with approximately 8,000 attendees and around 1,000 accepted papers.

Here are some of the notable highlights of the conference.

For the first time, NeurIPS invited a speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Test of Time Award

Progress in machine learning (ML) is happening so rapidly that it can sometimes feel like any idea or algorithm more than two years old is already outdated or superseded by something better. However, old ideas sometimes remain relevant even when a large fraction of the scientific community has turned away from them. In the specific case of deep learning (DL), the growth in both the availability of data and computing power renewed interest in the area and significantly influenced research directions.

The NIPS 2007 paper “The Tradeoffs of Large Scale Learning” by Léon Bottou (then at NEC Labs, now at Facebook AI Research) and Olivier Bousquet (Google AI, Zürich) is a good example of this phenomenon. This seminal work investigated the interplay between data and computation in ML, showing that if one is limited by computing power but can make use of a large dataset, it is more efficient to perform a small amount of computation on many individual training examples rather than to perform extensive computation on a subset of the data. This demonstrated the power of an old algorithm, stochastic gradient descent, which is nowadays used in pretty much all applications of DL.
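To make the tradeoff concrete, here is a toy sketch (my own illustration, not code from the paper): under a fixed budget of per-example gradient evaluations, plain stochastic gradient descent, which touches one example per step, gets closer to the true solution than full-batch gradient descent, which spends the same budget on only a handful of passes.

```python
# Toy comparison of full-batch gradient descent vs. SGD on least-squares
# regression, under a fixed budget of per-example gradient evaluations.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
scales = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = rng.normal(size=(n, d)) * scales          # mildly ill-conditioned features
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

BUDGET = 50_000  # total per-example gradient evaluations allowed

def full_batch_gd(lr=0.05):
    # Each step costs n example-gradients, so only BUDGET // n steps fit.
    w = np.zeros(d)
    for _ in range(BUDGET // n):
        w -= lr * X.T @ (X @ w - y) / n
    return w

def sgd(lr=0.005):
    # Each step costs one example-gradient, so BUDGET steps fit.
    # Step sizes here are hand-picked for the demo, not tuned carefully.
    w = np.zeros(d)
    for _ in range(BUDGET):
        i = rng.integers(n)
        w -= lr * X[i] * (X[i] @ w - y[i])
    return w

print("full batch error:", np.linalg.norm(full_batch_gd() - w_true))
print("sgd error:       ", np.linalg.norm(sgd() - w_true))
```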

Fig: Test of Time award

Best Paper Award: Neural Ordinary Differential Equations

The main idea is to define a deep residual network as a continuously evolving system: instead of updating the hidden units layer by layer, define their derivative with respect to depth. To me, the most interesting aspect is that treating the system as a continuous-time model allows us to predict continuous-time processes. The authors show how to take data that arrives at arbitrary times, rather than at fixed intervals, and predict the output at arbitrary future times. They test this on fairly simple synthetic data, predicting trajectories of spirals, and get really nice results. I definitely want to see more of this type of work in the future, on larger real-world problems, to see how it does.
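As a rough sketch of the idea (my own toy code, using a fixed-step Euler integrator rather than the adaptive solvers and adjoint-based backpropagation used in the paper and in the authors' torchdiffeq library):

```python
# Instead of a stack of residual blocks h_{t+1} = h_t + f(h_t),
# parameterize the derivative dh/dt = f(h, t) and integrate it.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the dynamics dh/dt = f(h, t)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(),
                                 nn.Linear(64, dim))

    def forward(self, h, t):
        # Concatenate time so the dynamics can vary with "depth".
        t_col = t.expand(h.shape[0], 1)
        return self.net(torch.cat([h, t_col], dim=1))

def odeint_euler(func, h0, t0=0.0, t1=1.0, steps=20):
    # Each Euler step plays the role of one residual block, but the
    # number of steps is decoupled from the number of parameters, and
    # t1 can be any (even irregularly spaced) query time.
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        t = torch.tensor(t0 + i * dt)
        h = h + dt * func(h, t)
    return h

func = ODEFunc(dim=2)
h0 = torch.randn(8, 2)        # batch of 8 initial hidden states
h1 = odeint_euler(func, h0)   # hidden state at "depth" t = 1
```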

Invited Talk: What Bodies Think About: Bioelectric Computation Outside the Nervous System, Primitive Cognition, and Synthetic Morphology — Michael Levin

Biology was computing long before brains evolved. Somatic decision-making and memory are mediated by ancient, pre-neural bioelectric networks across all cells. Caterpillars retain their memories as butterflies, despite their brains being liquefied during metamorphosis.

Planarians (flatworms that can regenerate parts of their body) also regenerate their memories when their brains are removed. If a planarian is cut into pieces, each piece regrows the rest of the body. Regeneration is a computational problem. There are huge opportunities for ML applied to regenerative medicine. Somatic tissues (cells in the body) form bioelectric networks like the brain and make decisions about anatomy. This was an incredible talk.

Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning — Joelle Pineau

Reinforcement learning (RL) has been shown to be an effective mechanism for learning complex tasks via interaction with the environment. Recent advances in combining deep neural networks with RL have produced powerful tools that outperform previous state-of-the-art methods in many domains, including robotics, video games, and board games. However, due to the interactive nature of these algorithms, as well as both intrinsic and extrinsic stochasticity, learning performance can be highly variable and difficult to reproduce. Furthermore, reusing information between tasks can be problematic, since these techniques may overfit to a single task or environment. Joelle Pineau of FAIR addressed all three concerns: reproducible, reusable, and robust reinforcement learning.

Fig: Slide from Joelle Pineau’s talk on Reinforcement Learning
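One concrete practice from the reproducibility discussion, shown here as a toy sketch under my own assumptions (the training function below is a hypothetical stand-in for a real RL run): report performance aggregated across several random seeds with a confidence interval, never from a single run.

```python
# Aggregate results over random seeds instead of reporting one run.
import numpy as np

def train_and_evaluate(seed: int) -> float:
    # Stand-in for a full RL training run; real runs vary with the seed
    # through initialization, exploration noise, and environment
    # stochasticity. Here that variance is simulated.
    rng = np.random.default_rng(seed)
    return 200.0 + 40.0 * rng.normal()  # pretend final episode return

returns = np.array([train_and_evaluate(seed) for seed in range(10)])

mean = returns.mean()
sem = returns.std(ddof=1) / np.sqrt(len(returns))  # std. error across seeds
print(f"return: {mean:.1f} +/- {1.96 * sem:.1f} (approx. 95% CI, n={len(returns)})")
```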

Invited Talk: Designing Computer Systems for Software 2.0 — Kunle Olukotun

Kunle Olukotun explained that despite the end of Moore's Law, new architectures will be created for neural networks, and they will be more efficient (more compute per watt), so don't worry: the huge models of today will look small tomorrow. The use of machine learning to generate models from data is replacing traditional software development for many applications. This fundamental shift in how we develop software, known as Software 2.0, has provided dramatic improvements in quality and ease of deployment for these applications. The continued success and expansion of the Software 2.0 approach must be powered by powerful, efficient, and flexible computer systems tailored for machine learning applications. The full-stack design approach he described integrates machine learning algorithms optimized for the characteristics of applications and the strengths of modern hardware; domain-specific languages and advanced compilation technology designed for programmability and performance; and hardware architectures that achieve both high flexibility and high energy efficiency.

Unsupervised Learning

Alex Graves argued that unsupervised learning offers a much richer training signal than supervised learning, but that its learning objective is far less clear. He split unsupervised objectives into two broad categories: (1) learning to model the data and (2) learning to model representations.

Fig: Conclusions slide from Alex Graves’ talk on Unsupervised Learning
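As a toy illustration of the second category (my own minimal example, not from the talk): an autoencoder learns a representation purely from a reconstruction objective, with no labels involved.

```python
# Minimal autoencoder: the training signal comes from the data itself.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=32, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(),
                                     nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(),
                                     nn.Linear(16, dim))

    def forward(self, x):
        z = self.encoder(x)        # the learned representation
        return self.decoder(z)

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 32)            # random data stands in for a real dataset

for _ in range(100):
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```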

Others

PyTorch developer ecosystem expands, 1.0 stable release now available

The PyTorch ecosystem and community continue to grow, with interesting new projects and educational resources for developers. The 1.0 release adds production-oriented capabilities, notably TorchScript and its JIT compiler for exporting models beyond Python, along with support from major cloud platforms.
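For example, a small sketch of the TorchScript tracing workflow (standard torch.jit calls; the model here is just a placeholder):

```python
# Trace a model into a serializable TorchScript graph that can be
# loaded and run without the Python interpreter (e.g. from C++).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
model.eval()

example = torch.randn(1, 10)
traced = torch.jit.trace(model, example)  # record the ops on an example input

traced.save("model.pt")                   # self-contained deployable artifact
restored = torch.jit.load("model.pt")
print(restored(example))
```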

Conclusions

As seen from many of the talks and papers at NeurIPS 2018, many of the world's best researchers are working on improving the core algorithms, and we have entered a golden era of reinforcement learning. We will see more use of imitation learning, where agents learn directly from a human supervisor, and we will see evolution strategies applied to the challenges of reinforcement learning. I am sure we are going to see more along these lines at ICLR 2019 and ICML 2019.

Finally here’s my reading list from NeurIPS 2018:

  1. FishNet: A Versatile Backbone for Image, Region, and Pixel Level Prediction by Shuyang Sun, Jiangmiao Pang, Jianping Shi, Shuai Yi, Wanli Ouyang
  2. Dendritic cortical microcircuits approximate the backpropagation algorithm by João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn
  3. Pelee: A Real-Time Object Detection System on Mobile Devices by Jun Wang, Tanner Bohn, Charles Ling
  4. Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base by Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, Jian Yin
  5. Neural Ordinary Differential Equations by Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, David K. Duvenaud
  6. Towards Robust Interpretability with Self-Explaining Neural Networks by David Alvarez Melis, Tommi Jaakkola
  7. Kalman Normalization: Normalizing Internal Representations Across Network Layers by Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, Liang Lin
  8. HitNet: Hybrid Ternary Recurrent Neural Network by Peiqi Wang, Xinfeng Xie, Lei Deng, Guoqi Li, Dongsheng Wang, Yuan Xie
  9. GILBO: One Metric to Measure Them All by Alexander A. Alemi, Ian Fischer
  10. The Importance of Sampling in Meta-Reinforcement Learning by Bradly Stadie, Ge Yang, Rein Houthooft, Peter Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever
  11. On the Dimensionality of Word Embedding by Zi Yin, Yuanyuan Shen
  12. Mesh-TensorFlow: Deep Learning for Supercomputers by Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, Blake Hechtman
  13. Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias by Abhinav Gupta, Adithyavairavan Murali, Dhiraj Prakashchand Gandhi, Lerrel Pinto
  14. Step Size Matters in Deep Learning by Kamil Nar, Shankar Sastry
  15. Precision and Recall for Time Series by Nesime Tatbul, Tae Jun Lee, Stan Zdonik, Mejbah Alam, Justin Gottschlich
  16. Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation by Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, Russ Tedrake, John Duchi
  17. Exploration in Structured Reinforcement Learning by Jungseul Ok, Alexandre Proutiere, Damianos Tranos
  18. Hamiltonian Variational Auto-Encoder by Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic
  19. How to Start Training: The Effect of Initialization and Architecture by Boris Hanin, David Rolnick
  20. Lifelong Inverse Reinforcement Learning by Jorge Armando Mendez Mendez, Shashank Shivkumar, Eric Eaton
  21. Is Q-Learning Provably Efficient? by Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, Michael Jordan
  22. Reinforcement Learning for Solving the Vehicle Routing Problem by MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, Martin Takac
  23. Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning by Tom Zahavy, Matan Haroush, Nadav Merlis, Daniel J. Mankowitz, Shie Mannor
  24. Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents by Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth Stanley, Jeff Clune
  25. Inference Aided Reinforcement Learning for Incentive Mechanism Design in Crowdsourcing by Zehong Hu, Yitao Liang, Jie Zhang, Zhao Li, Yang Liu
  26. Fighting Boredom in Recommender Systems with Linear Reinforcement Learning by Romain Warlop, Alessandro Lazaric, Jérémie Mary
  27. Towards Deep Conversational Recommendations by Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, Chris Pal
  28. Out-of-Distribution Detection using Multiple Semantic Label Representations by Gabi Shalev, Yossi Adi, Joseph Keshet
  29. Differential Privacy for Growing Databases by Rachel Cummings, Sara Krehbiel, Kevin A. Lai, Uthaipon Tantipongpipat
  30. Data center cooling using model-predictive control by Nevena Lazic, Craig Boutilier, Tyler Lu, Eehern Wong, Binz Roy, MK Ryu, Greg Imwalle


The Hive is a venture fund and co-creation studio based in Palo Alto, CA, that co-creates startups focused on AI-powered applications for the enterprise.