ICLR New Orleans Roundup!

Team TRASH
3 min read · May 15, 2019

ICLR, or the International Conference on Learning Representations, is a major annual machine learning conference that focuses specifically on cutting-edge research in deep learning, a powerful branch of artificial intelligence. This year, we got to go 😄 ✈️

St. Louis Cathedral in New Orleans, Louisiana

One of our data scientists, Franck, had an abstract accepted into ICLR as part of their Debugging Machine Learning Models Workshop. The abstract, Building Models for Mobile Video Understanding, explores the use of computational editing for mobile videos. 📱📹

Attending the Conference

With about 4000 attendees, ICLR is more intimate than NeurIPS, the other main conference covering deep learning. All of the major talks were held in a single auditorium, covering a wide variety of interesting topics such as generative models, training neural networks, fairness and bias in AI, and the ethical use of machine learning, among many, many others.

Day 4 Poster Session in the Ernest N. Morial Convention Center

The conference was split into talks, workshops, and poster sessions. One of our favorite talks was by Ian Goodfellow on the progress of Generative Adversarial Networks, or GANs. A GAN pairs two competing neural networks to create synthetic data. The first network, the Generator, creates fake images. The second network, the Discriminator, must determine which images are real and which are fake. The better the Generator gets, the harder it becomes to tell real and fake images apart. In recent years, GANs have demonstrated tremendous utility in a wide variety of applications, from image and video generation to neuroscience and medicine, and their output quality has progressed at a breathtaking pace.

Ian Goodfellow talk on Generative Modeling, from Facebook Live Stream
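
If you want to see the two-network setup concretely, here is a minimal sketch in PyTorch (not taken from the talk); the layer sizes, learning rates, and MNIST-like image shape are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative minimal GAN for 28x28 grayscale images (sizes are assumptions).
LATENT_DIM = 64
IMG_DIM = 28 * 28

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: outputs a real/fake logit for an image.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial round: update the Discriminator, then the Generator."""
    batch = real_images.size(0)
    real_images = real_images.view(batch, -1)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: label real images as real and generated images as fake.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the Discriminator call its fakes real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with a random batch standing in for real training images:
d_loss, g_loss = train_step(torch.rand(16, 1, 28, 28) * 2 - 1)
```

As the two losses push against each other over many such steps, the Generator's fakes become harder and harder to distinguish from real images.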

Perhaps relatedly, Zeynep Tufekci from UNC gave an interesting talk on ethical business models in machine learning, where she was critical of the AI practices of several large technology companies. Mysteriously, her talk has not been included in the Facebook Live Stream coverage of the conference, which included all the other talks. You can see the coverage here.

The Lottery Ticket Hypothesis

Though there were many interesting papers at the conference, one of our favorites was The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin. They found that large neural networks frequently contain much smaller subnetworks (winning lottery tickets) that perform similarly to the original networks. Remarkably, these subnetworks are only about 10–20% of the size of the originals. Smaller networks are preferable because they require less computational power to train and evaluate. Training neural networks typically requires a lot of time and large numbers of expensive GPUs, which is prohibitive for many applications, such as mobile computing. Smaller, faster, and cheaper networks would greatly expand the ways deep learning can be used.

The Lottery Ticket Hypothesis, from Facebook Live Stream
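
To give a feel for how a winning ticket is found, here is a rough PyTorch sketch of the train–prune–rewind recipe from the paper; the network, the 20% keep rate, and the one-shot (rather than iterative) pruning are our own simplifying assumptions.

```python
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, keep_fraction):
    """Keep only the largest-magnitude weights in each weight matrix."""
    masks = {}
    for name, param in model.named_parameters():
        if "weight" in name:
            k = int(param.numel() * keep_fraction)
            threshold = param.abs().flatten().kthvalue(param.numel() - k).values
            masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out the pruned weights (the 'losing' tickets)."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Illustrative MNIST-sized network (an assumption, not the paper's exact setup).
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
initial_state = copy.deepcopy(model.state_dict())  # remember the original init

# ... train `model` to convergence here ...

masks = magnitude_masks(model, keep_fraction=0.2)  # keep ~20% of the weights
model.load_state_dict(initial_state)               # rewind to the original init
apply_masks(model, masks)                          # the candidate winning ticket
# ... retrain the masked model; the hypothesis is it matches the full network ...
```

In the full procedure the mask is re-applied (or gradients are masked) throughout retraining so pruned weights stay at zero, and pruning is done iteratively over several rounds; see the paper for the details.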

Given the rapid pace of research in deep learning, we’re excited for the wealth of new and interesting discoveries by the time ICLR 2020, in Addis Ababa, rolls around! We hope to see you there!

– Govin & Team TRASH 😇
