NeurIPS debrief
This year I was lucky enough to attend NeurIPS for the first time!
It was such an exciting experience that I wanted to share a little bit of what I learned.
However, before anything else, a small disclaimer: NeurIPS was huuuuge, so what I’m sharing here is obviously a very limited and completely biased sample of the conference that does not do justice to all the amazing work that was presented! I thus encourage you to check out this year’s incredible contributions yourself here https://neurips.cc/Conferences/2019/Schedule and here https://slideslive.com/neurips/
Talks
- Social intelligence by Blaise Aguera y Arcas (Google) https://slideslive.com/38921748/social-intelligence
I really enjoyed this talk; Blaise has given several TED talks and is an amazing speaker. The story he tells raises questions about the way we approach machine learning and draws parallels with biological systems to suggest alternative approaches.
- From System 1 Deep Learning to System 2 Deep Learning by Yoshua Bengio (U. of Montreal) https://slideslive.com/38921750/from-system-1-deep-learning-to-system-2-deep-learning
Another excellent talk, digging into the progress that has been made in deep learning to move from System 1 tasks (done unconsciously by humans) to System 2 tasks (involving reasoning).
- Graph networks for Learning Physics by Peter Battaglia (DeepMind) https://slideslive.com/38921870/graph-representation-learning-1 (40 min)
How to use graph networks in the context of complex physical systems. I’m always amazed when I see models able to learn the underlying laws of physics without having to explicitly encode them.
- How to Know by Celeste Kidd (Berkeley) https://slideslive.com/38921495/how-to-know
Basically, how do people learn? A very good talk.
- Deep learning with Bayesian principles by Emtiyaz Khan (RIKEN) (https://slideslive.com/38921489/deep-learning-with-bayesian-principles)
My top choice among the “technical” talks; it was extremely elegant and comprehensive. Living beings learn by interacting with their environment through small bits of feedback, transferring what they learn to new domains, which is how lifelong learning happens. Deep learning is very different: it is “bulk learning” (we assume everything is in the massive dataset we use for training). Bayesian learning is closer to how beings learn. How do we bring the two approaches together? By doing deep learning with Bayesian principles.
Posters
It is just not possible to summarize everything, so I’m going to pick four posters per day that really caught my eye.
Day 1:
- Adversarial Examples Are Not Bugs, They Are Features (https://neurips.cc/Conferences/2019/Schedule?showEvent=15791)
- Semi-supervisedly Co-embedding Attributed Networks (https://neurips.cc/Conferences/2019/Schedule?showEvent=13761)
- Landmark Ordinal Embedding (https://neurips.cc/Conferences/2019/Schedule?showEvent=14135)
- Adversarial Fisher Vectors for Unsupervised Representation Learning (https://neurips.cc/Conferences/2019/Schedule?showEvent=15860)
Day 2:
- PIDForest: Anomaly Detection via Partial Identification (https://neurips.cc/Conferences/2019/Schedule?showEvent=15701)
- MixMatch: A Holistic Approach to Semi-Supervised Learning (https://neurips.cc/Conferences/2019/Schedule?showEvent=13648)
- SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (https://neurips.cc/Conferences/2019/Schedule?showEvent=15712)
- AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification (https://neurips.cc/Conferences/2019/Schedule?showEvent=13706)
Day 3:
- Adversarial Music: Real world Audio Adversary against Wake-word Detection System (https://neurips.cc/Conferences/2019/Schedule?showEvent=15828)
- What the Vec? Towards Probabilistically Grounded Embeddings (https://neurips.cc/Conferences/2019/Schedule?showEvent=13830)
- Zero-shot Knowledge Transfer via Adversarial Belief Matching (https://neurips.cc/Conferences/2019/Schedule?showEvent=15831)
- Training Language GANs from Scratch (https://neurips.cc/Conferences/2019/Schedule?showEvent=13586)
Demos
- The coolest: exBERT: A Visual Analysis Tool to Explain BERT’s Learned Representations
- The funniest: F1/10: An open-source 1/10th scale platform for autonomous racing and reinforcement learning
- The prettiest: Immersions — How Does Music Sound to Artificial Ears? (DeepDream for audio)
Some advice
As a first-timer I made a lot of mistakes; here are a couple of things I learned about how to NeurIPS!
1. Have a plan!
The conference is crowded… I mean really crowded!
I did not realize I needed preparation in order to be efficient, so plan in advance what you want to see, especially for the poster sessions!
2. Book your hotel early!
I got my ticket in the last round of the lottery and ended up booking a hotel 30 minutes away by car. The days are long and you can use the extra sleep, so make a reservation as soon as possible, even if it means canceling at the last minute.
3. Network
The app is very useful, and there is always something happening. I felt uncomfortable sending messages to people, but it turns out that everyone is very nice! So overcome the imposter syndrome and reach out to the people you want to meet, and do it early! A week seems like a long time, but it will be over in a heartbeat.
4. Take time for yourself
I didn’t… I ended up exhausted and full of regret at not having enjoyed Vancouver enough. So, to convince you not to do the same, “a picture is worth a thousand words” (big thanks to all the members of the Photography@NeurIPS group).