What we learned at ICLR 2018

Oh yeah, and Andrei ran a half marathon with his wife Amy on Sunday in Vancouver!

Part of our culture at integrate.ai is to continuously challenge and advance the skills of our machine learning team. Building a platform with competitive differentiation is no easy task, and it’s important we stay on top of the latest in academic research.

One domain of particular interest is representation learning, as the choices we make in representing data have a huge impact on algorithmic performance. Attending ICLR last week, we were on the hunt for new techniques to increase our creativity. What follows is a quick summary of our favorite talks and posters!

(If this is up your alley, we’re hiring machine learning roles!)

Highlights from Invited Talks

Hype in the media often leads people to think neural networks are already architected like the brain. While that’s not quite true (although there are remarkable parallels between visual processing centers in the brain and convolutional neural nets!), Blake Richards showed how the neuroscience and machine learning communities could work together to inspire better neural network architectures and designs.

He talked about how developing new architectures for neural networks might explain how credit assignment (i.e., determining how much each neuron contributed to a particular behaviour) actually takes place in the brain. While backpropagation offers an effective solution to the credit assignment problem in artificial neural networks, neuroscientists have no direct evidence that backpropagation occurs in biological neural networks (see, e.g., this recent article by Guerguiev, Lillicrap, and Richards). Moreover, to be more in line with what the biology suggests, a unit in an artificial neural network should be treated as an ensemble of neurons, where the signal of a single unit represents the activities of its many neurons. The benefit of doing so is the ability to detect objects or latent variables at the group level, similar to the notion of capsules in CapsNet.

Guerguiev et al. (2017) illustrating the credit assignment problem and the machine learning solution
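
To make “credit assignment” concrete, here’s a minimal numpy sketch (with made-up layer sizes and data) of backpropagation doing exactly that in an artificial network: the chain rule propagates the error backward, producing a per-weight gradient that quantifies how much each connection contributed to the loss.

```python
import numpy as np

# A tiny two-layer regression network; sizes and data are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 examples, 3 input features
y = rng.normal(size=(4, 1))          # regression targets
W1 = 0.1 * rng.normal(size=(3, 5))   # input -> hidden weights
W2 = 0.1 * rng.normal(size=(5, 1))   # hidden -> output weights

# Forward pass
h = np.tanh(x @ W1)                  # hidden activations
y_hat = h @ W2                       # predictions
loss = ((y_hat - y) ** 2).mean()

# Backward pass: the chain rule propagates the error signal backward,
# assigning each weight "credit" (blame) for the loss.
d_y_hat = 2 * (y_hat - y) / y.size
d_W2 = h.T @ d_y_hat                 # credit assigned to output weights
d_h = d_y_hat @ W2.T
d_W1 = x.T @ (d_h * (1 - h ** 2))    # credit assigned to hidden weights

# Gradient step: each weight is adjusted in proportion to its credit.
W1 -= 0.1 * d_W1
W2 -= 0.1 * d_W2
```

It is precisely this backward sweep of exact error signals that neuroscientists have not observed in biological circuits, which is what makes the credit assignment question interesting.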

Another topic on everyone’s mind is the reusability and reproducibility of results in deep reinforcement learning research (here’s the related paper). Joelle Pineau launched the reproducibility challenge at NIPS. The shift from traditional peer-reviewed journals to rapid publication on arXiv is just one of many possible causes of the current concerns: democratization opens up innovation, but it also opens the floodgates beyond a small circle of closely-trained peers. Beyond that, the field lacks standardized definitions of baselines.

Having reusable and reproducible research lets the community understand new algorithms more deeply, spurring faster innovation and more robust results. Pineau and collaborators launched the ELF OpenGo project, where researchers release open-source code and share pre-trained models.

As Sarah Catanzaro discussed in our latest In Context podcast, reproducibility has different consequences in industry, where what matters is performance and a robust evaluation of which model to put in production. At integrate.ai, we make sure we can reproduce all candidate models and training sets (in super secure compute environments!). This enables accurate assessment of candidate models, both past and future, and guarantees that we can revert to a previous version of a production model if needed.
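
As an illustration (not our actual pipeline; the config fields and names here are hypothetical), a few lines of Python capture the basic bookkeeping behind a reproducible run: fix the random seeds and fingerprint the exact configuration, so any model artifact can be traced back to the settings and data snapshot that produced it.

```python
import hashlib
import json
import random

import numpy as np

# Illustrative config; the fields are hypothetical, not our actual schema.
config = {
    "model": "logistic_regression",
    "seed": 1234,
    "learning_rate": 0.01,
    "train_set": "customers_2018-04-30.csv",
}

# Fix every source of randomness we control so the run can be replayed.
random.seed(config["seed"])
np.random.seed(config["seed"])

# Fingerprint the exact config so a trained model can always be traced
# back to the run that produced it (and reverted to if needed).
run_id = hashlib.sha256(
    json.dumps(config, sort_keys=True).encode()
).hexdigest()[:12]
print(f"run {run_id}: training {config['model']} on {config['train_set']}")
```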

Building a meaningful career as a scientist, of course, isn’t only about technical gymnastics. It’s about each person’s search to find and realize the irreplaceable impact we can have in our world.

That’s why Daphne Koller’s fireside chat was so powerful. She explained how she made choices in her career to optimize for impact, leaving the security of a faculty position to found Coursera. She wanted to use technology to expand learning beyond the privileged few, to bring Stanford to remote corners of the world. She also encouraged women to speak out on injustices that they face in the tech community. This is paramount for us at integrate.ai — we have a committee devoted to inclusion and diversity and apply the Rooney rule to ensure diversity in our hiring pipeline.

Would you like an auto-encoder with that?

There’s a lot to take in at conferences like ICLR, so we focused on auto-encoders, a key tool for us in finding more efficient and less biased representations.

A new variant of auto-encoders giving promising results is the Wasserstein Auto-Encoder (WAE). The WAE is trained with a different regularization term than the Variational Auto-Encoder (VAE): rather than pushing each input’s encoded distribution toward the prior individually, it encourages the “global” distribution of latent representations across the training set to match the prior, which leads to higher-quality representations.

Tolstikhin et al. (2018) depicting the difference between VAEs and WAEs
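
To make the contrast concrete, here’s a rough numpy sketch of the two regularizers. A VAE penalizes the KL divergence between each input’s posterior and the prior, example by example; a WAE (in its MMD variant) instead compares a whole batch of sampled codes against prior samples. The RBF kernel and bandwidth below are illustrative choices, not the paper’s exact setup (the paper favors an inverse multiquadratics kernel).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend encoder outputs for a batch of 64 inputs with 8 latent dims.
mu = rng.normal(size=(64, 8))              # posterior means
log_var = 0.1 * rng.normal(size=(64, 8))   # posterior log-variances

# VAE-style regularizer: KL(q(z|x) || N(0, I)) for EACH input, averaged.
kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()

# WAE-MMD-style regularizer: compare the batch of sampled codes, as a
# whole, against samples from the prior.
z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)  # sampled codes
z_prior = rng.normal(size=mu.shape)                         # prior samples

def rbf_kernel(a, b, bandwidth=2.0):
    # Pairwise squared distances, then a Gaussian kernel (illustrative).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

n = z.shape[0]
mmd = (
    (rbf_kernel(z, z).sum() - n) / (n * (n - 1))                # within codes
    + (rbf_kernel(z_prior, z_prior).sum() - n) / (n * (n - 1))  # within prior
    - 2 * rbf_kernel(z, z_prior).mean()                         # cross term
)
print(f"per-example KL (VAE): {kl:.3f}   batch-level MMD (WAE): {mmd:.3f}")
```

Because the MMD term only constrains the batch-level distribution of codes, the encodings of different inputs aren’t each forced onto the prior, which is the intuition behind the higher-quality representations reported in the paper.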

That wraps up our brief report from the field. Next up is ICML! And again, if this is up your alley, come join our team!