Insilico Medicine at ICML 2018 recap

Alex Zhebrak · Published in Insilico Medicine · Jul 26, 2018

Earlier this month our team attended the thirty-fifth annual International Conference on Machine Learning (ICML) in sunny Stockholm. This time we also decided to sponsor the event and received a lot of attention at our booth in the exhibition section. Here is a brief recap of my conference experience and some notes on interesting talks and papers we discussed during the event.

Insilico team at Insilico rooftop party

For those not familiar with the field, ICML is one of the largest academic conferences on machine learning, bringing together several thousand professionals from industry and academia. This year, more than 600 papers were accepted, with almost 50% more submissions than last year.

While we at Insilico Medicine focus mainly on applications of machine learning to drug discovery and biomarker development, ICML brings together researchers with theoretical and applied expertise across many fields. We had a lot of fruitful discussions at our booth, on topics ranging from generative models for graph generation to protein structure prediction and transcription factors involved in aging.

Tutorials

The first day kicked off with a series of Tutorials. The Imitation Learning tutorial presenters did a great job reviewing the field and its current progress. I can recommend it to anyone interested in the topic or just curious about Reinforcement Learning in general (by far the hottest theme this year). One of the main reasons we at Insilico internally track major milestones across the machine learning field is the idea that a broader view brings more insight to our core research expertise. This is also a reason for machine learning companies to attend such events.

Imitation Learning tutorial

The Toward Theoretical Understanding of Deep Learning tutorial by Prof. Sanjeev Arora explored several issues within the current deep learning framework, including non-convex optimization, the curse of dimensionality, overparameterization, depth-related problems in neural networks, and more. With many theoretical questions still unsolved, the talk highlighted several possible research directions and offered interesting insights, such as the idea of measuring generator/discriminator capacity against the resulting distribution in GANs.

The Machine Learning for Personalised Health tutorial reviewed several approaches and challenges in healthcare. However, I found it lacking in real examples and applications, with many general ideas already familiar to anyone working in this domain.

It was also the first day of the sponsors’ exhibition, so after some minor pitfalls with our booth, we established a local headquarters and started talking to the audience. With eight of us attending the conference from Insilico, pretty much everyone spent more than a full working day in total discussing possible collaboration opportunities and sharing our vision and projects with visitors.

Insilico Medicine booth

Main conference and invited talks

Wednesday was the first day of the main conference track. After the opening remarks, Prof. Dawn Song presented another hot topic getting a lot of attention recently: AI security and adversarial attacks on machine learning models. It was an excellent overview of the now-famous “gibbon”/“guacamole” misclassification problems and of security issues in learning systems in general. The talk ended with a surprisingly relevant story about smart contracts and a personal data marketplace. Insilico Medicine and Bitfury are currently working on a similar project, converging artificial intelligence and blockchain technologies to create a safe ecosystem around healthcare data.
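
For readers who have not seen these attacks before, here is a minimal sketch of one classic approach, the fast gradient sign method (FGSM), written in PyTorch. The model, inputs, and epsilon value are placeholders, and this is only one of the many attack styles discussed in the talk.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.01):
        """Perturb inputs x so that a classifier is more likely to mislabel them."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels y
        loss.backward()
        # Take a small step in the direction that increases the loss,
        # then clamp back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

The perturbation is typically imperceptible to a human eye, which is exactly what makes a panda confidently labelled as a gibbon so striking.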

Obfuscated Gradients Give a False Sense of Security was not only a Best Paper Award recipient but also arguably the best paper presentation at the conference. I would recommend it to everyone as a great example of how to make a paper talk interesting and informative without turning it into a slide-by-slide walkthrough of the submitted paper.

On Thursday, Dr. Max Welling, one of the brightest machine learning minds of recent years, presented his vision of the field from a physics and information theory perspective. He also expressed concern about the energy efficiency of modern neural architectures, showcasing the work of our friends and colleagues from Dmitry Vetrov’s research group and other methods for compressing neural networks.

In her Best Paper Award talk on Delayed Impact of Fair Machine Learning, Lydia T. Liu highlighted the increasing attention to fairness combined with a lack of theoretical understanding of the matter. The research examines long-term effects on a group’s well-being and proposes an outcome curve to measure fairness. In a highly synergistic session on Friday, Prof. Joyce Y. Chai and Prof. Josh Tenenbaum presented methods and challenges of communicating with smart agents. With multiple visual examples, they outlined obstacles such as common ground, causal inference, learning paths, and intuitive cognition.

Insilico rooftop party

What is more fun and more productive for networking than a company booth at the conference? An evening party with drinks, snacks, and a perfect view! During the first two days of the event, we gave away dozens of invitations to attendees interested in learning more about Insilico or simply working on relevant topics. We had a great time talking not only about machine learning for drug discovery and aging, but also about black holes and the observable universe, artificial intelligence for the aviation industry, the economic consequences of loans in the Middle Ages, and more. After all, networking is probably the most valuable experience you can have at a conference.

Sunset at Insilico rooftop party

Papers

Here is a list of papers I noted before the conference, stumbled upon during the poster sessions, or heard about in the oral talks and workshops. Well, some of them. These are mostly related to research topics we explore at Insilico Medicine; a couple of short code sketches of the common building blocks (a generic GAN training step and a cycle-consistency loss) follow the lists below.

Generative models

  • ARAE — discrete adversarial autoencoder with learnable prior
  • FactorVAE — independence in code distribution with an extra loss term
  • Geometry Score — topological scoring of generated objects
  • GLO — learning noise-to-image mapping with reconstruction loss
  • RFGAN — augmenting GAN training with pre-trained autoencoder features
  • Mixed Batches — true-fake ratio in discriminator against mode collapse
  • TAN — smooth adversarial training with intermediate discriminator lens
  • GAIN — generative imputation of the missing data
  • NAF — replacing IAFs’ affine transformation with a neural network
  • CNP — learning conditional distribution over functions
  • Learning Representations and Generative Models for 3D Point Clouds
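
Many of the entries above are modifications of the standard GAN training loop: a different prior, an extra loss term, or a change to the discriminator’s batches. As a point of reference rather than a reproduction of any specific paper, here is a minimal sketch of that vanilla loop in PyTorch; the architectures, dimensions, and hyperparameters are placeholders.

    import torch
    import torch.nn as nn

    # Placeholder generator and discriminator for flattened 28x28 images.
    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real):  # real: (batch, 784) tensor scaled to [-1, 1]
        batch = real.size(0)
        z = torch.randn(batch, 64)

        # Discriminator: push real samples towards 1 and generated samples towards 0.
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(G(z).detach()), torch.zeros(batch, 1))
        d_loss.backward()
        opt_d.step()

        # Generator: try to make the discriminator output 1 for generated samples.
        opt_g.zero_grad()
        g_loss = bce(D(G(z)), torch.ones(batch, 1))
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()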

Graphs

Domain adaptation

  • RadialGAN — domain adaptation with extra cycle-consistency terms
  • CyCADA — adversarial domain adaptation via semantic consistency losses
  • Augmented CycleGAN — many-to-many domain adaptation with a smart use of noise
  • MAGAN — domain adaptation with a correspondence loss between domains
  • JointGAN — learning marginals and conditionals in domain adaptation
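
Several of these methods (RadialGAN, CyCADA, Augmented CycleGAN) rely on some form of cycle-consistency. As a sketch of that single ingredient, not of any full method, here is a minimal cycle-consistency loss in PyTorch; G_xy and G_yx are placeholder mapping networks between two domains X and Y.

    import torch.nn.functional as F

    def cycle_consistency_loss(G_xy, G_yx, x, y):
        """L1 penalty for failing to recover a sample after a round trip between domains."""
        loss_x = F.l1_loss(G_yx(G_xy(x)), x)  # x -> Y -> back to X should reconstruct x
        loss_y = F.l1_loss(G_xy(G_yx(y)), y)  # y -> X -> back to Y should reconstruct y
        return loss_x + loss_y

In practice this term is added to the usual adversarial losses with a weighting coefficient.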

Conclusion

Overall, it was a great experience for me personally and for the company in general. Now it is time to do the follow-ups and try out all the new ideas we picked up. Attending [and sponsoring] such events is an excellent opportunity to talk to researchers from different companies and institutions, share experience, and promote your work.

Random notes

  • You can cover your conference expenses by winning an NVIDIA Titan V.
  • During the conference, you should focus on taking as many vitamins and immune system boosters as you can, since talking to an enormous number of people from all over the world in a closed venue is a pretty unhealthy idea (at least in the short term).
  • The Stockholm outskirts are great for running, walking, or cycling.
