Insights on Montreal AI Symposium 2018

EAI Science Curation
Published in Element AI Lab · Sep 28, 2018

The Montreal AI Symposium was held at McGill University on August 28th, 2018. It gathered more than 600 participants from the greater Montreal AI community, bringing together world-renowned academics and industry leaders. The event covered a wide range of research topics, highlighting the latest research in artificial intelligence, machine learning and deep learning.

With the help of 70 reviewers, 10 oral presentations and 50 posters were selected from a pool of 85 submissions.

This is a short recap of the presentations held at the Symposium. If you want to see the whole thing, you can watch the videos of each session on the symposium website.

The Keynote talks

“Artificial Intelligence and the ‘Barrier of Meaning’”
by Melanie Mitchell, Professor at Portland State University (Portland, OR)

Beginning with a historical perspective on AI, Prof. Mitchell drew our attention to the trustworthiness of deep learning models. She argued that the weaknesses and vulnerabilities of AI can be attributed, at least in part, to the discrepancy between what “understanding” means for a human and what machine-level comprehension actually is. She described some of the issues with deep learning models, such as unreliability, poor generalization and lack of common sense, which can not only cause catastrophic errors like the Tesla car crash but also increase models’ vulnerability to adversarial attacks. To help bridge the barrier of meaning, Prof. Mitchell suggests taking a closer look at the core components of human-like understanding: intuitive physics, causality, and theory of mind.

“(Un)fairness in AI vision and language”
by Margaret Mitchell, Senior Research Scientist at GoogleAI (Seattle, WA)

Given that data is a prerequisite for training machine learning models, its inherent biases can have a tremendous impact (e.g. on knowledge extraction). Dr. Mitchell discussed how human biases can enter AI systems as early as the data collection, annotation and selection processes. With human biases introduced at the very beginning of the machine learning pipeline, a Bias Network Effect is created that can amplify injustice (e.g. criminal face recognition, homosexuality prediction). She emphasized that it is our responsibility as researchers to help influence how AI evolves. To illustrate, she gave an example from an extensive study her team conducted on correlates of suicide risk, for which they decided never to publish the results, to prevent the media from taking them out of context.

The Panel on Montreal AI ecosystem

Philippe Beaudoin (Element AI) chaired a panel discussion with the following guests: Narjès Boufaden (Keatext), Yoshua Bengio (Mila, Université de Montréal), Mark Maclean (Montreal International), Joelle Pineau (McGill and Facebook Montreal) and Sylvain Carle (Real Ventures).

The main question was: “What are the concrete actions that one could take to reinforce the AI ecosystem in Montreal, particularly considering the link between academia and industry?”

1. The first point raised was regarding intellectual property. There was a call for mathematical formulae and code to be open-source, and not patentable. Open-sourcing and open-publishing may be the new way to work, instead of using the traditional defensive patent approach, which also tends to limit publication and exchange.

2. The second point brought up was about building a community and facilitating communication. We need better, more fluid communication within the AI ecosystem to promote the expertise and needs of both academia and industry.

3. Another point raised was about building a framework to attract and retain companies in Montreal. We should think about how to get companies to establish themselves in Canada (possibly in Montreal), and what we lack compared to other countries. Continuing on this topic, a panelist asked: how do we sustain our exponential growth and plan for a future of 10X growth? Suggestions pointed to increasing global connectedness and supporting Canadian AI companies so they can continue growing.

4. Finally, a case was made for our duty to transfer knowledge to decision makers and government entities, as well as to other organizations that have yet to integrate AI. It is also our role to develop a healthy pipeline of students and give them the time to grow and mature as researchers.

The Element AI Presentations

FashionGen:

Element AI was represented at the Symposium with results from the FashionGen challenge it launched. The Generative Fashion Dataset, developed with the luxury brand Ssense, contains more than 325K full-HD images with metadata (description, gender, category, …). The goal of the challenge was to generate fashion images from descriptions. The project draws on neuropsychology (humans learn in a multimodal way), degeneracy in neural structure, and Edelman’s idea of re-entrance.

Image from FashionGen

Reinforcement Learning:

Element AI applied research scientist Rick Valenzano presented work on Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning, done at the University of Toronto in collaboration with Element AI. The project proposes giving the AI agent access to the reward function through a finite-state-automaton-like structure called a Reward Machine. The agent can use the reward machine to keep track of which phase of a task it is currently in. It can also decompose the reward machine into subtasks and simultaneously learn a sub-policy for each.
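
The reward-machine idea can be sketched in a few lines. Everything below (state names, events, and the key-and-door task) is an invented illustration of the general structure, not code from the paper:

```python
# A minimal sketch of a Reward Machine: a finite-state automaton whose
# transitions are triggered by high-level events and emit rewards.
# All names and the example task are illustrative.

class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        """Advance the machine on an observed event; return the reward."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # events with no transition leave the state unchanged

# Example: "get the key, then open the door" as a two-phase task.
rm = RewardMachine(
    transitions={
        ("u0", "got_key"): ("u1", 0.0),   # phase 1 done, no reward yet
        ("u1", "at_door"): ("u2", 1.0),   # task complete, reward 1
    },
    initial_state="u0",
)
rm.step("at_door")           # ignored in u0: door is useless without the key
rm.step("got_key")           # advances to u1
reward = rm.step("at_door")  # now yields 1.0
```

Because the machine's state is observable, the agent always knows which phase it is in, and each state can be treated as its own subtask with its own sub-policy.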

The Technical Talks

DuckieTown:

DuckieTown is a simple platform for research and education in autonomous driving and robotics. Mila students built a simulator with a virtual DuckieTown environment to train models, adding domain randomization to help transfer from the simulator to the real world. For example, they randomly add variable objects to the background of the simulator so that the robot learns to pay more attention to the road and less to background objects. They also use Docker to make experiments more reproducible. On the machine learning side, they have worked on imitation learning via uncertainty estimation and on a reward function that measures safety.
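
Domain randomization itself is simple to sketch: draw fresh nuisance parameters for every training episode so the policy cannot latch onto the simulator's exact appearance. The parameter names and ranges below are invented for illustration; the real simulator randomizes its own set of visual properties:

```python
import random

def sample_randomized_env_params(rng):
    """Draw a fresh set of nuisance parameters for one training episode.
    All names and ranges are illustrative, not the simulator's actual API."""
    return {
        "light_intensity": rng.uniform(0.5, 1.5),       # random lighting
        "road_texture": rng.choice(["asphalt", "concrete", "brick"]),
        "n_background_objects": rng.randint(0, 10),     # background clutter
        "camera_noise_std": rng.uniform(0.0, 0.02),     # sensor noise
    }

rng = random.Random(0)  # seeded for reproducible experiments
params = sample_randomized_env_params(rng)
# a simulator would apply these before rendering each episode, so the
# policy learns features (the road) that stay stable across all draws
```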

There will be a live competition at NIPS 2018, The AI Driving Olympics, with objectives related to lane following (with and without dynamic obstacles) and navigation, spanning optimization, simulation and physical prototypes. Can’t wait!

Live Demo: Simulation to real life transfer after training on the simulator

Generative Adversarial Networks:

There were three presentations about Generative Adversarial Networks (GANs). The main problems that were considered were:

  • How to discriminate between fake and real data, and generate realistic images? This was addressed by using a parametric adversarial divergence as the learning objective for generative modeling.
  • How to design better algorithms for GANs? The speaker explained that GANs can be formulated as a variational inequality, a formulation that encompasses most GAN variants. They apply a stochastic version of the extragradient method (SEM), including a variant that requires half as many gradient computations (OneSEM).
  • How to learn the base distribution in implicit generative models? They proposed a two-stage optimization procedure that maximizes an approximate implicit density model, and showed that their method outperforms probabilistic PCA, GANs and VAEs (variational autoencoders) on two image datasets (MNIST, CelebA).
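
As a toy illustration of the extragradient idea from the second talk, consider the bilinear game min_x max_y f(x, y) = x·y, where plain simultaneous gradient descent-ascent spirals away from the equilibrium but extragradient converges. This is a deterministic, two-scalar sketch, not the stochastic variants from the paper:

```python
# Extragradient on the bilinear game f(x, y) = x * y:
# take a look-ahead (extrapolation) step, then update the ORIGINAL point
# using the gradients evaluated at the look-ahead point.

def extragradient_step(x, y, lr=0.1):
    # gradients of f(x, y) = x*y: df/dx = y, df/dy = x
    gx, gy = y, x
    # extrapolation: look-ahead point (descent in x, ascent in y)
    x_half = x - lr * gx
    y_half = y + lr * gy
    # update from the original point with look-ahead gradients
    gx2, gy2 = y_half, x_half
    return x - lr * gx2, y + lr * gy2

x, y = 1.0, 1.0
for _ in range(1000):
    x, y = extragradient_step(x, y)
# (x, y) spirals in toward the equilibrium (0, 0)
```

With plain descent-ascent the same iteration diverges; the extrapolation step is what makes the rotation of the bilinear game contract instead of expand.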

Conditional Class Dependencies:

Learning to Learn with Conditional Class Dependencies is about exploiting class dependencies in MiniImageNet using conditional transformations. The speakers combined meta-learning approaches (e.g. memory-augmented networks), metric learning such as matching networks and prototypical networks, gradient-based learning of adaptive initializations (e.g. Reptile), and conditional transformations such as a modulated ResNet with conditional instance normalization.
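
Conditional instance normalization, one of the conditional transformations mentioned, can be sketched on a single feature channel: normalize the features, then apply a scale and shift selected by the conditioning class. This pure-Python toy uses invented values and hypothetical names, not the paper's architecture:

```python
def conditional_instance_norm(features, class_id, gammas, betas, eps=1e-5):
    """Normalize one channel, then modulate with per-class scale/shift.
    features: list of floats; gammas/betas: one scale/shift per class."""
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    normed = [(f - mean) / (var + eps) ** 0.5 for f in features]
    g, b = gammas[class_id], betas[class_id]
    return [g * n + b for n in normed]

# the same features are modulated differently depending on the class
feats = [1.0, 2.0, 3.0, 4.0]
out_a = conditional_instance_norm(feats, 0, gammas=[1.0, 2.0], betas=[0.0, 0.5])
out_b = conditional_instance_norm(feats, 1, gammas=[1.0, 2.0], betas=[0.0, 0.5])
```

The conditioning lets one shared backbone behave differently per class, which is the sense in which the transformation exploits class dependencies.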

Energy management applications:

There were three interesting presentations about applications of AI in energy management:

  • The first one was about Deep Learning for Smart Charging of Electric Vehicles. They developed a data-driven strategy to charge electric cars smartly, managing energy and time while matching supply and demand to adjust for fluctuating prices. This was a two-step process: first, dynamic programming computes the optimal charging schedule; then, from these optimal decisions and other historical data, a supervised learning model learns to make the right decision in real time, without knowledge of future energy prices or car usage.
  • The second presentation was about Transfer Deep Reinforcement Learning for Residential Energy Management. The goal is to create a smart grid at the local scale that can forecast short-term (hourly) energy demand on a house-by-house basis. Machine learning is then used to improve short-term electricity load forecasting and reduce energy costs.
  • The final one was about Deep Photovoltaic Nowcasting. Solar energy is renewable and green, but weather patterns are uncontrollable. The goal is to forecast future photovoltaic (PV) power at a short time scale (hence “nowcasting”), to help backup generators kick in faster on cloudy days. They gathered 90 days of historical PV power data along with photos of the sky above the panels, and used deep learning to learn the correlation between cloud cover and the PV power signal.
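
The scheduling step of the first talk can be illustrated with a toy version. With a single fixed charging rate, the optimization collapses to picking the cheapest hours, which is all this sketch does; the real problem (variable rates, departure times, battery dynamics) needs a full dynamic program. Prices and constraints below are invented:

```python
def optimal_charging_schedule(prices, hours_needed):
    """Charge for `hours_needed` of the len(prices) hours at minimum cost.
    Toy version: one fixed charging rate, so just pick the cheapest hours."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = set(ranked[:hours_needed])
    cost = sum(prices[h] for h in charge_hours)
    return sorted(charge_hours), cost

prices = [0.30, 0.12, 0.10, 0.25, 0.11, 0.28]  # $/kWh per hour (made up)
schedule, cost = optimal_charging_schedule(prices, hours_needed=3)
# schedule == [1, 2, 4]: these optimal (state, decision) pairs are the kind
# of labels a supervised model can learn from, so it can later decide in
# real time without seeing future prices
```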

We were really proud to be there. More than 20 Elementals went to this event. Can’t wait for next year!

Written by the science curation team, with contribution from Fanny Riols and Rick Valenzano

The Science Curation Team is formed by Catherine Lefebvre and Rachel Samson. Our role is to help Element AI’s research activities shine and to help our researchers stay abreast of the literature and activities in the larger AI community. Stay tuned for more coverage! Feel free to reach out; we are pretty chatty when it comes to AI.
