Published in SyncedReview

RE•WORK Deep Learning Summit in Toronto

RE•WORK organized the second Canadian edition of its RE•WORK Global DL Summit Series in Toronto on October 25–26. The event attracted over 600 attendees from more than 20 countries who joined conversations with leading AI and Deep Learning experts. Keynote speakers included “The Godfather of AI” Professor Geoffrey Hinton, Google Brain AI Resident Sara Hooker, Google Brain Research Scientist Shane Gu and many others, who covered topics such as neural networks, image analysis, reinforcement learning, NLP, and speech & pattern recognition.

The RE•WORK Deep Learning Summit in Toronto followed last year’s successful gathering some 500 km northeast in Montreal, another Canadian AI hub. The downtown Metro Toronto Convention Centre was abuzz with academics, investors, industry experts, and a wide range of others with an interest in or passion for Deep Learning and AI. The concurrent AI for Government Summit at the same location reaffirmed Canadian government commitments to the technology and the AI community. In between sessions at both events, speakers and attendees joined workshops and seized networking opportunities to discuss their work and share industry insights.

Geoffrey Hinton — “I wish I knew this stuff was going to work!”

Professor Hinton reviewed previous studies on ensemble learning, distillation, and label smoothing techniques. He explained that a big model can become too confident during training, and that the challenge is to prevent this overconfidence while still allowing distillation to work. Initially, he thought the solution would be to “penalize the training output distribution if the entropy of the distribution is lower than some threshold.” However, that did not help. Hinton concluded that a better idea is to penalize the entropies of the output distributions if the total entropy for a mini-batch is lower than some threshold. Hinton said the idea works on the MNIST (Modified National Institute of Standards and Technology) database and the CIFAR-10 (Canadian Institute For Advanced Research) dataset. He added that Google Brain AI Resident Rafael Müller has worked on the proposal.
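For readers unfamiliar with distillation: it trains a small student model to match the temperature-softened output distribution of a large teacher. A minimal sketch of the soft-target computation (the function names and temperature value are illustrative, not code from the talk):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T yields a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_targets(teacher_logits, T=4.0):
    """Temperature-softened teacher outputs, used as training targets
    for the smaller student model during distillation."""
    return softmax(teacher_logits, T=T)
```

At T=1 a confident teacher puts nearly all its mass on one class; at higher temperatures the small probabilities it assigns to wrong classes become visible to the student, which is where much of the transferable knowledge lives.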

Hinton concluded with a visual of takeaways from his talk:

  • When extracting knowledge from data we can use very big models or very big ensembles of models that are much too cumbersome to deploy
  • If we can extract the knowledge from the data it is quite easy to distill nearly all of it into a much smaller model for deployment
  • When training a big model, it helps if we prevent the model from becoming too certain
  • Label smoothing is an easy way to do this but it screws up distillation
  • Forcing the sum of the output entropies on each mini-batch to be above a threshold makes the big model generalize well and also allows distillation to work well
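The batch-level entropy penalty from the last takeaway can be sketched as follows. This is a minimal illustration of the idea only; the function names and threshold are assumptions, not code from the talk:

```python
import numpy as np

def softmax(logits):
    z = np.asarray(logits, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def batch_entropy_penalty(logits, threshold):
    """Extra loss term that fires only when the summed output entropy
    of a mini-batch falls below `threshold`, i.e. when the big model
    has become too confident overall."""
    p = softmax(logits)
    per_example_entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    total_entropy = per_example_entropy.sum()
    return max(0.0, threshold - total_entropy)  # zero if the batch is uncertain enough
```

Unlike label smoothing, which caps confidence on every individual example, this penalty leaves individual outputs free to be sharp as long as the batch as a whole stays uncertain enough, which is what lets distillation still work.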

When a high school student in the audience asked Hinton what he wished he’d known when he was younger, the scientist replied with his characteristic humour: “In 1986, I wish I knew this stuff was going to work!” He added that something he’d like to know right now is “whether the brain uses backpropagation.”

What’s Next in Deep Learning and AI?

The event’s opening session was What’s Next in Deep Learning & AI, which explored deep learning theories and applications. Audience members and experts alike expressed concerns about the explainability and interpretability of AI methods. David Cox from the MIT-IBM Watson AI Lab said the lab was founded to bridge the gap between academia and industry, and that when implementing AI technologies “explainability is important because if we don’t worry about it now, then the government will worry about it for us.” Cox cited the EU General Data Protection Regulation (GDPR) as an example. Following Cox’s remarks, Google Brain AI Resident Sara Hooker stressed that the interpretability method itself must be reliable, otherwise it may produce “misinformation at best, and can potentially cause more harm.”

Deep Learning Optimization

To optimize deep learning performance, researchers seek to enhance both robustness and speed. McGill University Professor Adam Oberman said he has been using mathematical tools such as adversarial training augmented with Lipschitz regularization to reduce adversarial error. Professor Graham Taylor from the University of Guelph noted that binary neural networks are also “robust to certain forms of adversarial attacks.”
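Lipschitz regularization penalizes how sharply a model’s output can change under small input perturbations, which is one way to bound the effect of adversarial examples. A minimal finite-difference sketch of the idea (the estimator and the `target` constant are illustrative assumptions; practical methods typically regularize gradients during adversarial training):

```python
import numpy as np

def lipschitz_penalty(f, x, eps=1e-3, target=1.0, seed=0):
    """Estimate the local Lipschitz quotient |f(x+d) - f(x)| / ||d||
    along a small random direction d at each point in x, and penalize
    estimates that exceed `target`."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=x.shape)
    d = eps * d / np.linalg.norm(d, axis=-1, keepdims=True)  # scale so ||d|| = eps
    quotient = np.abs(f(x + d) - f(x)) / eps
    return np.maximum(0.0, quotient - target).mean()
```

A function whose local slope stays below `target` incurs no penalty, while a steeper one is pushed back toward the Lipschitz bound during training.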

A Networking Mixer on the evening of October 25 offered attendees the opportunity to mix and talk at a local bar.

Startup Session — Launch Innovations

NetraDyne’s CTO David Julian kicked off the second day’s talks by introducing autonomously generated HD maps, which enable autonomous vehicles to “see” the road. Meanwhile, Tzvi Aviv of agritech startup AgriLogicAI said convolutional neural networks can enhance predictions for farming, provide information to insurance companies regarding the financial sustainability of an insured farm, and assess environmental risk. Drug discovery startup Phenomic AI shared its experiences using deep learning to better interpret high-content screens.

The summit attracted professionals from a wide range of fields, including financial services, agriculture, retail, and network operations. There were also plenty of young volunteers on site, like the wide-eyed high school students who sat in the front row for many presentations. Meanwhile, a topic du jour in the hallways was the nexus between AI research and development and industrial applications. And, of course: how the tech will change the future.

Journalist: Fangyu Cai | Editor: Michael Sarazen

Follow us on Twitter @Synced_Global for daily AI news

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
