Deep Hunt — Issue #60
Here are the highlights from an eventful week — Germany plans 3 billion euros in AI investment; How to Teach Artificial Intelligence Some Common Sense; Google open sources BigGAN generators; ImageNet/ResNet-50 Training in 224 Seconds
News
Germany plans 3 billion euros in AI investment
The German government plans to invest 3 billion euros to promote the use of AI applications in business, within a framework that protects fundamental social values and individual rights!
Standard Cognition raises $40M to replace retailers’ cashiers with cameras
Just a year old, Standard Cognition competes with Amazon Go in making the shopping experience seamless. Unlike Amazon Go, they deploy overhead cameras that identify you by shape and movement, not facial recognition.
Articles
How to Teach Artificial Intelligence Some Common Sense
This WIRED article explores the current power and limits of deep learning, and the challenges of making AI truly able to reason.
These Animated AI Bots Learned to Dress Themselves, Awkwardly
A study by researchers at the Georgia Institute of Technology shows that building systems capable of mundane tasks, such as dressing themselves, is proving to be an enormous challenge as well.
Tutorials, Tools and Tips
Heads-up for Deploying Scikit-learn Models to Production: Quick Checklist
This is a quick checklist of important things to keep in mind while pushing scikit-learn models into production.
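One item such checklists typically cover is model persistence: save the trained model together with the library version it was trained under, and verify that the reloaded copy reproduces the original's predictions. A minimal sketch (the toy data, model, and artifact layout below are illustrative, not from the checklist itself):

```python
# Persist a trained scikit-learn model and verify the round trip.
# Toy data and model are illustrative only.
import io

import joblib
import sklearn
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)

# Record the library version alongside the model: scikit-learn does not
# guarantee that pickled estimators load across versions.
buf = io.BytesIO()
joblib.dump({"model": model, "sklearn_version": sklearn.__version__}, buf)

# Reload (here from memory; in production, from your artifact store)
# and check the restored model agrees with the original.
buf.seek(0)
artifact = joblib.load(buf)
restored = artifact["model"]
assert list(restored.predict(X)) == list(model.predict(X))
```

In production you would also pin the scikit-learn version in your deployment environment and refuse to load an artifact whose recorded version does not match.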
Here’s a list of papers in Deep Reinforcement Learning curated by the folks at OpenAI. It’s a great resource for anyone looking to get started, though it’s a lot of reading!
Large Scale GAN Training for High Fidelity Natural Image Synthesis
DeepMind has open-sourced the BigGAN generators on TF Hub. Dig in to explore some of the most impressive GAN samples generated yet.
Research
Gradient Descent Finds Global Minima of Deep Neural Networks
This work has generated a lot of conversation among researchers. The paper proves that gradient descent achieves zero training loss in polynomial time for deep over-parameterized neural networks with residual connections (ResNets).
ImageNet/ResNet-50 Training in 224 Seconds
By applying two techniques, batch size control and 2D-Torus all-reduce, this paper claims to have successfully trained ImageNet/ResNet-50 in 224 seconds on the ABCI cluster without significant accuracy loss!
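The 2D-Torus scheme arranges workers in a grid and splits the gradient all-reduce into three phases: reduce-scatter along each row, all-reduce along each column, and all-gather along each row. A toy simulation of the data flow (grid size and gradient length below are made up; this models the arithmetic, not the paper's actual communication implementation):

```python
# Simulate a 2D-Torus all-reduce over an X-by-Y grid of workers,
# each holding a gradient vector of length D (D divisible by Y).
import numpy as np

X, Y, D = 2, 3, 6  # hypothetical: 2x3 worker grid, gradient length 6
rng = np.random.default_rng(0)
grads = rng.standard_normal((X, Y, D))  # grads[x, y] = worker (x,y)'s gradient

expected = grads.sum(axis=(0, 1))  # what a global all-reduce should produce

# Phase 1: reduce-scatter along each row — worker (x, y) ends up
# owning the row-sum of chunk y of the gradient.
chunk = D // Y
owned = np.empty((X, Y, chunk))
for x in range(X):
    for y in range(Y):
        owned[x, y] = grads[x, :, y * chunk:(y + 1) * chunk].sum(axis=0)

# Phase 2: all-reduce along each column — every worker in column y
# now holds the globally summed chunk y.
col_sum = owned.sum(axis=0)       # shape (Y, chunk)
owned[:] = col_sum[np.newaxis]

# Phase 3: all-gather along each row — every worker reassembles the
# full globally summed gradient from its row's chunks.
result = owned.reshape(X, D)

assert np.allclose(result[0], expected)
```

The appeal of the scheme is that each phase communicates only within a row or a column, so the per-step message count scales with the grid's side lengths rather than with the total number of workers.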
If you like what you are reading, please follow and recommend it to your friends or give a shoutout on Twitter! I’d love to hear your suggestions and recommendations @deephunt_in or in the comments below!