Recommended this week
- The 9 Deep Learning Papers You Need To Know About
- Using Deep Learning to Reconstruct High-Resolution Audio
- Understand PyTorch code in 10 minutes
- “Attention Is All You Need” from the folks at Google achieves a single-model state-of-the-art BLEU score of 41.0 on the WMT 2014 English-to-French translation task.
- “FreezeOut: Accelerate Training by Progressively Freezing Layers” yields a 20% training speedup with no loss in accuracy for ResNets, and a 20% speedup with a 3% loss in accuracy for DenseNets.
- “YellowFin and the Art of Momentum Tuning” — introduces an automatic tuner for both the momentum and the learning rate in SGD. It converges in fewer iterations than Adam on large ResNet and LSTM models, with speedups of up to 2.8x in synchronous and 2.7x in asynchronous settings.
- “Noisy Networks for Exploration” — DeepMind researchers introduce NoisyNet, a general RL exploration method that achieves superhuman performance in a wide range of Atari games.
- “Stick-Breaking Variational Autoencoders” — Published as a conference paper at ICLR 2017.
- “Natural Language Processing with Deep Learning” Stanford lectures (CS224n, Winter 2017), taught by Chris Manning and Richard Socher.
- “Generative Adversarial Networks for Beginners”, a practical tutorial with an accompanying article and blog post.
- “Deep Reinforcement Learning” course lecture videos from UC Berkeley.
- “DeepMind’s AI Learns Superhuman Relational Reasoning”, a two-minute video overview of the paper.
Thanks for reading! If you find the content valuable, you can find more on our Twitter, @ML_Review.
Please let us know if we missed anything interesting, or if you have ideas for how we can be more useful to you.
See you next week!