Machine Learning Weekly Review №1

A source of the latest credible papers, videos, and projects on machine learning, for scientists and engineers.

Recommended this week


  1. The 9 Deep Learning Papers You Need To Know About
  2. Using Deep Learning to Reconstruct High-Resolution Audio
  3. Understand PyTorch code in 10 minutes

Papers

  1. “Attention Is All You Need” from Google achieves a single-model state-of-the-art BLEU score of 41.0 on English-to-French translation.
  2. “FreezeOut: Accelerate Training by Progressively Freezing Layers” yields a 20% speedup with no loss of accuracy for ResNets, and a 20% speedup with a 3% loss in accuracy for DenseNets.
  3. “YellowFin and the Art of Momentum Tuning” — introduces an automatic tuner for both momentum and learning rate in SGD. Converges in fewer iterations than Adam on large ResNet and LSTM models, with a speedup up to 2.8x in synchronous and 2.7x in asynchronous settings.
  4. “Noisy Networks for Exploration” — DeepMind introduces NoisyNet, a general exploration method for RL that achieves superhuman performance in a wide range of Atari games.
  5. “Stick-Breaking Variational Autoencoders” — replaces the VAE’s Gaussian prior with a nonparametric stick-breaking process. Published as a conference paper at ICLR 2017.
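The core operation behind the Transformer in “Attention Is All You Need” is scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch (shapes, names, and the random inputs below are our own, purely illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    return softmax(scores, axis=-1) @ V  # weighted average of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 queries of dimension d_k = 4
K = rng.standard_normal((5, 4))  # 5 keys
V = rng.standard_normal((5, 4))  # 5 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query
```

Each output row is a convex combination of the value rows, with weights given by the softmaxed query–key similarities; the √d_k scaling keeps the dot products from saturating the softmax.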
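For context on what YellowFin tunes: SGD with momentum has exactly two hyperparameters, the learning rate and the momentum coefficient, and YellowFin adapts both automatically. A minimal sketch of the underlying update on a toy quadratic (function, names, and values here are illustrative, not from the paper):

```python
# Plain SGD with momentum, the update whose lr/mu pair YellowFin tunes.
# Minimizes f(x) = x^2, whose gradient is 2x.
def sgd_momentum(grad, x0, lr=0.1, mu=0.9, steps=200):
    x, v = x0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)  # velocity accumulates past gradients
        x = x + v
    return x

x_star = sgd_momentum(lambda x: 2 * x, x0=5.0)
print(x_star)  # close to the minimizer x = 0
```

Hand-tuning lr and mu per model is what the paper's automatic tuner replaces.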

Videos

  1. “Natural Language Processing with Deep Learning” Stanford lectures (Winter 2017), taught by Chris Manning and Richard Socher.
  2. “Generative Adversarial Networks for Beginners” a practical tutorial accompanied by an article and a blog post.
  3. “Deep Reinforcement Learning” course lecture videos from UC Berkeley.
  4. “DeepMind’s AI Learns Superhuman Relational Reasoning” a two-minute overview of the paper.

Libraries

  1. Tensor2Tensor — a system from the Google Brain team for training deep learning models in TensorFlow. Includes a library of state-of-the-art datasets and models.
  2. Keras-vis — a neural network visualization toolkit for Keras.
  3. BayesSearchCV — a notebook showcasing the new Bayesian hyperparameter search in scikit-optimize.

Thanks for reading! If you find this content valuable, you will find more on our Twitter @ML_Review.

Please tell us if we missed anything interesting, or share any ideas on how we can be more useful to you.

See you next week!