Up to Speed on Deep Learning: June Update

By Isaac Madan

At the end of April, we published an article on getting up to speed on deep learning, which included 20+ resources for catching up on rapid advancements in the field. Much has happened since then, so we thought we'd pull together a few of the excellent resources that have emerged this June. As always, this list is not comprehensive, so let us know if there's something we should add, or if you're interested in discussing this area further.

Announcements

Generative models by OpenAI. The team at OpenAI shares five new projects that enhance or use generative models, a class of unsupervised machine learning techniques:

  • Improved techniques for training generative adversarial networks (paper).
  • Improved techniques for training variational auto-encoders (paper).
  • Improved techniques for interpretable representation learning (paper).
  • Curiosity-driven exploration in deep reinforcement learning (paper).
  • New approach for imitation learning (paper).

Facebook introduces DeepText, its deep learning engine that understands textual content on Facebook with near-human accuracy, processing several thousand posts per second in more than 20 languages.

Google DeepMind trains an agent to play Montezuma’s Revenge, a game that requires forward planning, using intrinsic motivation techniques (video). Read the paper, Unifying Count-Based Exploration and Intrinsic Motivation, here.

NVIDIA announces its GPU Ventures Program in which it “provides marketing, operational, financial and other support to young ambitious companies founding their businesses around NVIDIA technologies.” They plan to make $500K to $5M investments in these startups. Consider applying if you’re working on a deep learning startup.

DARPA announces its Data-Driven Discovery of Models program, which is intended to help non-experts build their own models using automated tools that facilitate data science. In effect, it leverages machine learning to do machine learning.

Explanation & Review

Neural Network Architectures by Eugenio Culurciello. A history of neural network design over the past few years, to help us better craft network architectures in the future.

Deep Learning Trends @ ICLR 2016 by Tomasz Malisiewicz. A review of the 2016 International Conference on Learning Representations (ICLR), highlighting the important trends and papers that emerged.

Most Cited Deep Learning Papers by Terry Um. A curated list of the most cited deep learning papers since 2010, along with interesting newly released papers, earlier classics from 1997–2009, and distinguished researchers.

Deep Reinforcement Learning by Andrej Karpathy. An overview of reinforcement learning that explains the field in the context of Pong, a simple reinforcement learning task.

Tips/Tricks in Deep Neural Networks by Xiu-Shen Wei. Must-know implementation details for building and training deep neural networks. Xiu-Shen explains the following important concepts: data augmentation; pre-processing of images; initialization of networks; tips during training; selection of activation functions; diverse regularizations; insights found from figures; and methods for ensembling multiple deep networks.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.