Getting Up to Speed on Deep Learning
For good reason, deep learning is increasingly capturing mainstream attention. Just recently, on March 15th, Google DeepMind’s AlphaGo AI — technology based on deep neural networks — beat Lee Sedol, one of the world’s best Go players, in a professional Go match.
Behind the scenes, deep learning is an active, fast-paced research area that’s proliferating quickly among some of the world’s most innovative companies. We are asked frequently about our favorite resources to get up to speed on deep learning and follow its rapid developments. As such, we’ve outlined below some of our favorite resources. While certainly not comprehensive, there’s a lot here, and we’ll continue to update this list — if there’s something we should add, let us know.
Books and Courses
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016). A comprehensive, in-depth book on the core concepts of machine learning and deep learning.
Course notes from Stanford CS 231N: Convolutional Neural Networks for Visual Recognition. This course is a deep dive into the details of neural network architectures, with a focus on learning end-to-end models for visual recognition tasks, particularly image classification. During the 10-week course, students learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
Course notes from Stanford CS 224D: Deep Learning for Natural Language Processing. In this class, students will learn to understand, implement, train, debug, visualize and potentially invent their own neural network models for a variety of language understanding tasks.
Blogs, Papers, and Articles
Deep Learning in a Nutshell by Tim Dettmers, via NVIDIA (2015). These articles are digestible and do not rely heavily on math.
- Part 1: A gentle introduction to deep learning that covers core concepts and vocabulary.
- Part 2: History of deep learning and methods of training deep learning architectures quickly and efficiently.
- Part 3: Sequence learning with a focus on natural language processing.
Podcast with Yoshua Bengio: The Rise of Neural Networks and Deep Learning in Our Everyday Lives. An exciting overview of the power of neural networks as well as their current influence and future potential.
Deep learning reading list. A thorough list of academic survey papers on the subjects of reinforcement learning, computer vision, NLP & speech, disentangling factors, transfer learning, practical tricks, sparse coding, foundation theory, feedforward networks, large-scale deep learning, recurrent networks, hyperparameters, optimization, and unsupervised feature learning.
Christopher Olah’s blog. Christopher has in-depth, well-explained articles with great visuals on neural networks, visualization, and convolutional neural networks.
Adrian Colyer’s blog. Adrian selects and reviews an interesting/influential/important paper from the world of CS every weekday morning.
Academic papers & presentations:
- Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, and Pascal Vincent (2012). This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks.
- Deep Learning of Representations: Looking Forward by Yoshua Bengio (2013). This paper examines some of the challenges ahead in the field of deep learning as well as some forward-looking insight on research direction.
- Deep Learning Tutorial by Yann LeCun and Marc’Aurelio Ranzato (2013). Slides from a talk at the 2013 International Conference on Machine Learning in Atlanta. Filled with many great slides, diagrams, and charts that explain concepts visually.
- Deep Learning in Neural Networks: An Overview by Jürgen Schmidhuber (2014). A summary of 900 influential deep learning papers. Whew. Adrian summarizes it here.
- Natural Language Processing (almost) from Scratch by Collobert et al. (2011). This paper presents a multilayer neural network architecture that can handle a number of NLP tasks with both speed and accuracy.
- Practical recommendations for gradient-based training of deep architectures by Yoshua Bengio (2012). This paper provides a practical guide on how to select some of the most commonly used hyperparameters for deep learning models.
- TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems by Google Research (2015). The whitepaper describing TensorFlow's design and implementation.
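The hyperparameter guidance above (e.g., Bengio's practical recommendations) is easiest to appreciate with a toy example. Below is a minimal, illustrative sketch (not taken from any of the papers): plain gradient descent fitting a line, where the learning rate is the hyperparameter being tuned. Too large a learning rate diverges; too small converges slowly.

```python
import numpy as np

# Toy data: y = 2x + 1 with no noise, so the optimum is known exactly.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0

def train(lr, steps=500):
    """Gradient descent on mean squared error for a linear model w*x + b.

    `lr` is the learning rate -- the kind of hyperparameter whose choice
    the practical-recommendations literature discusses at length.
    """
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x + b - y
        # Gradients of mean squared error with respect to w and b.
        w -= lr * 2.0 * np.mean(err * x)
        b -= lr * 2.0 * np.mean(err)
    return w, b

w, b = train(lr=0.1)
print(round(w, 2), round(b, 2))  # converges close to w=2.0, b=1.0
```

With `lr=0.1` this converges in a few hundred steps; try `lr=2.0` and the updates overshoot and blow up, which is exactly the failure mode a hyperparameter search is meant to avoid.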
Deep learning Google Group. Where deep learning enthusiasts and researchers hang out and share the latest news.
Deep learning research groups. A list of many of the academic and industry labs focused on deep learning.
Conferences
- International Conference on Learning Representations (ICLR). May 2–4, 2016 at the Caribe Hilton, San Juan, Puerto Rico. Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio, and NLP, there had been no venue for researchers who share a common interest in the topic; the goal of ICLR has been to help fill this void. Yoshua Bengio and Yann LeCun are General Chairs.
- International Conference on Machine Learning. June 19-24, 2016 in New York City, NY. ICML is the leading international machine learning conference and is supported by the International Machine Learning Society (IMLS).
- Conference on Neural Information Processing Systems (NIPS). December 5–10, 2016 in Barcelona, Spain. A single-track machine learning and computational neuroscience conference that includes invited talks, demonstrations and oral and poster presentations of refereed papers.
- GPU Technology Conference (GTC). April 4–7, 2016 in San Jose, CA; there are others later in the year in other countries. Presented by NVIDIA, GTC comprises the annual conference, a year-long webinar series, and workshops that connect the global community of developers, researchers, and scientists through unique educational and networking opportunities.
Frameworks and Tools
Deep Learning Frameworks in VentureBeat (2015). An overview of major deep learning libraries, as of December 2015.
TensorFlow neural network playground. Play with neural networks visually in your browser to get a feel for what they are and what they do.
TensorFlow tutorial. Google’s tutorial that explains TensorFlow and MNIST, as well as the basics of machine learning and deep learning networks. This is in Python.
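To give a flavor of what the beginner MNIST tutorial builds, here is a rough numpy sketch (not TensorFlow code, and using tiny made-up data rather than real MNIST images) of the softmax-regression model at its core: predictions are softmax(Wx + b), scored with cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_classes = 4, 3
X = rng.normal(size=(8, n_features))          # 8 toy "images"
labels = rng.integers(0, n_classes, size=8)   # toy class labels

# Model parameters, initialized to zero as in the tutorial.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(X @ W + b)
# Cross-entropy loss: negative mean log-probability of the correct class.
loss = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
print(round(loss, 3))  # with zero weights every class gets 1/3, so loss = ln 3 ≈ 1.099
```

The tutorial then minimizes this loss with gradient descent; TensorFlow's contribution is computing those gradients automatically and running the training on CPUs or GPUs.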
Debugging neural networks by Russell Stewart. Neural networks are hard to debug, which steepens the learning curve of implementing deep learning. Russell offers some great insight.
Theano. Numerical computation library for Python (faster and more mature than TensorFlow).
Lasagne. Lightweight Python library for deep learning (built on Theano).
Caffe. A deep learning framework developed at UC Berkeley, widely used for convolutional networks.
Model Zoo. Pretrained Caffe models for a variety of tasks.
By Isaac Madan and David Dindi. Isaac is an investor at Venrock (email). David is a grad student at Stanford and TA for CS 224D, mentioned above (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.
Subscribe to our email newsletter here.
Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.