Can you link to the extra resources mentioned in the DLND somewhere?
Tahsin Mayeesha

Here is a list of the links I personally found most useful in the Deep Learning Foundations course.

I didn’t read all of them from A to Z; rather, I read either a complete article or just a small part (e.g. a chapter from the Deep Learning Book) when I didn’t understand some concept. For me, the best explanations come from Andrej Karpathy.

Covers all the main concepts of deep learning:
http://www.deeplearningbook.org/ (Deep Learning Book)

Udacity’s Deep Learning by Google (a very good introduction to deep learning):
https://www.udacity.com/course/deep-learning--ud730

Stanford CS231n: Convolutional Neural Networks for Visual Recognition, 15 lectures:
https://www.youtube.com/playlist?list=PLwQyV9I_3POsyBPRNUU_ryNfXzgfkiw2p

RL Course by David Silver (DeepMind), 10 lectures:
https://www.youtube.com/playlist?list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-

Here you can find papers on neural networks (a lot of theoretical material, but very useful):
https://arxiv.org/

Understanding backpropagation (the backbone of neural networks):
https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
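To make the idea concrete, here is a toy backprop example of my own (not code from the article): one hidden sigmoid layer, a sigmoid output, and squared-error loss, with the gradients computed by applying the chain rule layer by layer.

```python
import numpy as np

np.random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, x, t):
    h = sigmoid(W1 @ x)                 # hidden activations
    y = sigmoid(W2 @ h)                 # network output
    return h, y, 0.5 * np.sum((y - t) ** 2)

x = np.random.randn(3)                  # a random input
t = np.array([1.0])                     # target
W1 = np.random.randn(4, 3)              # input-to-hidden weights
W2 = np.random.randn(1, 4)              # hidden-to-output weights

h, y, loss = forward(W1, W2, x, t)

# Backward pass: chain rule, from the loss back to each weight matrix.
dy = (y - t) * y * (1 - y)              # through the output sigmoid
dW2 = np.outer(dy, h)                   # gradient w.r.t. W2
dz = (W2.T @ dy) * h * (1 - h)          # through the hidden sigmoid
dW1 = np.outer(dz, x)                   # gradient w.r.t. W1
```

The analytic gradients can be checked against finite differences, which is exactly the sanity check Karpathy recommends.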

Introduction to TensorFlow:
https://www.tensorflow.org/get_started/mnist/beginners

Convolutional Neural Networks:
https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/
https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

Recurrent Neural Networks:
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
https://r2rt.com/recurrent-neural-networks-in-tensorflow-i.html
https://medium.com/@shiyan/understanding-lstm-and-its-diagrams-37e2f46f1714
http://mourafiq.com/2016/05/15/predicting-sequences-using-rnn-in-tensorflow.html
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
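The core recurrence these articles explain fits in a few lines. This is a bare-bones vanilla RNN cell (my own illustration of the equation h_t = tanh(W_hh h_{t-1} + W_xh x_t + b), not code from any of the posts):

```python
import numpy as np

np.random.seed(0)

hidden, in_dim = 4, 3
W_hh = np.random.randn(hidden, hidden) * 0.1   # state-to-state weights
W_xh = np.random.randn(hidden, in_dim) * 0.1   # input-to-state weights
b = np.zeros(hidden)

def rnn_step(h_prev, x):
    # One step of a vanilla RNN: new state mixes old state and new input.
    return np.tanh(W_hh @ h_prev + W_xh @ x + b)

# Unroll over a short sequence; the hidden state carries context forward.
h = np.zeros(hidden)
for x in np.random.randn(5, in_dim):
    h = rnn_step(h, x)
```

LSTMs (the Colah post) replace this single tanh update with gated updates to fight vanishing gradients, but the unrolling idea is the same.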

Language translation, sequence-to-sequence, NLP:
https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa
https://www.tensorflow.org/tutorials/seq2seq
https://www.youtube.com/watch?v=G5RY_SUJih4 (talk from Quoc Le, Google)
http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/

Reinforcement learning:
https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0
https://github.com/devsisters/DQN-tensorflow
https://medium.com/@tuzzer/cart-pole-balancing-with-q-learning-b54c6068d947
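The “Q-learning with tables” idea from the first link is small enough to sketch directly. This is my own toy environment (a 1-D corridor with a reward at the far end), not code from the posts:

```python
import numpy as np

np.random.seed(1)

n_states, n_actions = 5, 2               # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))      # the Q-table
alpha, gamma, eps = 0.5, 0.9, 0.1

def env_step(s, a):
    # Deterministic corridor: reward 1 only on reaching the last state.
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = (s2 == n_states - 1)
    return s2, (1.0 if done else 0.0), done

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < eps:
            a = np.random.randint(n_actions)
        else:
            a = int(np.argmax(Q[s]))
        s2, r, done = env_step(s, a)
        # Q-learning update: move Q[s, a] toward r + gamma * max_a' Q[s2, a'].
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
```

After training, the greedy policy (argmax over each row of the table) walks straight toward the reward. The DQN repo above replaces the table with a neural network, but the update rule is the same.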

Autoencoders:
https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
http://kvfrans.com/variational-autoencoders-explained/
http://blog.fastforwardlabs.com/2016/08/22/under-the-hood-of-the-variational-autoencoder-in.html
https://jmetzen.github.io/2015-11-27/vae.html
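The two pieces of machinery these VAE posts keep returning to, the reparameterization trick and the closed-form KL term, fit in a few lines. A minimal sketch of the math (with made-up encoder outputs, not code from the tutorials):

```python
import numpy as np

np.random.seed(0)

mu = np.array([0.5, -0.3])        # pretend encoder output: mean
log_var = np.array([-1.0, 0.2])   # pretend encoder output: log-variance

# Reparameterization trick: z = mu + sigma * epsilon, so the sampling
# step stays differentiable with respect to mu and log_var.
epsilon = np.random.randn(2)
z = mu + np.exp(0.5 * log_var) * epsilon

# KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians; this is
# the regularization half of the VAE loss (the other half is reconstruction).
kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
```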

Generative Adversarial Networks:
http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/
https://blog.openai.com/generative-models/
http://cs.stanford.edu/people/karpathy/gan/
https://medium.com/@awjuliani/generative-adversarial-networks-explained-with-a-classic-spongebob-squarepants-episode-54deab2fce39
