Starting with Deep Learning? Know the important resources

Vibhor Gautam · Published in Analytics Vidhya · May 22, 2021 · 4 min read

For beginners, deep learning can be a complex and intimidating area. As soon as you start exploring it, you’ll come across terms like hidden layers, convolutional neural networks, and backpropagation.

It’s not easy — particularly if you take an ad hoc approach and don’t cover the fundamentals first. You’ll be as lost as a tourist in a foreign city without a map!

If I were to begin learning deep learning today, this is how I would start.

Deep Learning Specialization

Surprisingly, Andrew Ng’s deep learning specialization is, in my view, the best place to start.
Andrew Ng has a natural talent for teaching, and he does an excellent job of progressing from the fundamentals to image and text processing with deep learning. The specialization includes the following courses:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

Above all, I admire how he teaches you to refine and organize your projects before moving on to more complex architectures. This bottom-up approach is very beginner-friendly, in my opinion. The specialization also does an outstanding job of introducing attention, a crucial concept in modern architectures.

One of the specialization’s drawbacks, in my view, is that it is taught in TensorFlow. TensorFlow has improved considerably, so this isn’t a major disadvantage, but I prefer PyTorch. I completed all of my assignments in PyTorch, and I suggest you do the same. This will not only give you exposure to both TensorFlow and PyTorch code, but it will also ensure that you grasp the concepts, because you won’t be able to rely on any of the provided code.

Practical Deep Learning for Coders

Jeremy Howard and Rachel Thomas have made a significant contribution to making deep learning more available.

They take a top-down approach, which means they start at the application level and work their way down to the details. If you have no prior experience with deep learning, I find this approach a little perplexing, so I suggest you begin with Andrew Ng’s specialization. You might think that, as an introductory course, it isn’t worth taking in addition to Andrew Ng’s, but I find the teaching style and subject matter to be very different.

Although you will undoubtedly encounter topics that recap Andrew’s courses, I believe that seeing certain subjects in a new light is extremely beneficial. This course also covers deep learning in a practical sense, focusing on techniques the instructors have found useful when applying deep learning to real-world problems that don’t require Google-scale compute.

The course employs a combination of PyTorch and fastai, a library built on top of PyTorch. Fastai is a high-level library that lets you apply industry best practices and cutting-edge models with only a few lines of code. While that is nice, I still prefer writing plain PyTorch code, because it helps you better understand what is going on beneath the surface and is usually not too difficult.

If you like this course, you might want to consider continuing on to the second part, which is more advanced. The course essentially guides you through the process of creating the fastai library, which necessitates a deeper dive into the code and fundamentals.

Implement a Paper

You should now have a firm understanding of the foundations of deep learning. You’ve been trained by two of the best teachers in the world in two separate ways.

I believe that most people get tripped up by deep learning because they don’t spend enough time learning the basics. I’m not really talking about the algorithms’ mathematics. I’m referring to the fundamental ideas, components, and problems. Go back and revisit the concepts from the previous two courses if you don’t think you completely understood them.

If you’re feeling confident, go ahead and implement a paper.

This will sound frightening, but it is almost certainly not as frightening as you believe. One of the reasons I like PyTorch so much is that I can almost copy an architecture straight out of a paper, type it into Python with PyTorch, and it works.

GoogLeNet is one paper that I would recommend. This paper is a little older, so it won’t use any unfamiliar building blocks, but it’s well written, and I don’t think it’s covered in any of the courses in great detail.
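To illustrate how directly a paper’s architecture can translate into PyTorch, here is a sketch of a single Inception module, the building block GoogLeNet repeats throughout the network: four parallel branches (1×1, 1×1→3×3, 1×1→5×5, and pool→1×1) whose outputs are concatenated along the channel dimension. The class name and constructor arguments below are my own choices, not from the paper; the channel counts in the usage line are the ones the paper lists for the “inception (3a)” stage.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """One Inception module: four parallel branches, concatenated on channels."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        # Branch 1: plain 1x1 convolution
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction, then 3x3 (padding keeps spatial size)
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction, then 5x5
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: 3x3 max-pool, then 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Inception (3a): 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels
block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))
```

Stacking modules like this one (plus the stem, pooling stages, and classifier head from the paper) gives you the full network, which is why I say the translation from paper to code is nearly mechanical.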

Sit down with your chosen paper and simply read it until you understand it, then convert it to code and test it on a standard dataset to check that your implementation works. This will not only help you understand the paper, but it will also give you the courage to take on other papers without fear.

Choose Your Own Adventure

The final step in this process is to keep finding new papers and putting them into practice. Deep learning moves so quickly that staying up to date on state-of-the-art architectures requires you to be comfortable reading and implementing papers.

With the courage you gained from implementing your first paper, don’t be afraid to sit down and digest another. That, in my opinion, is the best way to keep learning. Others may have written extremely helpful blog posts dissecting the most impactful papers. Make the most of them!

If you can’t find one, make one once you’ve figured it out!

Then, over time, you’ll gradually build on your fundamental deep learning skills until others regard you as an expert and are impressed by your breadth of knowledge.
