PyTorch — A Savior Deep Learning Framework

Amirsina Torfi
Machine Learning Mindset
7 min read · Nov 29, 2019


There are many Deep Learning frameworks out there: PyTorch, TensorFlow, and Keras, to name a few. Are you stuck picking one? Do you worry that you will choose one and, six months later, regret not choosing another? Admittedly, it's not an easy choice, so some clarification is in order. I am going to share why I believe PyTorch is currently the best choice and how it saved me a lot of time.

Paradoxically, you may find that the majority of my successful open-source work is implemented in TensorFlow. I developed the TensorFlow Online Course, which is currently one of the top-20 TensorFlow GitHub projects worldwide. So even with that background, I recommend PyTorch. You may wonder, “why on earth?” Well, I am not a hypocrite. I recently picked PyTorch over TensorFlow. In this article, I am going to discuss some of the most important PyTorch advantages that led me to set aside a framework as famous as TensorFlow.

An Introduction to PyTorch

First, I should say that PyTorch is a Machine Learning framework. Since Deep Learning falls under the umbrella of Machine Learning, I like to say PyTorch is a Deep Learning framework as well; it facilitates Deep Learning more than any other tool! And PyTorch is a Deep Learning framework not just by that reasoning, but also because it is commonly used for Deep Learning applications.

Although there are numerous other famous Deep Learning frameworks such as TensorFlow, PyTorch usage has increased drastically in recent years due to its ease of use. As of now, interest in PyTorch is growing faster than interest in any other deep learning framework, for many reasons. I start with a quote from the official PyTorch blog:

PyTorch continues to gain momentum because of its focus on meeting the needs of researchers, its streamlined workflow for production use, and most of all because of the enthusiastic support it has received from the AI community.

I personally disagree with some of those claims! I am not saying they are not valid; they are just not unique reasons for PyTorch standing at the top of the competition. In terms of (1) the enthusiastic support it has received from the AI community and (2) its streamlined workflow for production use, TensorFlow might even be better as of now! However, yes, PyTorch definitely serves researchers far better than TensorFlow and other frameworks, again, because of its ease of use.

Why PyTorch?

Answering this question is essential, even though the answer is largely based on individual experience. Still, there are aspects that no one can deny. I will go through the reasons that users commonly give and argue with some of them.

You code with Python in PyTorch: Yes, this is a crucial advantage if you compare PyTorch with frameworks that do not use Python, such as Torch, which is built on Lua. But if you compare it with TensorFlow or Keras, you do not see any advantage: many frameworks use Python! So it is not a unique advantage.

Dynamic Graph Computation: Definitely a HUGE PLUS! Building deep learning models on top of dynamic graphs allows us to run the workflow and compute variables instantly, which is great for debugging! Compared to TensorFlow, this characteristic of PyTorch saved my eyes! However, TensorFlow 2.0 comes with native eager execution, which is supposed to be similar to PyTorch's. So until very recently, this was a unique advantage. To further emphasize this aspect, I would like to provide a quote (a short code sketch follows it):

Because Pytorch allowed us, and our students, to use all of the flexibility and capability of regular python code to build and train neural networks, we were able to tackle a much wider range of problems. An additional benefit of Pytorch is that it allowed us to give our students a much more in-depth understanding of what was going on in each algorithm that we covered. With a static computation graph library like Tensorflow, once you have declaratively expressed your computation, you send it off to the GPU where it gets handled like a black box. But with a dynamic approach, you can fully dive into every level of the computation, and see exactly what is going on. We believe that the best way to learn deep learning is through coding and experiments, so the dynamic approach is exactly what we need for our students.

Jeremy Howard at Fast.ai [Link]
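To make this concrete, here is a minimal sketch (my own illustration, not code from the post or the quote) of eager, define-by-run execution in PyTorch:

```python
import torch

# Operations execute immediately (define-by-run), so intermediate
# values can be inspected with plain print() or a debugger.
x = torch.randn(3, requires_grad=True)
y = x * 2
print(y)  # values are available right away; no session or graph compile

# Ordinary Python control flow participates in the graph.
while y.norm() < 100:
    y = y * 2
print(y)
```

Note the `while` loop: in a static-graph framework, data-dependent control flow like this needs special graph operations, while here it is just Python.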

Faster in Training: Despite some available evidence, you do not need to believe this! There is no absolute proof. Perhaps in some setups PyTorch does better than the others, BUT we cannot say that for sure! To gather a fair empirical data point, I personally conducted some experiments using the ResNet50, VGG16, and Inception-v3 models (a sketch of the measurement loop follows the list). The setup is as below:

  • For each model, I ran training 10 times and report the mean number of images processed per second.
  • I used one GeForce GTX 1080 Ti GPU.
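For illustration, here is a minimal sketch of how such a throughput measurement might be structured. The batch size, iteration count, and learning rate below are assumptions for the sake of example, not my exact settings:

```python
import time

import torch
import torchvision.models as models

# NOTE: batch size, iteration count, and learning rate are
# illustrative assumptions, not the exact settings from my runs.
device = torch.device("cuda")
model = models.resnet50().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

batch_size, iters = 32, 100
images = torch.randn(batch_size, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (batch_size,), device=device)

torch.cuda.synchronize()  # make sure setup work is finished
start = time.time()
for _ in range(iters):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
torch.cuda.synchronize()  # wait for queued GPU work before timing
print(f"{batch_size * iters / (time.time() - start):.1f} images/sec")
```

The two `torch.cuda.synchronize()` calls matter: GPU kernels run asynchronously, so without them the timer would stop before the work actually finished.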

My results are as below:

[Figure: mean images processed per second for ResNet50, VGG16, and Inception-v3; the original chart did not survive in text form.]

Distributed Training: PyTorch has native support for asynchronous execution of operations, which is a thousand times easier than in TensorFlow. Of course, you can do the same in TensorFlow, BUT it is damn hard, at least for now. Simply speaking, distributed training makes things very fast. In PyTorch, you can implement it in two lines of code, as below:
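The two-line snippet referenced here did not survive in text form; below is a minimal reconstruction using torch.nn.DataParallel, one way to get multi-GPU data parallelism in roughly two lines (the tiny linear model is just a stand-in for a real network):

```python
import torch
import torch.nn as nn

# A tiny linear model stands in for a real network (any nn.Module works).
model = nn.Linear(128, 10)

# The "two lines": wrap the model for multi-GPU data parallelism
# and move it to the GPU(s).
model = nn.DataParallel(model)
model = model.cuda()

# Forward passes now split the input batch across available GPUs.
out = model(torch.randn(64, 128).cuda())
```

For serious multi-machine work, PyTorch's torch.nn.parallel.DistributedDataParallel is the heavier-duty option, but it takes more than two lines to set up.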

Excellent documentation and tutorials: As opposed to TensorFlow, which has awful documentation, with PyTorch you can learn almost everything quickly and from scratch using the official tutorials. This is a great advantage. I personally do NOT care which framework has more features. What I care about is which one I can learn faster and do better with. Excellent, insightful documentation is what I needed, and that is what I got from PyTorch. For example, refer to the tutorial “AUTOGRAD: AUTOMATIC DIFFERENTIATION” to see how easily you can learn rather complicated material.
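As a taste of what that tutorial covers, here is a minimal autograd sketch in the spirit of the official example:

```python
import torch

# requires_grad=True tells autograd to record operations on this tensor.
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = (y * y * 3).mean()

# Backpropagate; gradients accumulate into x.grad.
z.backward()
print(x.grad)  # dz/dx: every entry is 4.5 here
```

That is the whole idea: mark tensors for tracking, compute, call backward(), and read the gradients. The tutorial builds from exactly this kind of example.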

The Other Side of the Coin

I have talked a lot about how great PyTorch is. But not so fast! PyTorch is definitely not a cure for everything (a panacea, so to speak!). Say you are a Deep Learning practitioner or expert: before implementing something, you need to learn more about it. You may agree that “the best way of learning is learning by doing!” One of the best practices in that regard is to read and try to reproduce what others have done. Thanks to the open-source community, you can find the majority of it just by searching Google and especially GitHub. And we are talking about FREE stuff. BUT how is this related to the previous statement, “not so fast”?

Well, the community of open-source developers is huge, and at this moment the majority of them use TensorFlow. Even if the majority change their minds, TensorFlow will possibly never fade away! So I want to emphasize the following fact:

I am very biased toward PyTorch. BUT, no matter what framework you pick, you need to know both PyTorch and TensorFlow at some level. At the very least, you should understand both. And I am assuming you would like to be an expert in Deep Learning so that others pay for your expertise. Otherwise, you do not need to think about any of this!

Conclusion

You may expect the conclusion of this article to be: pick PyTorch as the best Deep Learning framework. Well, I guess so. BUT that is NOT the whole story. I have talked about my experiences, and these are my personal views. If you want to learn and implement things in an easy manner, PyTorch is your savior. But a lot of people use TensorFlow, and you need to be able to understand what they are doing. So the bad news is, you cannot avoid learning TensorFlow. The good news is you can avoid TensorFlow for the painful part: implementation. PyTorch will save you time! Stick with it, unless you are an expert in BOTH PyTorch and TensorFlow and seriously believe you are more comfortable with TensorFlow. However, it is very unlikely that you are an expert in both and still like TensorFlow more!
