PyTorch vs TensorFlow - Which Is The Better Framework?

Sayantini Deb · Published in Edureka · Oct 18, 2018

This comparison article on PyTorch vs TensorFlow is intended to be useful for anyone considering starting a new project, switching from one Deep Learning framework to the other, or learning about both. The focus is on programmability and flexibility when setting up the training and deployment components of the Deep Learning stack.

Let’s look at the factors we will be using for the comparison:

  • Ramp-Up Time
  • Graph Construction And Debugging
  • Coverage
  • Serialization
  • Deployment
  • Documentation
  • Device Management
  • Custom Extensions

So let the battle begin!

I will start this article by comparing both frameworks on the basis of Ramp-Up Time.

Ramp-Up Time:

PyTorch is essentially NumPy with the ability to make use of the graphics card (GPU).

Since something as simple as NumPy is the prerequisite, PyTorch is very easy to learn and grasp.

With TensorFlow, as we all know, the graph is compiled first and only then do we get the actual graph output.

So where is the dynamism here? Also, TensorFlow adds the dependency that the compiled graph is run on the TensorFlow execution engine. For me, the fewer dependencies the better overall.

Back to PyTorch: the code executes quickly, turns out to be very efficient overall, and there are no extra framework-specific concepts to learn.

With TensorFlow, we need concepts such as Variable scoping, placeholders and sessions. This also leads to more boilerplate code, which I’m sure none of the programmers here like.
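To make the difference concrete, here is a minimal sketch of the same matrix multiplication written in TensorFlow 1.x-style code (placeholders, a session, explicit initialization) and in PyTorch, where each line runs immediately like ordinary NumPy code. The shapes and values are purely illustrative.

```python
# TensorFlow 1.x style: build a static graph, then run it inside a session
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3))
w = tf.Variable(tf.random_normal((3, 1)))
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})

# PyTorch: the same computation runs eagerly, with no graph compilation or session
import torch

x_t = torch.tensor([[1.0, 2.0, 3.0]])
w_t = torch.randn(3, 1)
y_t = x_t @ w_t  # executed immediately
```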

So in my opinion, PyTorch wins this one!

Graph Construction And Debugging:

Beginning with PyTorch, the clear advantage is the dynamic nature of the entire process of creating a graph.

The graph is built up line by line: as each line of code executes, the corresponding part of the graph is created.

So the graph is built entirely at run time, and I like PyTorch a lot for this.

With TensorFlow, graph construction is static: the graph must first be compiled and then run on the execution engine I mentioned earlier.

PyTorch makes our lives that much easier because pdb, the standard Python debugger, can be used directly. There is no need to learn another debugger from scratch.
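As a minimal sketch (the tiny module below is purely illustrative), a breakpoint can be dropped straight into a model's forward pass:

```python
import pdb

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        h = self.fc(x)
        pdb.set_trace()  # drops into the standard Python debugger mid forward pass
        return torch.relu(h)

TinyNet()(torch.randn(1, 4))  # execution pauses at the breakpoint
```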

Well, with TensorFlow you need to put in a little extra effort. There are two options for debugging:

  • You will need to learn the TF debugger.
  • Request the variables you want to inspect from the session.

Well, PyTorch wins this one as well!

Coverage:

TensorFlow natively supports certain operations such as:

1. Flipping a tensor along a dimension

2. Checking a tensor for NaN and infinity

3. Fast Fourier transforms
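For illustration, here is a small TF 1.x-style sketch of those operations (the tensor values are made up):

```python
import tensorflow as tf

t = tf.constant([[1.0, float("nan"), float("inf")]])

flipped = tf.reverse(t, axis=[1])              # flip along a dimension
nan_mask = tf.is_nan(t)                        # elementwise NaN check
inf_mask = tf.is_inf(t)                        # elementwise infinity check
spectrum = tf.fft(tf.cast(t, tf.complex64))    # fast Fourier transform

with tf.Session() as sess:
    print(sess.run([flipped, nan_mask, inf_mask, spectrum]))
```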

TensorFlow also has the contrib package, which can be used to build more models.

It provides higher-level functionality and gives you a wide spectrum of options to work with.

PyTorch, as of now, has fewer of these features implemented, but I am sure the gap will be bridged soon given all the attention PyTorch is attracting.

Also, PyTorch is not yet as popular as TensorFlow among freelancers and learners. Well, this is subjective, but it is what it is, guys!

TensorFlow nailed it in this round!

Serialization:

Well, it's no surprise that saving and loading models is fairly simple with both frameworks.

PyTorch has a simple API that can either save just the weights of a model (its state dict) or pickle the entire model object if you prefer.
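A minimal sketch of both options (the model and file names are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a trained model

# Option 1: save only the weights (the usual recommendation)
torch.save(model.state_dict(), "weights.pt")
model.load_state_dict(torch.load("weights.pt"))

# Option 2: pickle the entire model object
torch.save(model, "model.pt")
restored = torch.load("model.pt")
```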

However, the major advantage of TensorFlow is that the entire graph can be saved as a protocol buffer, and yes, this includes parameters as well as operations.

The graph can then be loaded in other supported languages such as C++ or Java, depending on the requirement.

This is critical for deployment stacks where Python is not an option. Also, this can be useful when you change the model source code but want to be able to run old models.
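As a rough sketch of what that export looks like in TF 1.x-era code (the tiny graph and export directory are illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3), name="x")
w = tf.Variable(tf.random_normal((3, 1)))
y = tf.matmul(x, w, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes the graph and parameters to disk as protocol buffers; the SavedModel
    # can later be loaded from C++ or Java runtimes.
    tf.saved_model.simple_save(sess, "export_dir", inputs={"x": x}, outputs={"y": y})
```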

Well, it is as clear as day, TensorFlow got this one!

Deployment:

For small-scale server-side deployments, both frameworks are easy to wrap in e.g. a Flask web server.
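For example, here is a minimal, purely illustrative sketch of wrapping a PyTorch model in a Flask endpoint (the model and route name are made up):

```python
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

app = Flask(__name__)
model = nn.Linear(4, 2)  # stand-in for a trained model
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [1.0, 2.0, 3.0, 4.0]
    with torch.no_grad():
        scores = model(torch.tensor([features]))
    return jsonify(prediction=scores.tolist())

if __name__ == "__main__":
    app.run()
```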

For mobile and embedded deployments, TensorFlow works really well. This is more than can be said of most other Deep Learning frameworks, including PyTorch.

Deploying to Android or iOS does require a non-trivial amount of work in TensorFlow, but at least you don't have to rewrite the entire inference portion of your model in Java or C++.

Besides performance, one of the notable features of TensorFlow Serving is that models can be hot-swapped easily without bringing the service down.

I think I will give it to TensorFlow for this round as well!

Documentation:

Well, needless to say, I have found everything I need in the official documentation of both frameworks.

The Python APIs are well documented and there are enough examples and tutorials to learn either framework.

But one tiny thing that grabbed my attention is that the PyTorch C library is mostly undocumented.

However, this only matters when writing a custom C extension, or perhaps when contributing to the framework itself.

To sum it up, I can say we're stuck with a tie here, guys!

However, if you think you lean towards something, head down to the comments section and express your views. Let’s engage there!

Device Management:

Device management in TensorFlow is a breeze — You don’t have to specify anything since the defaults are set well.

For example, TensorFlow automatically assumes you want to run on the GPU if one is available.

In PyTorch, you must explicitly move everything onto the device even if CUDA is enabled.

The only downside with TensorFlow device management is that by default it consumes all the memory on all available GPUs even if only one is being used.
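If that behaviour is a problem, a TF 1.x-style workaround is to opt into incremental allocation (a small sketch, assuming a standard session setup):

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory as needed instead of grabbing it all upfront
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    pass  # build and run the graph as usual
```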

With PyTorch, I’ve found that the code needs more frequent checks for CUDA availability and more explicit device management. This is especially the case when writing code that should be able to run on both the CPU and GPU.
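The usual PyTorch pattern for CPU/GPU-agnostic code looks roughly like this (the model is a stand-in):

```python
import torch
import torch.nn as nn

# Check for CUDA once, then move the model and every batch explicitly
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)   # stand-in for a real model
batch = torch.randn(8, 4).to(device)
output = model(batch)
```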

An easy win for TensorFlow here!

Custom Extensions:

Moving on, last but not least, I have picked out custom extensions for you guys.

Building or binding custom extensions written in C, C++ or CUDA is doable with both frameworks.

TensorFlow requires more boilerplate code, though it is arguably cleaner for supporting multiple types and devices.

In PyTorch, however, you simply write an interface and a corresponding implementation for each of the CPU and GPU versions.

Compiling the extension is also straightforward with both frameworks and doesn't require downloading any headers or source code outside of what's included with the pip installation.
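On the PyTorch side, the JIT compilation route looks roughly like this; my_op.cpp is a hypothetical C++ source file implementing the operator:

```python
from torch.utils.cpp_extension import load

# Compile and load the extension on the fly; the required headers ship with the pip package
my_op = load(name="my_op", sources=["my_op.cpp"], verbose=True)
# the functions defined in my_op.cpp would then be callable from Python via my_op
```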

And PyTorch has the upper hand for this!

Conclusion:

Well, to be fair, I would say PyTorch and TensorFlow are similar and I would leave it at a tie.

But in my personal opinion, I would prefer PyTorch over TensorFlow (in a ratio of about 65% to 35%).

However, this doesn't mean PyTorch is better!

At the end of the day, it comes down to what you would like to code with and what your organization requires!

I use PyTorch at home but TensorFlow at work!

I personally believe that both TensorFlow and PyTorch will revolutionize all aspects of Deep Learning, ranging from virtual assistants all the way to driving you around town. The change will be easy and subtle, and it will have a big impact on Deep Learning and all its users!

I hope you have enjoyed my comparison article. If you wish to check out more articles on the market’s most trending technologies like Python, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.

Do look out for other articles in this series which will explain the various other aspects of Deep Learning.

1. TensorFlow Tutorial

2. PyTorch Tutorial

3. Perceptron learning Algorithm

4. Neural Network Tutorial

5. What is Backpropagation?

6. Convolutional Neural Networks

7. Capsule Neural Networks

8. Recurrent Neural Networks

9. Autoencoders Tutorial

10. Restricted Boltzmann Machine Tutorial

11. Object Detection in TensorFlow

12. Deep Learning With Python

13. Artificial Intelligence Tutorial

14. TensorFlow Image Classification

15. Artificial Intelligence Applications

16. How to Become an Artificial Intelligence Engineer?

17. Q Learning

18. Apriori Algorithm

19. Markov Chains With Python

20. Artificial Intelligence Algorithms

21. Best Laptops for Machine Learning

22. Top 12 Artificial Intelligence Tools

23. Artificial Intelligence (AI) Interview Questions

24. Theano vs TensorFlow

25. What Is A Neural Network?

26. Pattern Recognition

27. Alpha Beta Pruning in Artificial Intelligence

Originally published at www.edureka.co on October 18, 2018.

Sayantini Deb is a Data Science enthusiast and a passionate blogger on technologies like Artificial Intelligence, Deep Learning and TensorFlow.