PyTorch 2.0 release explained

Mazi Boustani
5 min read · Dec 9, 2022

Add just one line of code, and PyTorch 2.0 gives a speedup of between 1.5x and 2x when training models!


Over the last few years, the PyTorch team has innovated and iterated from PyTorch 1.0 to the most recent 1.13 and moved to the newly formed PyTorch Foundation, part of the Linux Foundation.

This new version helps accelerate ML training and development while maintaining backward compatibility: change one line of code and notice a faster response.
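That one line is torch.compile. Below is a minimal sketch, assuming PyTorch 2.0 is installed; the model here is just a placeholder, and any existing nn.Module works the same way:

```python
import torch
import torch.nn as nn

# Placeholder model; any existing nn.Module works the same way.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# The one-line change: wrap the model with torch.compile (new in 2.0).
compiled_model = torch.compile(model)

# The rest of the training or inference code stays unchanged.
x = torch.randn(16, 64)
out = compiled_model(x)
```

The compiled model is a drop-in replacement: the first call triggers compilation, and subsequent calls run the optimized code.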

“We expect that with PyTorch 2, people will change the way they use PyTorch day-to-day”

“Data scientists will be able to do with PyTorch 2.x the same things that they did with 1.x, but they can do them faster and at a larger scale”

— Soumith Chintala

The goals for PyTorch 2.0 were the following:

  • How to get 30%+ training speedups and lower memory usage without any changes to code or workflow
  • How to make PyTorch's backend easier to write by decomposing its 2000+ operators down to a set of around 250 primitive operators (see the backend sketch after this list)
  • How to get state-of-the-art distributed capabilities
  • How to substantially…
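To illustrate the easier-to-write-backend point: in 2.0, a backend can be a plain Python function that receives the captured torch.fx graph plus example inputs and returns a callable. The sketch below uses a hypothetical backend name, inspect_backend, which just prints the graph and runs it unchanged; a real backend would hand the graph to a compiler instead.

```python
import torch
from typing import List

# Hypothetical toy backend: print the captured FX graph, then run it
# unchanged. A real backend would compile the graph here.
def inspect_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    gm.graph.print_tabular()  # show the operators captured in the graph
    return gm.forward         # return a callable that executes the graph

def fn(x):
    return torch.relu(x).sum()

compiled = torch.compile(fn, backend=inspect_backend)
compiled(torch.randn(8))
```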
