The number of PyTorch ecosystem projects that help machine learning researchers and developers to be more efficient has been growing rapidly for the past two years.
Torchbearer, a high-level abstraction launched in 2018, has garnered a niche following with great documentation and neat features that make AI training easier.
PyTorch Lightning is a more recent project, launched in mid-2019, which has gained over 104k downloads and more than 3,700 GitHub stars in the 7 months since its launch. Lightning has a community of over 84 contributors who ensure the latest features are implemented, tested, and well documented. Moreover, Lightning has strict contributor guidelines that ensure high-quality code and full backward compatibility.
The founders of Torchbearer and PyTorch Lightning have come to find that the perspectives gained from building two different frameworks are hugely beneficial to the end-user experience, and have decided to unite forces and continue their work on PyTorch Lightning.
The end goal of both teams is to build the best research framework for AI researchers using PyTorch.
The founders of torchbearer are joining the core group of contributors at PyTorch Lightning and will direct their efforts to PyTorch Lightning development. It is an incredible opportunity for two projects to unite. Both founders will bring their wealth of PyTorch experience to help lead the development of PyTorch Lightning. This merge gets us one step closer to the vision of building the go-to deep learning framework for researchers and production teams who love PyTorch!
We are happy to welcome all torchbearer users to the Lightning community! The torchbearer repository will no longer be active after the end of February. Any issues or feature requests can be directed to the PyTorch Lightning GitHub issues.
Why do we need a PyTorch abstraction?
Most researchers or production teams using PyTorch are looking to publish papers, experiment with new approaches, or put models into production. However, the complexities of training deep learning models leave a lot of "magic" up to the researcher, such as how to do early stopping, gradient checkpointing, train with half precision, or train a single model on hundreds of GPUs.
PyTorch Lightning decouples the engineering (the blue) from the science (the red). It automates and tests the engineering so that researchers can focus on the science. This dramatically increases research speed and makes code standardized, maintainable, and scalable.
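To make the decoupling idea concrete, here is a minimal toy sketch in plain Python. It is not Lightning's actual API: the `Trainer` and `MyModel` classes and the quadratic "loss" are hypothetical stand-ins, but the split mirrors the idea above, where the framework owns the engineering (the epoch loop, loss averaging, early stopping) and the researcher writes only the science (how a batch becomes a loss).

```python
class Trainer:
    """Owns the 'engineering': the epoch loop, loss averaging, early stopping."""

    def __init__(self, max_epochs=5, patience=2):
        self.max_epochs = max_epochs
        self.patience = patience  # epochs without improvement before stopping

    def fit(self, model, data):
        best, bad_epochs = float("inf"), 0
        for epoch in range(self.max_epochs):
            # One pass over the data; the model decides what a "step" means.
            losses = [model.training_step(batch) for batch in data]
            avg = sum(losses) / len(losses)
            if avg < best:
                best, bad_epochs = avg, 0
            else:
                bad_epochs += 1
                if bad_epochs >= self.patience:  # early stopping
                    break
        return best


class MyModel:
    """Owns the 'science': how a batch turns into a loss."""

    def training_step(self, batch):
        # Stand-in for a forward pass + loss computation.
        return sum((x - 1.0) ** 2 for x in batch) / len(batch)


trainer = Trainer(max_epochs=5)
best_loss = trainer.fit(MyModel(), data=[[0.5, 1.5], [0.9, 1.1]])
```

Because the `Trainer` never looks inside `training_step`, the engineering concerns listed above (early stopping, half precision, multi-GPU) can be added, tested, and reused without touching the research code.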