PyTorch Lightning 0.7.1 Release and Venture Funding

William Falcon
Mar 23, 2020 · 8 min read

The 0.7.1 release signals a new level of framework maturity. With major API changes behind us, this release paves the way to the major 1.0 milestone we aim to reach this year.

But before getting into all the juicy new features we have awesome news to share!

Venture Funding

PyTorch Lightning GitHub Repo

You may have noticed that Lightning is no longer in the “WilliamFalcon” GitHub repo. The scale at which Lightning is now being adopted and used across academia and industry means I can no longer treat this as a “side” project (which it hasn’t really been).

After releasing Lightning in March of 2019 and making it public in July, it quickly became obvious that a single person couldn’t support the appetite for new features from the hyper-engaged Lightning community. As a first step, I formed a group of core contributors who are now leading a lot of initiatives across the framework. This core group was quickly augmented by an extended community of over 113 contributors (mostly PhDs), and it began to feel like my vision for Lightning to become a foundational part of everyone’s deep learning research code was becoming a reality!

Along with this burgeoning community has come rapidly growing interest from users in corporate and big academic lab settings, making it clear that we need to build a team working on new features and functionality full-time. As such, I’ve secured funding from fantastic venture capitalists to help take Lightning to the next level!

I’m excited about this funding because it allows Lightning to stay open-source while building an amazing full-time team to move even faster through new feature requests. The goal is to make Lightning the standard framework for academic labs, corporate labs and production teams around the world.

I’ll announce more details around the funding over the next few months, but in the meantime, I’m hiring for full-time research engineers.


If you’re just a generally awesome engineer or research scientist with deep learning experience, also reach out! We are always interested in learning more about what you are up to!

Thank you!

As we step into this new stage in the Lightning journey, I want to take the time to thank my research advisors Kyunghyun Cho and Yann LeCun for being super supportive and allowing me to focus a significant portion of my time on developing and improving Lightning.

I’m also thankful for my labmates for being some of the first users to provide feedback and help make some of the critical first API changes that enabled Lightning to be super powerful for researchers.

Special shout-out to FAIR colleagues: Stephen Roller, who was instrumental in bringing the ddp2 mode to Lightning, and Margaret Li, who was an awesome pair programmer for many late nights at Facebook NY (check out their project, ParlAI). Also Tullie (a core Lightning contributor) and Aaron from the fastMRI team, which was one of the first teams at FAIR to adopt Lightning.

I also want to thank the amazing PyTorch team, Joseph Spisak, Woo Kim and Soumith Chintala for coaching and helping me flesh out the vision for Lightning.

Finally, I want to thank the Lightning community for continuing to add awesome features, especially the core contributors Ethan, Jeff, Jeremy, Jirka, Tullie, Matthew, and Nic who have led amazing initiatives across Lightning! For anyone in the community who is looking to transition to a full-time role, please reach out to me directly :)

What about open-source?

This is great news for the future of Lightning as an open-source project. The contributors and I have been maintaining the package on a part-time basis, but it has quickly become hard to keep up with the number of community requests without a full-time team. The funding will give us the ability to hire additional engineers and move through features faster — all while always remaining open-source.

We are incredibly excited to build out this team, pick up the pace of development, and make Lightning into all that we know it can be.

We’re also lucky to have you…

Contributor Types

As we expand the team, we want to highlight the different types of contributor roles for Lightning.

Community contributors: You submit PRs and generally help us find bugs / request new features.

Core contributors: This is the heart and soul of the project. This is the team that approves PRs and drives new features across the project. However, these members likely have full-time jobs or are Ph.D. students, and so can’t focus full-time on new features.

Full-time contributors: These members drive bigger features that require full-time work to implement, as they may span multiple weeks. In addition, they are the link between corporate users and Lightning.


Investors

We’re lucky to have partnered with awesome investors who align with our values: continuing to grow an awesome community around Lightning and developing best-in-class research tools for it.

Eventually, this funding will also enable us to provide the kind of support corporations require from an open-source project, which unfortunately can’t be done without a full-time team. These things include tech support, special versioning requirements, and so on: all things that companies require when adopting major frameworks like PyTorch Lightning.

Many open-source projects like Elasticsearch and GraphQL help companies in this way.

TPU Support

Perhaps one of the biggest features of this release is support for TPUs. You can now run the same Lightning code on CPU, GPU and TPU without changing your code.
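As a sketch of what this looks like (assuming the 0.7.x Trainer flags `gpus` and `num_tpu_cores`; `MyModel` is a placeholder for your own LightningModule):

```python
import pytorch_lightning as pl

# MyModel is a hypothetical LightningModule of your own
model = MyModel()

# The same code runs everywhere; only the Trainer flags change
trainer = pl.Trainer()                 # CPU
trainer = pl.Trainer(gpus=8)           # GPUs
trainer = pl.Trainer(num_tpu_cores=8)  # TPUs

trainer.fit(model)
```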

Check out this Colab demo to try it for yourself.

Thank you to the XLA team for helping make this possible, especially Davide Libenzi, who was very helpful in integrating XLA into Lightning.


Profiler

How many times have you wondered where bottlenecks may be lurking in your code? Maybe loading data is slow? Or is it the training step? Our core contributor Jeremy Jordan added an amazing profiler to the framework.

With Lightning you can now enable a single flag to find out! Simply:

  1. Set profiler=True in the Trainer.
  2. Train.
  3. A profiling report will print at the end of training.

If you need even more advanced profiling, simply pass in a profiler object, which will generate a more detailed trace.
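In code, both options look roughly like this (a sketch assuming the 0.7.x API, where `AdvancedProfiler` lives in `pytorch_lightning.profiler`):

```python
import pytorch_lightning as pl
from pytorch_lightning.profiler import AdvancedProfiler

# Simple: one flag prints a per-event timing report after training
trainer = pl.Trainer(profiler=True)

# Advanced: pass a profiler object for a more detailed trace
trainer = pl.Trainer(profiler=AdvancedProfiler())
```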

Check out the documentation for more details.


Callbacks

Although Lightning itself is just a sequence of structured callbacks, the community wanted a better way to encapsulate non-essential logic needed for training.

A callback is a small program that performs arbitrary functionality at different points in training. It’s a nice way to encapsulate shared functionality in a single class without polluting the research code.

In Lightning, we factor deep learning code into three parts:

  1. Research code (Lightning Module)
  2. Engineering code (Trainer)
  3. Non-essential research code (Callbacks)

For example, you could have a callback that does a bunch of printing for you, one that does special logging, or one that sends information to a server.

The bottom line is that this is code you might need for training, but it is NOT part of the core research logic. This means you can keep your LightningModule class clean, without polluting it with non-essential code.
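Stripped of Lightning specifics, the idea is just a class whose hooks get invoked at fixed points in the training loop. Here is a minimal self-contained sketch of the pattern (the toy `Trainer` below is illustrative, not Lightning's actual implementation):

```python
class PrintCallback:
    """Non-essential logic lives here, not in the research code."""

    def on_train_start(self, trainer):
        print("Training is starting")

    def on_train_end(self, trainer):
        print(f"Training is done after {trainer.epochs} epochs")


class Trainer:
    """Toy stand-in: invokes each callback hook at fixed points."""

    def __init__(self, callbacks=None, epochs=3):
        self.callbacks = callbacks or []
        self.epochs = epochs

    def fit(self):
        for cb in self.callbacks:
            cb.on_train_start(self)
        for epoch in range(self.epochs):
            pass  # training steps would run here
        for cb in self.callbacks:
            cb.on_train_end(self)


Trainer(callbacks=[PrintCallback()]).fit()
```

In Lightning itself you would subclass the `Callback` base class from `pytorch_lightning.callbacks` and pass instances via `Trainer(callbacks=[...])`.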

Fit with DataLoaders

For some production cases, it’s useful NOT to define the dataloaders inside the LightningModule. Now Lightning supports passing those dataloaders directly to the .fit function.

Please note, however, that for research, users SHOULD still define the dataloaders inside the LightningModule for improved readability and reproducibility.
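As a sketch (assuming the 0.7.x `fit` keywords `train_dataloader` and `val_dataloaders`; `MyModel` and the datasets are placeholders):

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader

# MyModel is a hypothetical LightningModule with no dataloader methods
model = MyModel()

train_loader = DataLoader(train_dataset, batch_size=32)  # your datasets
val_loader = DataLoader(val_dataset, batch_size=32)

trainer = pl.Trainer()
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
```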

New Loggers

Lightning doesn’t just support TensorBoard. In this release we added support for more loggers, which give users different experiment tracking and visualization capabilities.

Here are the current loggers we support.

Weights and Biases

Multiple loggers at once

Now you can also use multiple loggers at once
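For example (a sketch assuming the 0.7.x `pytorch_lightning.loggers` API; the save directory and experiment name are placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger

# Pass a list of loggers and metrics go to all of them
logger1 = TensorBoardLogger("tb_logs", name="my_model")
logger2 = WandbLogger(name="my_model")

trainer = pl.Trainer(logger=[logger1, logger2])
```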

Thorough Documentation

The last release was a transition release for us in terms of documentation. Now, we’ve fully documented the codebase, with a ton of examples and mini-tutorials on how to use each feature.

You can find the documentation here.

Torchbearer Merger

During this release we also had the pleasure of welcoming the founders of Torchbearer (Ethan Harris and Matthew Painter) to the Lightning core team, where they’re bringing their experience developing DL frameworks to make Lightning even more powerful and flexible.

Welcome aboard!

How to get involved

The Lightning community has grown tremendously over the last few months. Feel free to submit PRs when a feature isn’t supported or you have an idea for something new!

The core team is very active in helping new contributors land their PRs. However, we’re very selective about what we allow into the framework, so make sure to discuss it beforehand and arrive at a solution that works for the team!

