Rendering Volumes and Implicit Shapes in PyTorch3D
Authors: David Novotny, Roman Shapovalov, Nikhila Ravi, Shubham Goel, Georgia Gkioxari, Justin Johnson, Jeremy Reizenstein, Patrick Labatut, Wan-Yen Lo
Intro
PyTorch3D is a highly modular and optimized library with unique capabilities designed to facilitate 3D deep learning with PyTorch. PyTorch3D provides a set of frequently used 3D operators and loss functions for 3D data that are fast and differentiable, as well as a modular differentiable rendering API. Researchers can use these features in deep learning systems right away. (For a primer on PyTorch3D and differentiable rendering, have a look at our tutorial from the PyTorch hackathon.)
Implicit Shape Rendering
Today we are releasing a suite of new features to support implicit shape rendering. In the past year there’s been an explosion in the number of papers and projects that employ neural rendering. This exciting research direction focuses on generating realistic renderings of 3D scenes from novel viewpoints based on input scene images. The core idea — reconstructing the implicit representation of surfaces in a 3D scene with a neural network coupled with differentiable rendering — enables learning the geometry of the 3D scene only from 2D views.
Efficient research in this direction requires several key engineering components, including an abstraction for volume data and a differentiable implicit shape renderer. To enable flexible experimentation in this nascent research area, we provide a modular and extensible API. We have identified the key reusable components and provide well-documented and tested implementations of them.
The new features we have added to PyTorch3D include:
- a Volumes data structure to support batching of 3D volumes and conversion between coordinate frames
- multiple ray sampling implementations (GridRaysampler, MonteCarloRaysampler, NDCGridRaysampler)
- several raymarcher implementations (AbsorptionOnlyRaymarcher, EmissionAbsorptionRaymarcher)
- ImplicitRenderer and VolumeRenderer APIs which compose a Raysampler and a Raymarcher
- several utility functions such as differentiable conversion of point clouds to volumes (see the short code sketches after this list)
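To make these pieces concrete, here is a minimal sketch of the Volumes data structure and the differentiable point-cloud-to-volume conversion. The tensor shapes, voxel size, and random inputs are illustrative choices, not values prescribed by the library.

```python
import torch
from pytorch3d.ops import add_pointclouds_to_volumes
from pytorch3d.structures import Pointclouds, Volumes

# A batch of 2 volumes, each a 64^3 voxel grid carrying a 1-channel density
# and a 3-channel (RGB) feature per voxel. voxel_size defines the world-space
# extent of a voxel, so the Volumes object can convert between its local
# coordinate frame and world coordinates.
batch_size = 2
densities = torch.zeros(batch_size, 1, 64, 64, 64)
features = torch.zeros(batch_size, 3, 64, 64, 64)
volumes = Volumes(densities=densities, features=features, voxel_size=0.05)

# A batch of point clouds with per-point RGB features ...
points = 0.5 * torch.randn(batch_size, 1000, 3)
colors = torch.rand(batch_size, 1000, 3)
pointclouds = Pointclouds(points=points, features=colors)

# ... splatted into the volumes with trilinear weights. The conversion is
# differentiable with respect to both the point locations and their features.
volumes = add_pointclouds_to_volumes(
    pointclouds=pointclouds,
    initial_volumes=volumes,
    mode="trilinear",
)
```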
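A raysampler and a raymarcher can then be composed into a renderer. The sketch below pairs NDCGridRaysampler with EmissionAbsorptionRaymarcher inside a VolumeRenderer and renders a batch of volumes from two cameras; the camera poses, image size, and sampling depths are again only illustrative.

```python
import torch
from pytorch3d.renderer import (
    EmissionAbsorptionRaymarcher,
    FoVPerspectiveCameras,
    NDCGridRaysampler,
    VolumeRenderer,
    look_at_view_transform,
)
from pytorch3d.structures import Volumes

# Two cameras looking at the origin from a distance of 3 units.
R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=torch.tensor([0.0, 90.0]))
cameras = FoVPerspectiveCameras(R=R, T=T)

# One ray per output pixel (in NDC space), with 64 points sampled per ray
# between min_depth and max_depth.
raysampler = NDCGridRaysampler(
    image_width=128,
    image_height=128,
    n_pts_per_ray=64,
    min_depth=0.1,
    max_depth=5.0,
)

# Integrates the sampled densities and features along each ray using the
# emission-absorption model.
raymarcher = EmissionAbsorptionRaymarcher()

# The renderer composes the two components.
renderer = VolumeRenderer(raysampler=raysampler, raymarcher=raymarcher)

# A toy batch of volumes to render (random densities and colors).
densities = 0.1 * torch.rand(2, 1, 64, 64, 64)
features = torch.rand(2, 3, 64, 64, 64)
volumes = Volumes(densities=densities, features=features, voxel_size=0.05)

rendered, ray_bundle = renderer(cameras=cameras, volumes=volumes)
images, silhouettes = rendered.split([3, 1], dim=-1)  # (2, 128, 128, 3) / (2, 128, 128, 1)
```

ImplicitRenderer follows the same pattern, except that instead of a Volumes object it takes an arbitrary volumetric function (for example, a neural network as in NeRF) that returns densities and features for the sampled ray points.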
Getting Started
To showcase the new components, we are including a modular, well-documented reimplementation of NeRF built on top of them. Our reimplementation runs faster than the official release while matching the quality of the output images. Below is a sample of a scene with many shapes and complex reflections generated with the PyTorch3D-based implementation of NeRF:
This code can be used as a starting point for any research project in novel view synthesis. It will live in the new `projects` folder in the PyTorch3D repo where we will continue to add more examples of projects and paper implementations using PyTorch3D features.
To learn more about these techniques, see our new Colab tutorials that will take you step by step through examples of fitting a textured volume and a simple NeRF model.
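As a rough idea of what the first tutorial covers, the sketch below fits per-voxel colors and densities to a set of target images by rendering the volume differentiably and minimizing a mean-squared error. It is not the tutorial code itself: the volume resolution, optimizer settings, and the randomly generated placeholder targets are illustrative stand-ins.

```python
import torch
from pytorch3d.renderer import (
    EmissionAbsorptionRaymarcher,
    FoVPerspectiveCameras,
    NDCGridRaysampler,
    VolumeRenderer,
    look_at_view_transform,
)
from pytorch3d.structures import Volumes

# Placeholder targets: in practice these would be images of a real scene
# captured from known viewpoints; random tensors stand in here so the
# sketch runs end to end.
n_views, image_size = 4, 64
R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=torch.linspace(0, 270, n_views))
target_cameras = FoVPerspectiveCameras(R=R, T=T)
target_images = torch.rand(n_views, image_size, image_size, 3)

renderer = VolumeRenderer(
    raysampler=NDCGridRaysampler(
        image_width=image_size, image_height=image_size,
        n_pts_per_ray=96, min_depth=0.1, max_depth=5.0,
    ),
    raymarcher=EmissionAbsorptionRaymarcher(),
)

# Optimize raw per-voxel log-densities and log-colors; sigmoids keep the
# actual densities and colors in [0, 1].
log_densities = torch.full((1, 1, 64, 64, 64), -4.0, requires_grad=True)
log_colors = torch.zeros(1, 3, 64, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([log_densities, log_colors], lr=0.1)

for _ in range(300):  # a few hundred iterations suffice for a toy volume
    optimizer.zero_grad()
    volumes = Volumes(
        densities=torch.sigmoid(log_densities).expand(n_views, -1, -1, -1, -1),
        features=torch.sigmoid(log_colors).expand(n_views, -1, -1, -1, -1),
        voxel_size=0.05,
    )
    rendered, _ = renderer(cameras=target_cameras, volumes=volumes)
    loss = ((rendered[..., :3] - target_images) ** 2).mean()
    loss.backward()
    optimizer.step()
```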
Conclusion
The goal of PyTorch3D is to drive progress at the intersection of deep learning and 3D by equipping researchers and engineers with a toolkit to implement cutting-edge research with complex 3D data. We are committed to continuously improving and expanding the operators in PyTorch3D and welcome contributions from the community.
For a more detailed explanation of the new PyTorch3D tools for rendering implicit shapes and volumetric voxel grids, check out the video tutorial below: