A Very Basic Overview of Neural Radiance Fields (NeRF)
Can they one day replace photos?
The deep learning era began with the advances it brought to traditional 2D image-recognition tasks such as classification, detection, and instance segmentation. As the techniques matured, research in deep-learning-based computer vision shifted towards fundamental 3D problems, one of the most notable being synthesising new views of an object and reconstructing its 3D shape from images. Many approaches tackled this as a conventional machine learning problem, where the goal is to learn a system that "inflates" 3D geometry out of images after a finite set of training iterations. Recently, however, a completely new direction, namely Neural Radiance Fields (NeRF), has been introduced. This article dives into the basic concepts of the originally proposed NeRF, as well as several of its extensions in recent years.
Representing the Geometry Implicitly
The biggest difference between a NeRF model and traditional neural networks for 3D reconstruction is that NeRF is an instance-specific implicit representation of an object.
In simple words, given a set of images capturing the same object from multiple angles along with their corresponding poses, the network learns…
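To make the idea of an implicit representation concrete, here is a minimal sketch of its core ingredient: a plain MLP that maps a 3D point and a viewing direction to a colour and a volume density, so the scene's geometry lives entirely in the network's weights rather than in an explicit mesh or voxel grid. The sketch uses PyTorch; the class name, layer sizes, and the raw 6-dimensional input are illustrative assumptions, not the architecture from the original paper (which, among other things, encodes the inputs before feeding them to the network).

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Illustrative implicit scene representation: an MLP mapping a
    3D point and a viewing direction to an RGB colour and a volume
    density. Layer sizes here are arbitrary, not the paper's."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # Input: 3 position coordinates + 3 viewing-direction components.
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs: RGB (3) + density (1)
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        out = self.net(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colours constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # density must be non-negative
        return rgb, sigma

# Querying the "scene" is just a forward pass: there is no stored
# geometry to look up, only the function the network has learned.
points = torch.rand(1024, 3)   # sampled 3D positions
dirs = torch.rand(1024, 3)     # corresponding viewing directions
rgb, sigma = TinyNeRF()(points, dirs)
```

Because the weights of one such network describe one particular scene, the representation is instance-specific: rendering a different object means training a different network.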

