Computational Photography Will Revolutionize Digital Imaging

Vincent T.
High-Definition Pro
7 min read · Oct 16, 2018

There is a paradigm shift taking place in the imaging industry with the introduction of computational photography. The field covers digital image capture and processing techniques that use digital computation and algorithms in place of standard optical processes. It is quite unconventional and may seem at odds with traditional photography, but I see it as complementary. Computational photography introduces new imaging features that take visualization beyond what conventional photography offers. Its applications are in 3D imaging and AR (Augmented Reality), VR (Virtual Reality) and MR (Mixed Reality) environments, which are bringing new ways of storytelling for filmmakers and allowing creatives to explore new ways of shooting video, developing video games and creating digital content. We are already seeing its applications on consumer devices, most noticeably smartphones.

Some of the most popular implementations of computational photography are in smartphones. (Photo source: Google)

The phrase "computational photography" was first used by Canadian inventor and engineer Steve Mann back in 1995 to describe his work in computer imaging. One of the pioneers credited with giving the field its broader meaning is Marc Levoy, an electrical engineering and computer science professor at Stanford University. Levoy is also a distinguished engineer at Google, where he works on the development of imaging systems like Google's HDR+. According to Levoy, computational photography encompasses "computational imaging techniques that enhance or extend the capabilities of digital photography [in which the] output is an ordinary photograph, but one that could not have been taken by a traditional camera."

In computational photography, it is not just the camera sensor and optics working together; capturing data and running digital computing processes are also involved.

The camera obscura method has been the traditional way of capturing images. By focusing an optical device on a well-lit subject, an image is formed on a medium like film, and the creation of the image ends in the darkroom once the film has been developed. Then cameras became digital and captured images to storage devices. A digital image file contains more information, like metadata, that can be used in post-processing, and image creation ends after the digital image has been retouched or processed from its RAW file. In the digital age of photography, cameras are not just capturing an image but also data. Because of that, the image creation process has so many possibilities that there is no end to a final "perfect" image. Computational photography makes the best use of the data captured from exposures to create the best images, and in combination with AI techniques it can produce even better results that are open to many possibilities.
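
To make that concrete, here is a minimal Python sketch, using the Pillow library, that reads the EXIF metadata a digital camera stores alongside the pixels. The file name photo.jpg is just a placeholder.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Open a camera JPEG; "photo.jpg" is a placeholder path.
image = Image.open("photo.jpg")

# Read the EXIF block the camera wrote at capture time
# (exposure time, ISO, aperture, timestamp, and so on).
exif = image.getexif()

# Map numeric EXIF tag IDs to readable names and print each entry.
for tag_id, value in exif.items():
    tag_name = TAGS.get(tag_id, tag_id)
    print(f"{tag_name}: {value}")
```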

Amazing shots with AR features, which the camera can apply on its own, are changing engagement on social media and entertainment platforms. (Photo source: Google)

Traditional photographers may not be on board with something so far from tradition. A typical photographer creates images with a camera through its lens, using artistic composition that is usually done correctly the first time the shot is taken. Computational photography is basically the same, but it makes use of computing technology, which may involve AI machine learning techniques, advanced HDR and enhanced image processing, to make the image more stunning and visually appealing. In many cases it makes the photographer's skill almost irrelevant, since the process is intelligent enough to make the image look great even if it wasn't shot by a professional. Using a Pixel or iPhone camera, an ordinary user can create stunning portraits with the help of computational photography.

One technique in computational photography that I am quite familiar with, and have used in the past, is HDR (High Dynamic Range) photography. It requires taking multiple exposures of the same scene and compositing them together in software to bring the details to life. There is a technique for shooting and merging HDR by hand, while some DSLR cameras have HDR-ready features built in; done manually, the whole process can take twenty minutes to an hour. With newer systems like those in smartphones, however, it is all done in-camera without the user ever opening image editing software. The smartphone software is intelligent enough to know how to create the best HDR image from the exposures.

Example of how HDR can turn an ordinary shot into something even better. (Photo source: Google)
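
For readers who want to see the merging step in code, here is a minimal sketch using OpenCV's Mertens exposure fusion, which blends a bracketed set of exposures into one image without needing the exposure times. The three file names are placeholders for your own bracket.

```python
import cv2
import numpy as np

# Load a bracketed set: underexposed, normal, overexposed.
# The file names are placeholders for your own exposure bracket.
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation and
# well-exposedness, then blends the stack -- no exposure times needed.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float32 result, roughly in [0, 1]

# Scale back to 8-bit and save the merged result.
cv2.imwrite("hdr_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```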

Another application is taking selfies using portrait lighting modes. There is plenty of AI involved here, which can dramatically change the way a photo looks by taking multiple exposures and gathering the best details from each shot to make one final image. It does this by stacking multiple images with short exposure times (underexposed) and taking the average value of the pixels to create the final image. The short exposure times mean there are no blown highlights, which allows the software to lift the shadows and show more detail without clipping the bright areas. The same ideas apply to photo stitching, demosaicing, object removal, hole filling and content-aware fill, among other features made possible by computer-vision algorithms.
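
A bare-bones version of that stack-and-average step looks like this in Python with NumPy and OpenCV. It assumes the burst frames are already aligned (real pipelines align them first), and the file names and the 2.0 gain are illustrative placeholders.

```python
import cv2
import numpy as np

# A burst of short, underexposed frames of the same scene.
# Assumes the frames are already aligned with each other.
frame_files = ["burst_0.jpg", "burst_1.jpg", "burst_2.jpg", "burst_3.jpg"]
frames = [cv2.imread(f).astype(np.float32) for f in frame_files]

# Averaging N frames keeps the signal but cuts random sensor noise
# by roughly sqrt(N), leaving headroom to brighten the shadows.
stacked = np.mean(frames, axis=0)

# Brighten the averaged result to compensate for the short exposures.
# The 2.0 gain is an arbitrary illustration value.
result = np.clip(stacked * 2.0, 0, 255).astype(np.uint8)
cv2.imwrite("stacked.jpg", result)
```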

There are also actual cameras that do computational photography with light-field capabilities, allowing focus points and depth of field to be altered after an image has been captured. A camera with light-field capabilities captures 3D scene information, which can then be used to produce 3D images, enhanced DOF and selective de-focusing. This requires specialized equipment and is more suitable for commercial visual effects. A sub-field of computational photography called epsilon photography achieves similar effects without the specialized equipment; instead, it relies on software that analyzes multiple images, and on photo-stacking techniques, to create the best image. The desired result is a composite image of higher quality, with richer colors, a wider field of view, a more accurate depth map, less noise and higher resolution.

A Lytro light-field camera used in computational photography. (Photo source: Lytro)
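
As a rough sketch of the photo-stacking idea behind epsilon photography, the snippet below performs a simplified focus stack: for every pixel it keeps the value from whichever frame is sharpest there, scoring sharpness with the local Laplacian response. The file names are placeholders, the frames are assumed to be pre-aligned, and real implementations add alignment and smoothing of the selection map.

```python
import cv2
import numpy as np

# Frames of the same scene shot at different focus distances.
# Assumes the frames are already aligned with each other.
files = ["focus_near.jpg", "focus_mid.jpg", "focus_far.jpg"]
frames = [cv2.imread(f) for f in files]

# Score per-pixel sharpness with the absolute Laplacian response of
# each frame's grayscale version: in-focus regions score higher.
def sharpness(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return np.abs(cv2.Laplacian(gray, cv2.CV_64F))

scores = np.stack([sharpness(f) for f in frames])   # (N, H, W)
best = np.argmax(scores, axis=0)                    # sharpest frame per pixel

# Assemble the composite by picking each pixel from its sharpest frame.
stack = np.stack(frames)                            # (N, H, W, 3)
h, w = best.shape
composite = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("focus_stacked.jpg", composite)
```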

There is a reason why applications of computational photography are appearing first on consumer devices like smartphones. Smartphone vendors, though not traditional camera companies, have excellent software that makes use of advanced AI techniques and capabilities including image stabilization and digital image processing. Their smartphones also have more powerful processors than a conventional camera. In general, a DSLR does not require much computing power to create an image. Smartphone cameras are different: what they capture is mostly data, which is then processed into an image inside the camera using software techniques, and that is what the powerful processors are there for. Consumer DSLR and mirrorless cameras are just beginning to incorporate these features, but it all started with smartphone cameras. In terms of image quality, though, DSLR and mirrorless cameras still have an edge because they have larger sensors.

Is computational photography on a head-on collision course with traditional photography?

I want to share my thoughts on this without sounding biased. I do like computational photography, since it enables new applications. Traditional photography is not threatened at all, because there will always be demand for photographers to shoot events, weddings, commercials and products. Computational photography is in no way about to replace that; instead, it is going to take photography as a medium to new heights. It can work hand in hand with photographers to create stunning imagery that benefits them even more. Just imagine being able to present customers with a new method of visualization, like 3D photos, that creates value. You're also not likely to see space agencies like NASA contract Ansel Adams-style photographers to shoot the Martian surface; they will use a computational camera to get the best detail for scientific and visual purposes. You won't get that detail from a film or conventional camera.

Computational photography is revolutionizing how we take photos. It won't replace traditional photography, but it can definitely enhance it to produce the best images. Those who adopt it into their workflow will add a new dimension to their imaging, one that can deliver more value. This won't remain a novelty; rather, it is fast becoming the norm, at least until the next technology comes along. Right now, image creators can find different and more creative ways of using it.

Suggested Reading:

What Photographers Need To Know About Computational Photography:
https://pdnpulse.pdnonline.com/2017/10/photographers-need-know-computational-photography.html

Smartphone cameras are the new point-and-shoot
https://medium.com/hd-pro/the-smartphone-camera-the-new-point-and-shoot-6fc8701c8bb4
