Photography’s Future is Computational

Robert Rittmuller
Sep 18, 2018 · 6 min read

The world of photography has experienced major changes in the transition to digital, and now it’s about to do so again with the transition to computational photography. Brace yourself!

For me, photography has always been about much more than preserving memories. It is about creating something greater than ourselves, something that evokes strong emotions, something that drives excitement. I’ve always had a strong belief that it is not about what you use to create; it is the artistic nature of creation itself that brings the magic. With film cameras, photography was slow, measured, and sometimes tedious in its very nature. The digital revolution has brought about a creative explosion in the world of photography. Gone is the tedium, gone is the careful deliberation, and in their place comes a wonderful new world of rapid experimentation. Instagram, Snap, Facebook, and Twitter have all enabled us to show our photographic art to the world in nearly unlimited ways, all in near-real-time. Throughout this digital transformation, the fundamental basics of photography have steadfastly remained the same. We still bow to the aperture gods, we still covet fast glass, and sensor size still matters. But that’s all about to change thanks to something called machine learning.

…the next 10 years you will see a shift away from camera hardware being the focus to incredible software innovation…

The next generation of imaging technologies that will power tomorrow’s cameras won’t be driven by hardware; they will be driven almost entirely by software. Yes, software. I predict that over the next 10 years you will see a shift away from camera hardware being the focus toward incredible software innovation using machine learning that creates never-before-seen capabilities. We are already seeing this now with rapid innovation in the world of smartphones that is changing the entire photographic world right under our noses, with fancy terms such as Smart HDR (Apple iPhone), HDR+ (Google Pixel), and many others. For those who crave the past, there are tons of apps that allow you to re-create the look of old lenses and film styles, all done completely in software. As Apple just announced, you can even adjust high-quality depth of field after the photo has been taken. But this is just the beginning: we are only now starting to see hints of a photographic future, powered by machine learning, that will give us all the ability to create imagery beyond our wildest imaginations.

So let’s talk about some of these future innovations, some of which might be just around the corner and others that remain far off in the future.

What if every lens was perfect?

Imagine a world where every lens you attach to your camera is optically perfect. No distortions, no vignetting, no flare (unless you want it). Sound impossible? Believe it or not, we are already on the path, with technologies that go far beyond the in-camera lens corrections most photographers are familiar with today. Major players are already exploring ways to make these (and other) corrections even before you actually take the shot. But the real innovation will come when machine learning allows hardware vendors to create models that can correct virtually any flaw in even the cheapest lens. And the best part is you won’t even notice it happening. It will all run behind the scenes, just as it does on today’s smartphones from Apple and Google: you worry about framing your shot while the software takes care of the details.
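To make the idea concrete, here is a toy NumPy sketch of the simplest lens flaw mentioned above, vignetting. The radial falloff model and all the numbers are illustrative assumptions, not any vendor's actual pipeline: the lens darkens the frame toward the corners, and a known per-lens correction profile inverts that falloff.

```python
import numpy as np

H, W = 100, 100
yy, xx = np.mgrid[0:H, 0:W]
# Normalized distance from the image center: 0 at center, 1 at the corners.
r = np.hypot(yy - H / 2, xx - W / 2) / np.hypot(H / 2, W / 2)

# cos^4-style falloff, a common first-order vignetting model (gain is 1.0 at
# the center and about 0.25 at the corners for this toy profile).
falloff = np.cos(np.clip(r, 0, 1) * np.pi / 4) ** 4

flat_scene = np.full((H, W), 0.8)   # an evenly lit test target
vignetted = flat_scene * falloff    # what the imperfect "lens" records

# Correction: divide by the calibrated (or learned) per-lens gain profile.
corrected = vignetted / falloff
```

A real system would learn or calibrate `falloff` per lens (and correct distortion and flare too), but the invisible divide-by-profile step is the same in spirit.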

What if every camera knew what it was looking at?

We’ve had several forms of scene detection in cameras for a long time, though recently the technology has been getting far less attention. The ability of a camera to accurately detect what is in the frame can be a powerful tool for both amateur and pro photographers, but its usefulness to date has been limited by poor accuracy. I think that’s about to change with some of the more recent advances in machine-learning-powered scene detection. In the past, most of the technologies used were simplistic and prone to getting it wrong. I once had a camera whose claimed automatic scene detection was no more than an ambient light detector: more light must mean you are outside, less light meant indoors, and so on. Through the power of machine learning, it’s now much easier to train a neural network to perform scene classification. Fast and accurate scene classification opens the door to all kinds of additional scene optimizations and enhancements. Low-light portrait? No problem: accurate detection enables the camera to make the normal optical adjustments for the lighting conditions, while the more specific classification allows for greater post-capture adjustment of the raw data directly from the camera sensor. A process like this could apply different optimizations to a child blowing out birthday candles than to someone holding a candle in a dimly lit church.
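The gap between the old heuristic and a learned classifier can be sketched in a few lines. Everything here is a toy: the class names, the synthetic "scenes," and the hand-made features are all hypothetical stand-ins, and real scene classifiers are deep CNNs trained on millions of labeled photos, not nearest-centroid lookups.

```python
import numpy as np

rng = np.random.default_rng(0)

# The old heuristic the camera above used: one threshold on brightness.
def naive_scene(img):
    return "outdoor" if img.mean() > 0.5 else "indoor"

# A "learned" alternative: simple image statistics plus per-class centroids.
def features(img):
    return np.array([img.mean(),                                # brightness
                     img.std(),                                 # contrast
                     img[..., 2].mean() - img[..., 0].mean()])  # blue vs. red cast

def make_scene(kind):
    """Synthetic 32x32 RGB images standing in for real training photos."""
    img = rng.uniform(0, 1, (32, 32, 3))
    if kind == "outdoor":               # bright, blue-heavy
        img = img * 0.3 + 0.7
        img[..., 2] += 0.2
    elif kind == "indoor":              # mid brightness, warm cast
        img = img * 0.4 + 0.3
        img[..., 0] += 0.2
    else:                               # "low_light_portrait": dark, flat
        img = img * 0.1 + 0.05
    return np.clip(img, 0, 1)

# "Training": average the features of labeled examples into one centroid per class.
classes = ["outdoor", "indoor", "low_light_portrait"]
centroids = {c: np.mean([features(make_scene(c)) for _ in range(50)], axis=0)
             for c in classes}

def classify(img):
    f = features(img)
    return min(classes, key=lambda c: np.linalg.norm(f - centroids[c]))
```

Note that `naive_scene` cannot even represent a low-light portrait as its own category, while the learned centroids separate all three, which is exactly the kind of distinction the candle examples above depend on.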

What if every photo was free from sensor noise?

Currently, one of the defining features of large professional camera systems is sensors with extremely low image grain/noise even at high ISO sensitivities. These cameras cost significantly more than cameras with smaller sensors, and the reason is clear, literally. Because larger sensors capture more light, they can create images free of the grainy noise so common in small-sensor cameras such as those found in smartphones. But as anyone who has a newer Google or Apple smartphone can tell you, these small-sensor cameras are getting better very rapidly. The level of sensor noise generated by today’s smartphones is nowhere near what users were seeing even two hardware generations ago. That said, things are about to change in a big way. Recent advances in machine learning noise reduction techniques are likely to be introduced into the camera industry, first in post-processing tools, and then ultimately integrated directly into various camera systems. These new systems will have the ability to virtually eliminate all sensor noise. The software techniques currently used to reduce noise in captured images are intended to be applied across a wide variety of systems and sensors. A good example is the noise reduction in the Photoshop Express app from Adobe: it works regardless of what hardware was used to take the photo. The downside of this approach is that it does not always produce perfect results; in fact, the price you pay for using it is often a major loss of detail in your photo. In contrast, machine learning techniques can be applied in such a way as to learn the specific pattern of noise created by a single type of sensor. This approach will make it possible to train a neural network to perform built-in noise reduction that essentially removes all the noise without losing any detail.
Ultimately, this would let photographers use higher ISO sensitivities without sacrificing overall image quality to sensor noise. In 10 years it’s entirely possible that ISO will not even be relevant for the majority of photographers and might never be adjusted away from the “auto” setting.
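Real ML denoisers train a neural network on pairs of clean and noisy captures; as a minimal stand-in for the sensor-specific idea, this NumPy sketch (with a fully simulated sensor and made-up noise levels) learns one particular sensor's fixed-pattern noise by averaging dark frames and subtracts it, which is why it can outperform a one-size-fits-all filter on that sensor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor model: every capture adds this one sensor's unique
# fixed-pattern noise plus random shot noise.
fixed_pattern = rng.normal(0, 0.05, (64, 64))

def capture(scene):
    return scene + fixed_pattern + rng.normal(0, 0.01, scene.shape)

# "Learn" the sensor-specific component by averaging many dark frames
# (captures of a black scene): the random noise averages away, the
# pattern that belongs to this exact sensor remains.
dark = np.zeros((64, 64))
learned_pattern = np.mean([capture(dark) for _ in range(200)], axis=0)

def denoise(photo):
    return photo - learned_pattern

scene = np.tile(np.linspace(0, 1, 64), (64, 1))  # simple gradient test scene
noisy = capture(scene)
clean = denoise(noisy)
```

A generic filter would have to smooth away detail to hide `fixed_pattern`; the sensor-specific model removes it outright, leaving only the (much smaller) shot noise — the same asymmetry argued for above, just in miniature.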

What if your camera could help you take the perfect photo?

So consider this: a world where your camera knows more about photography than you could ever hope to understand, where it can adjust dynamically to whatever the lighting conditions may be, and where it can fix almost any image quality problem before you even knew it existed. Is this the camera of the future? Perhaps, but I think in ten years the world will see a camera that can even tell you whether the photo you are about to take is any good. Seriously. Why not? Rumors are already out there that Apple is aggressively pursuing this kind of capability. Personally, I think the era of the “smart” camera is closer than we think!

…the transformation coming with computational photography is going to directly empower amateur and aspiring photographers…

The era of the computational camera has begun!

The transformation of the photography industry to digital was amazing for what it gave professional photographers. But I think the transformation coming with computational photography is going to directly empower amateur and aspiring photographers far more. The ability to concentrate on the artistic aspects of photography rather than the technical details will open doors for many, while the inclusion of these technologies in cheaper camera systems will foster widespread adoption in the form of smartphones and other small camera systems. Professionals can look forward to insanely awesome post-processing tools in the near future, while a major camera system with the features I have described will likely come to fruition over the span of several years. Either way, we can all look forward to an amazing photographic future!

Data Driven Investor

from confusion to clarity, not insanity