The cameras of tomorrow? The Lytro Illum & the Light L16

Dawn of a New Age

Roman M France
6 min read · Oct 23, 2015

It’s been a long time coming. The world has changed radically since the 1820s, when the first permanent photographs were made. Photography, however, hasn’t. It has been limited by our infatuation with recreating the world as we see it. Traditional camera systems mimic the behavior of the human eye, and for the most part, we have mastered that approximation. It is time to look beyond our own vision and imagine something different. It is time for the digital revolution we were promised.

Digital photography changed the industry more than it did the art form: it cut down both the learning curve and the money required to become a photographer, opening the craft up to people of more modest means. While it was costly in the beginning (Nikon’s first DSLR, the D1, retailed for $5,850 in 1999, which equates to about $8,355 in 2015 dollars), the price of getting into photography dropped quickly. With the concurrent rise of the Internet, specifically sites like YouTube & Lynda, learning the art became easier. People could grab a camera, watch a couple of tutorials, and rush out to begin experimenting, which is the best way to learn anyway. Art schools became 4-year-long networking meetups. Fast forward a few more years and everyone has a smartphone. The iPhone is the most popular camera in the world. Smartphones gave digital its second wind, and combined with services like Instagram, sharing images and discovering photographers is easier than ever.

Sean O’Kane of The Verge recently went to check out the L16

Computational photography will change the art form radically, with the potential to make early digital dissenters look like prophets. Many professional photographers decried the shift to digital, and in Hollywood many directors still praise film and would happily choose it over digital. In my eyes, digital strengthened the art form. It blurred lines here and there (many of the images you find on sites like 500px are more digital illustration than photography), but it shortened the path to making great images. The learning curve of film weeded out the pretenders, yet people who came to digital serious about the art were able to reach mastery faster, technically speaking at least. With computational photography, the focus of the art is being pushed from pre-capture to post-capture. Focal length, focus point, depth of field, and even lighting (in the case of programmable illumination) can all be changed after the shutter sounds.

Beyond The Eye

We’ve already seen examples of this with the Lytro Illum, a plenoptic (light-field) camera that uses a microlens array atop the sensor to record the direction of the light rays entering the lens, then uses algorithms to recreate the scene. The Light L16 is a little different, but comes with similar advantages. The L16 is a folded-optics, multi-aperture camera: 16 tiny camera modules working together to deliver image quality comparable to much larger systems. At any given moment, the L16 fires up to 10 of its 16 modules to capture an image. There are five 35mm modules, five 70mm modules, and six 150mm modules. This lets a photographer play with perspective in post and create the parallax result above. The L16 also allows focusing in post, and Light claims it can recreate the thin depth of field you’d get from an f/1.2 lens; we’ll see about that. Computational systems theoretically allow for improved low-light performance as well. In the case of the L16, multiple modules fire simultaneously at different exposures to deliver remarkable dynamic range. Again, this is theoretical, because the camera doesn’t properly exist yet.
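To make “focus in post” concrete, here is a minimal sketch of the classic synthetic-aperture refocusing trick used on light-field data: shift each sub-aperture view in proportion to its position on the aperture, then average. The array layout, function name, and slope parameter are my own illustration under those assumptions, not Lytro’s or Light’s actual pipeline.

```python
# Minimal sketch of "focus in post" via synthetic-aperture refocusing.
# Assumes the light field has been decoded into sub-aperture views: a
# (U, V, H, W) array of grayscale images indexed by position on the aperture.
# The names and the `slope` parameter are illustrative, not Lytro's API.
import numpy as np
from scipy.ndimage import shift

def refocus(views: np.ndarray, slope: float) -> np.ndarray:
    """Shift each sub-aperture view in proportion to its offset from the
    aperture centre, then average. Different slope values bring different
    depths into focus; slope = 0 keeps the focus chosen at capture time."""
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Points at the chosen depth line up across views after this shift,
            # so they stay sharp; everything else blurs like a wide aperture.
            out += shift(views[u, v], (slope * (u - cu), slope * (v - cv)), order=1)
    return out / (U * V)

# Sweep the focal plane through the scene after capture to build a focal stack.
views = np.random.rand(9, 9, 128, 128)  # stand-in for real decoded light-field data
focal_stack = [refocus(views, s) for s in np.linspace(-1.5, 1.5, 7)]
```

Sweeping the slope after capture yields a focal stack, which is roughly the experience a refocus slider exposes in software.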

These cameras aren’t even the craziest computational systems out there. Femto-photography, a form of time-of-flight imaging, fires ultrashort laser pulses and times the photons that scatter back in order to reconstruct what lies around a corner. THE CAMERA SEES AROUND CORNERS. The rig is the size of a small child, but these are early days.
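The arithmetic underneath is simple even if the hardware isn’t: the detector timestamps returning photons and converts round-trip time into distance. A hedged back-of-the-envelope sketch (not the MIT rig’s actual code):

```python
# Back-of-the-envelope arithmetic behind time-of-flight imaging (illustrative,
# not the MIT rig's code): distance falls out of the round-trip time of light.
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Light travels out and back, so halve the round-trip path."""
    return C * t_seconds / 2.0

# A detector with ~1 picosecond timing resolution can separate depths of
# roughly 0.15 mm, which is why reconstructing hidden geometry is plausible.
print(distance_from_round_trip(1e-12))  # ~0.00015 m, i.e. 0.15 mm
```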

Ramesh Raskar, one of the scientists leading the charge on computational photography, describes all sorts of interesting systems in his CIC18 conference talk. Take “Agile Spectrum Imaging,” a system in which the camera can control which parts of the light spectrum reach the sensor, shifting its color response on the fly. Modern cameras rely on fixed color filter arrays, like the Bayer pattern sitting atop most CCD and CMOS sensors, which are extremely limited in comparison to what Ramesh describes. Ankit Mohan, the genius behind this concept, now works at Canon.
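For contrast, here is a minimal sketch of the fixed Bayer filter mentioned above; the RGGB layout, helper name, and toy image are illustrative rather than any vendor’s implementation. Each photosite records a single color band chosen at manufacture, which is exactly the rigidity Agile Spectrum Imaging is meant to remove.

```python
# Minimal sketch of a fixed RGGB Bayer color filter array: each photosite
# keeps exactly one of R, G, or B and throws the rest away. The helper and
# the toy image are illustrative, not any camera vendor's implementation.
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate an RGGB Bayer sensor by keeping one channel per pixel."""
    H, W, _ = rgb.shape
    mosaic = np.zeros((H, W), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

image = np.random.rand(4, 4, 3)
print(bayer_mosaic(image))  # two-thirds of the color information is simply gone
```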

Post is Life

Photography & Animation wrapped in one

To someone who came up on the old way of doing things, this sounds like a nightmare. I try to spend as little time in post as possible. I build my presets and actions, and when it comes time to import my images I make selects and apply the looks I like. Computational photography systems want me stuck at my desk like a film editor, trapped in proprietary software that is painfully inferior to tools like Capture One, just to finish my images. In an ideal world these adjustments would be built into the cameras themselves. The L16 claims to let users make quick focus and field-of-view adjustments right on its 5" touchscreen, which is awesome. Because these cameras are more light-sculpting supercomputers than they are cameras, software is going to be the thing that makes or breaks this movement. The companies that nail it early will be the Canon and Nikon of the new wave. I don’t know what that means for Sony, since they can’t even get menus right. (I’m sorry, I couldn’t help myself.)

Designed for Screens

Our current crop of images is designed to shine on paper: we love seeing our work on magazine covers or printed on large canvases inside galleries. It’s what we’ve known our whole lives. The world, however, is shifting from paper to screens, and our images feel incomplete in their new homes. We know that tablets and touchscreens are capable of a new level of interaction, but our images don’t make use of that potential. Computational cameras change all of that. These are the first images that feel designed to make the most of modern viewing platforms. They are dynamic and surprising. They react to the viewer’s touch. They entice viewers to be more curious. The images are active, which makes computational photography’s relationship to standard photography almost akin to VR’s relationship to 2D video.

Here come the Naysayers…

“No one respects the craft, everything is ‘fix it in post!’ Photography is dead!”

What are you guys so afraid of? Competition from a crowded playing field? If the only thing keeping you at the top of the heap is exclusivity, you don’t deserve to be there. Digital didn’t ruin the art, and neither will computational camera systems. They will change aspects of it and open it up to more would-be artists. Working professionals often worry about undercutting, and that is a real issue, but most of it stems from veterans’ unwillingness to share information with newcomers. Everyone is so secretive about their deals. No one explains buyout fees and usage rights to newcomers. No one tells you that there are photographers out here making $50K+ a day for their images and the rights to use them. No one tells you where the ceiling is. You spend your first few years in the industry wandering around a room blindfolded while everyone sitting at the table shouts at you for being there. Sharing is caring, guys. Let’s all band together and leap toward the future. The art form would be better off for it.

Sources & Further Reading:

The Light L16

Columbia University CAVE Laboratory: http://www1.cs.columbia.edu/CAVE/projects/what_is/

Camera Corner + MIT Media Lab: http://web.mit.edu/~velten/www/corner/

Ramesh Raskar CIC18 talk: https://www.youtube.com/watch?v=ml4l_Xxa64s

Stu Maschwitz: http://prolost.com/blog/lightvisit

I’ve also assembled a Dropbox folder with journals and audio lectures from the top universities in the world on the topic of Computational Imaging: https://www.dropbox.com/sh/na8bfqof6grup2h/AACmKlNOPtxHh1ElUVBQHoDJa?dl=0

Roman M France

Professional Video Game Guy currently building teams at Ripple Effect, a Battlefield studio.