iPhone 7 Plus: Software is Eating the (Camera) World

Côme Courteault · Published in fStops · 8 min read · Sep 13, 2016

Unless you've spent the last few days in a dark cave without television or an Internet connection, you've probably heard of the iPhone 7. While most of the attention has been directed towards the removal of the 3.5mm jack, the true evolution might very well be the camera, especially the dual-lens camera of the iPhone 7 Plus.

Yet a phone with two rear cameras is nothing new. The first was the HTC One M8, which featured a 4MP main sensor and a secondary, low-megapixel sensor used to calculate depth of field (more on that later). More recently, the LG G5 also featured a dual lens system, followed by the Huawei P9. But Apple's implementation is, as often, more cleverly thought out.

A dual lens system can be used to do two things: 1) optical zoom and 2) shallow depth-of-field. Allow me to dig into these two points:

Optical zoom: on the importance of focal length

Like the LG G5, the iPhone 7 Plus lets you zoom into the picture optically rather than digitally. This means that, unlike with a traditional smartphone zoom, you won't lose definition: the picture stays crisp and beautiful, and it's still a 12-megapixel image when you zoom in, instead of being cropped down to a fraction of that (a 2x digital zoom keeps only about a quarter of the pixels, roughly 3MP). Unlike on the LG G5, though, the choice of the two lenses is much, much more interesting.
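A quick back-of-the-envelope sketch of that crop math, if you want to check it yourself (the only inputs are the 12MP sensor resolution and a 2x zoom factor):

```python
# Digital zoom is just a crop: a 2x zoom keeps half the width and half
# the height of the frame, i.e. a quarter of the pixels.

sensor_megapixels = 12.0
zoom_factor = 2.0

remaining_megapixels = sensor_megapixels / zoom_factor**2
print(f"{zoom_factor}x digital zoom keeps {remaining_megapixels:.0f} MP "
      f"of the original {sensor_megapixels:.0f} MP")
# -> 2.0x digital zoom keeps 3 MP of the original 12 MP
```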

LG dual lens system, courtesy of DPreview

LG chose a very wide angle (135°, something like a 10mm lens in full-frame equivalence) and a wide angle (78°, like a 28mm in full-frame terms). It produces some nice marketing material like “broaden your vision”, but it's pretty useless in everyday life, unless having your feet in the frame every time you take a picture is a matter of life and death.

Apple’s dual lens system, courtesy of Apple and the screenshot app of my Mac

Apple, on the contrary, elected to stick with a traditional 78° (28mm eq.) lens, paired with a 45° (56mm eq.) lens. Unlike the LG solution, it lets you zoom farther into the image without any quality loss, something that is far more useful for about 99% of the human beings populating this planet.
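If you want to verify those equivalences yourself, converting a diagonal angle of view into a full-frame focal length is simple trigonometry. The sketch below only assumes the standard 36 x 24mm full-frame sensor (a 43.3mm diagonal); the angles are the ones quoted above, and the marketing numbers are rounded a bit:

```python
import math

# A full-frame sensor is 36 x 24mm, i.e. a ~43.3mm diagonal.
FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)

def equivalent_focal_length(angle_of_view_deg: float) -> float:
    """Full-frame focal length that yields the given diagonal angle of view."""
    half_angle = math.radians(angle_of_view_deg) / 2.0
    return (FULL_FRAME_DIAGONAL_MM / 2.0) / math.tan(half_angle)

for angle in (135.0, 78.0, 45.0):
    print(f"{angle:>5.0f}° ≈ {equivalent_focal_length(angle):.0f}mm full-frame equivalent")
# -> 135° ≈ 9mm, 78° ≈ 27mm, 45° ≈ 52mm, close to the 10mm / 28mm / 56mm figures above
```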

The only problem with Apple's 56mm is the aperture. The aperture defines the quantity of light that goes through the lens and is expressed in f-stops: the smaller the number, the brighter the lens. Apple's lens drops from f/1.8 (at the widest setting) to f/2.8 (roughly 2.4 times darker). According to TechCrunch, the iPhone mixes the output of the brighter lens into the darker one when using the telephoto setting, which should compensate for the darker aperture.
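Since this publication is called fStops, a quick sanity check on that figure: the light passing through a lens scales with the inverse square of the f-number, so the ratio between the two apertures quoted above works out like this:

```python
# Brightness ratio between two apertures: (N2 / N1) ** 2

wide_aperture = 1.8   # f/1.8, the wide-angle lens
tele_aperture = 2.8   # f/2.8, the telephoto lens

ratio = (tele_aperture / wide_aperture) ** 2
print(f"f/{tele_aperture} gathers {ratio:.1f}x less light than f/{wide_aperture}")
# -> roughly 2.4x less light, i.e. about 1.3 stops darker
```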

At any rate, this longer lens is a welcome addition to the iPhone camera, and the iPhone is currently the only phone on the market to offer it (by the way, the choice of the two focal lengths is not entirely an Apple invention: for years, photographers have been walking around with a 28mm camera on one shoulder and a 50mm on the other).

Shallow depth-of-field: a software story

The other thing permitted by a two-lens system is shallow depth-of-field. This is what gives images that “pro look”, with a beautifully blurred background / foreground while the subject stays in focus:

A picture taken in Paris with the heavy Voigtlander 25mm f/0.95

This effect is usually hard to achieve because it requires a wide aperture, a relatively long focal length and a reasonably close subject. Basically, cameras come with different sensor sizes: from an iPhone's 4.8 x 3.6mm to a full-frame DSLR's 36 x 24mm. The larger the sensor, the better the image quality (the difference is particularly noticeable when it's dark) and the longer the focal length for a given angle of view. Hence, because shallow depth-of-field requires a long focal length, it has long been reserved for the large sensors of DSLRs and hybrids. If you want the full explanation and want to deprive yourself of sleep for the next 24 hours, read up on sensor equivalence.
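To give a sense of the numbers involved, here is a minimal sketch of the “crop factor” argument. It uses only the sensor dimensions quoted above and the iPhone's f/1.8 aperture; treating depth of field as scaling with the crop factor is the usual rule of thumb, not an exact law:

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Sensor diagonal in millimetres."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal(36.0, 24.0)   # ~43.3mm
iphone = diagonal(4.8, 3.6)         # ~6.0mm

# Crop factor = ratio of sensor diagonals.
crop_factor = full_frame / iphone
print(f"Crop factor: ~{crop_factor:.1f}x")

# Rule of thumb: for the same framing, depth of field at f/1.8 on the tiny
# iPhone sensor looks roughly like f/1.8 * crop_factor on full frame.
equivalent_aperture = 1.8 * crop_factor
print(f"f/1.8 on the iPhone ≈ f/{equivalent_aperture:.0f} on full frame, depth-of-field-wise")
# -> a ~7.2x crop factor, so f/1.8 behaves like roughly f/13 for background blur
```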

But that was until now, because Apple says it can do the same with a small, iPhone-class sensor. To be fair, they're not the first ones to say so. HTC said so when they presented the One M8, and Huawei said so when they introduced the P9. All these solutions rely on the same principle: because you have two lenses separated by a few millimetres, they act essentially like a pair of human eyes and can “see” in 3D (if you want the full detail, Wikipedia can help once again).

Because the phone can “see” in 3D, it can produce what Apple pompously calls a “depth map”: a map of how far each part of the image sits from the camera. Therefore, you can choose to put one part of the image in focus while knowing how much blur to apply to the other parts (the farther something is from the focus point, the blurrier it gets).
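To make the idea concrete, here is a very rough sketch of what a depth-map-driven blur could look like in code. Everything in it is hypothetical (the depth map, the focus distance, the blur scaling); it only illustrates the principle “the farther from the focus plane, the blurrier”, not Apple's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_blur(image: np.ndarray, depth_map: np.ndarray,
                       focus_depth: float, max_sigma: float = 8.0) -> np.ndarray:
    """Blur each pixel more the farther its depth is from the in-focus plane.

    image:      H x W x 3 array
    depth_map:  H x W array, same units as focus_depth (e.g. metres)
    """
    # Distance of each pixel from the plane we want sharp (0 = in focus),
    # normalised to the 0..1 range.
    distance = np.abs(depth_map - focus_depth)
    distance = distance / (distance.max() + 1e-6)

    # Pre-compute a handful of increasingly blurred copies of the image,
    # then pick, per pixel, the copy whose blur matches its depth distance.
    levels = 5
    sigmas = np.linspace(0.0, max_sigma, levels)
    blurred = [image] + [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas[1:]]
    index = np.clip((distance * (levels - 1)).astype(int), 0, levels - 1)

    out = np.empty_like(image)
    for i in range(levels):
        mask = index == i
        out[mask] = blurred[i][mask]
    return out
```

A real implementation would obviously be far more subtle (progressive blur, nicer highlight rendering, edge handling around the subject), which is exactly where the quality differences discussed below come from.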

The quality of the background blur is called bokeh (pronounced boka by Sir Phil Schiller himself, so I assume this is how we say it). Anyway, that depth map should play a big role in this bokeh thing. While traditional software-generated shallow depth-of-field effects result in a rather flat, artificial blur (because you don’t have these different levels of depth), the dual lens + software implementation should produce much more natural results.

But even if we take the most recent iteration of this technology, namely the Huawei P9, it doesn't look great. Blurred areas seem artificial. See the example below: the white highlights “pop out” and don't have any detail, there is no gradual transition in the background blur, the bright spots are simply computer-blurred (you don't see the individual dots anymore), and the list could go on.

Huawei P9 on the left and Leica Q on the right. Image courtesy of DigitalRev.

Of course, to the untrained eye it doesn't matter, but any photographer can tell you it does make a difference and, in the end, the human eye can sense that something is wrong. Can Apple change that? Well, we only have Apple's samples to judge from, but they certainly look great. Nevertheless, it's hard to reach a conclusion since they have obviously been taken in the best conditions you can possibly imagine (what, you don't always carry a big studio flash around?).

Apple’s bokeh sample

Here, the white spots render much better, and it's possible that Apple built a much better engine to harness all the information contained in the “depth map”. What's more, we're given to understand that Apple will use machine learning to make the bokeh more pleasing. Hence, it's entirely possible the software will adapt to the specifics of the background to make it as pleasing as possible. If that's the case, the iPhone 7 Plus might be the first smartphone to achieve a truly beautiful bokeh.

Is the iPhone 7 Plus the best camera phone?

Probably. With a dual lens system, a fast f/1.8 aperture and a tweaked image processor, chances are it will be one of the best camera phones out there. Of course, there are still some oddities like the Panasonic CM1 (which features a sensor almost 6.5x the size of the iPhone's), but they're big and not really smartphones (more like cameras with phone functionality).

Is the iPhone 7 Plus the best point and shoot camera?

You should definitely read this Reddit thread to learn more about the iPhone 7’s sensor(s)!

This is a much more debatable question. If we compare the new iPhone with entry-level compact cameras, it has almost all it takes to beat them. The sensor is only a tad smaller (around 17.3 sq. mm vs. 28.5 sq. mm). That matters because the smaller the sensor, the noisier (i.e. lower quality) the image gets in low light. At these sizes the difference is nearly negligible, but let's say the best compacts will be somewhere around 1.5x better in low light.

Yet this is vastly compensated by the iPhone's much brighter lens (remember, it's f/1.8–2.8 whereas most compacts are f/3.3–5.6, which means Apple's lens is roughly two stops, i.e. three to four times, brighter). The only advantage left to the compact camera might be that its optical zoom can reach much farther. The iPhone can zoom farther too, but at the expense of a quality loss (it basically crops the image).
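A crude back-of-the-envelope comparison of the total light each combination gathers, using the sensor areas quoted above and f/3.3 as a stand-in for a typical entry-level compact at its widest (real cameras will of course differ):

```python
# Very rough proxy for low-light ability: total light reaching the sensor
# scales with sensor area divided by the square of the f-number.

def light_score(sensor_area_mm2: float, f_number: float) -> float:
    return sensor_area_mm2 / f_number**2

iphone = light_score(17.3, 1.8)    # iPhone 7 Plus wide lens
compact = light_score(28.5, 3.3)   # typical entry-level compact, widest setting

print(f"iPhone: {iphone:.1f}  vs  compact: {compact:.1f}  "
      f"(~{iphone / compact:.1f}x advantage for the iPhone)")
# -> roughly a 2x advantage for the iPhone, despite the smaller sensor
```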

What's more, with RAW capability and a powerful image processor (Apple clearly insisted on the image chip during the keynote, and previous generations already made impressive use of their modest lens/sensor combinations), its processing could easily be better than any point-and-shoot's.

Is the iPhone 7 Plus a possible replacement for a DSLR/Hybrid/Large-sensor compact?

Here's a more complicated question. Apple obviously thinks the answer is yes. To be fair, the ability to generate a decent background blur is indeed a great step forward, provided Apple has better algorithms than the competition. It may seem like a detail, but remember that the way we share photographs has changed. On Instagram, an image's resolution (level of detail) is far less important than color and other distinctive features like background blur.

In this light, the iPhone 7 Plus is a tremendous advance. For many people, it will be perfectly able to replace a DSLR or other large-sensor machine. If you look at the samples published so far, they all look rather promising. Essentially, the limiting factor for about 99.99999% of the population will be the photographer's talent and not his or her equipment ;-)

Software is Eating the (Camera) World

With the iPhone 7 Plus, Apple is clearly taking steps towards a new era in the smartphone camera, and perhaps in the camera industry altogether. It shows that, more than hardware alone, what makes great pictures is increasingly good software.

Marc Andreessen said it a few years ago: software is eating the world. For years, the photography world thought this wouldn't apply to cameras, essentially because it was impossible to bypass the laws of physics. What the iPhone 7 could be, if not a direct replacement for traditional cameras, is the start of a new era where software can eat the camera world too.
