Stress-testing iPhone 7 Plus’ Portrait Mode
As soon as the 7 Plus was announced with its dual lenses and depth effect (Portrait Mode), I knew I was going to have to upgrade from my normal-sized iPhone to the big guy. I love taking photos, and I own a dSLR, but I, like many others, find myself taking all of my photos on my phone. “The best camera is the one you have with you,” etc. etc.
Luckily, a week or so after the 7 Plus was released, I had a trip to Japan planned, which was most definitely a great place to test out what the phone had to offer. There are many more photos where these came from, but I will say I edit all of them using VSCO.
And now that iOS 10.1 is public, I thought I’d showcase a bunch of photos I took using Portrait Mode and point out its pros and cons.
Human (and Humanlike) Subjects
This is what Apple advertises and, sure enough, what it does best. Faces are crisp, backgrounds are blurred: the whole dSLR-like effect. The one thing it can snag on occasionally is hair wisps. That’s pretty understandable; anybody who’s tried to isolate a person in Photoshop or similar knows that hair is the biggest pain in the ass. Portrait Mode struggles with it the same way: it either skips it (so the elements behind the hair stay sharp too, no matter the depth) or overdoes it (and blurs the wisps altogether). Sometimes it works perfectly, though:
Non-human Subjects
It’s more hit than miss: I’m continually impressed by how the phone decides what to focus on and what not to in Portrait Mode.
Handling Different Depths
I’m sure there’s a real word for what I’m calling “stepping” (where different depths of focus are clearly defined, with more or less bokeh depending on how far each part of the photo is from the focal plane), but the 7 Plus deals with this surprisingly well, as the photos below show.
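I have no idea how Apple actually implements this, but as a rough mental model of what I mean by “stepping,” here’s a toy sketch in Python. Everything in it is my own assumption: the NumPy/OpenCV approach, the fake_portrait name, and the idea that you already have a per-pixel depth map to work from. It just blurs a photo in discrete bands based on distance from the focal plane.

```python
import numpy as np
import cv2

def fake_portrait(image, depth, focal_depth, max_radius=15, n_steps=4):
    """Stepped synthetic bokeh: pixels are binned by how far their depth
    is from the focal plane, and each bin gets a progressively stronger blur."""
    # Normalized distance from the focal plane; 0 = perfectly in focus
    dist = np.abs(depth.astype(np.float32) - focal_depth)
    dist /= max(dist.max(), 1e-6)

    # Quantize into discrete "steps" (bin 0 stays sharp, the last bin is blurriest)
    steps = np.minimum((dist * n_steps).astype(int), n_steps - 1)

    out = image.copy()
    for s in range(1, n_steps):
        radius = int(round(max_radius * s / (n_steps - 1)))
        k = 2 * radius + 1  # Gaussian kernel size must be odd
        blurred = cv2.GaussianBlur(image, (k, k), 0)
        out[steps == s] = blurred[steps == s]
    return out
```

Crude, but it produces the same kind of visibly distinct blur bands.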
Problems
While a lot of it is super good, there are a couple of areas where Portrait Mode just ain’t gonna cut it. Complicated foreground and background combinations seemingly overwhelm it, and the blur edges get confused throughout the photo.
Portrait Mode doesn’t even begin to work successfully with translucent or shiny objects. That’s pretty understandable considering the hardware, though: attempting to detect depth on objects like these is probably difficult. I’m not sure Apple will ever be able to get over this one unless its machine learning becomes near-perfect.
In Conclusion
Portrait Mode is definitely not perfect! Narrow subjects in the foreground (think straws, tips of dogs’ ears, and individual hairs) are Portrait Mode’s greatest difficulty. Here’s my guess as to why this is:
First off, I’m assuming the iPhone detects depth using stereo photo analysis: the two lenses capture slightly offset views of the scene, and the difference between those views tells the phone how far away things are. Not unlike how VR works, but in reverse, or how Pathfinder’s cameras work.
With that in mind, thin objects near the camera have the same effect as bringing a finger very close to your eyes: you see double and can’t really make it out.
Just a poorly-educated guess.
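For what it’s worth, here’s roughly what that stereo idea looks like in code. This is a toy sketch using OpenCV’s block matcher, not anything Apple has confirmed; the file names and parameters are placeholders.

```python
import cv2
import numpy as np

# Two photos of the same scene from slightly offset positions, like the
# 7 Plus's two lenses (file names are placeholders).
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching: for each patch in the left image, find how far it shifted
# in the right image. A big shift (disparity) means the object is close.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d, where f is
# the focal length and B is the distance between the lenses. Very close,
# very thin objects shift so much between the two views that the matcher
# can't lock onto them (the finger-in-front-of-your-eyes problem), so their
# depth comes out wrong or missing.
```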
But, on the flip side, the effect is applied quickly and seems to focus on exactly the plane I want. Plus, its “stepping” between objects in the foreground and background is really impressive.
People have asked me what camera I used to take these photos, which I’d call a good sign. I like how they look, and this is a camera I can keep in my pocket all day as I walk all over a beautiful country. That’s a win for me.
(See a bunch of other photos I took using the iPhone 7 Plus on Exposure)
Note: these were taken using the iOS 10.1 beta’s version of Portrait Mode. I haven’t messed around with the public version yet (but the feature is still considered a beta).
You can also follow me on Twitter.