Remastering TinType

Perfecting the artisanal selfie

An authentic TinType has extremely shallow depth of field and a very specific plane of focus on the eyes.

Ever since Apple introduced cameras that supported depth capture (way back with the iPhone 7 Plus in 2016), I’ve been thinking about how we could use the technology at Hipstamatic to push our analog simulations even further. When iPhone X came out last year with the TrueDepth camera and support for taking Portrait Mode selfies, ideas began to crystallize around what we could do in the world of self portraiture. Of course TinType came to mind, since it’s one of the effects we’ve been proudest of over the years. A big part of the TinType magic comes from extremely shallow depth of field, so new technology that lets us access the depth data in an image seemed like a perfect fit for some experimentation.

Into the depths

The haunting beauty of tintypes, daguerreotypes, ambrotypes, and the many other very early photographic techniques that are the inspiration for our TinType app is worth spending a little time dissecting.

“Erika” Ambrotype by Quinn Jacobson. May 2007, Viernheim, Germany.

Looking at this ambrotype, the first thing you’ll notice is the very narrow depth of field (the range of the image that is in focus). Focus falls along a two-dimensional plane parallel to the lens capturing the image. You can actually see this plane in the image above: the woman’s eyes and face are in focus, but her ears, just a short distance behind, have already dropped out of focus. Further down in the image you’ll notice that the bottle and her hands are also in focus, because they sit at the same distance from the lens as her eyes. That is the 2D plane of focus.
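
In other words, whether something renders sharp depends only on its distance from the lens, not on where it sits in the frame. A tiny (purely hypothetical) helper makes that concrete:

```swift
/// A point is "on the plane of focus" when its distance from the lens matches the
/// chosen focus distance, within some tolerance — regardless of its x/y position.
/// (The names and tolerance here are illustrative, not anything from our code.)
func isOnPlaneOfFocus(depthMeters: Float,
                      focusDistanceMeters: Float,
                      toleranceMeters: Float = 0.05) -> Bool {
    return abs(depthMeters - focusDistanceMeters) < toleranceMeters
}
```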

With TinType v1.0 we had to simulate this as best we could. We used what I affectionately referred to as a “donut blur”: face detection found the eyes in a photo, and we built a mask shaped like a donut around them. This imaginary donut wrapped around the subject’s nose, which meant the eyes and mouth stayed in focus while the tip of the nose and the ears dropped out. The effect was quite dramatic, but it was an imperfect simulation: a donut isn’t ideal for every face shape, and it can’t keep other objects (hands, props, chest, etc.) at the same focal distance in focus.
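
For the curious, here’s a rough sketch of how that kind of donut-style blur can be put together with Core Image’s built-in face detection, a ring-shaped mask, and a masked blur. This isn’t our original code — the function name, radii, and filter choices are illustrative assumptions, not the shipping implementation:

```swift
import CoreGraphics
import CoreImage

/// Rough sketch of the 2012-era "donut blur" idea: find the face, keep a ring
/// around the eyes and mouth sharp, and blur the nose tip and everything outside.
func donutBlur(_ image: CIImage, maxBlurRadius: Double = 12) -> CIImage? {
    // 1. Locate the face landmarks with Core Image's built-in face detector.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let face = detector?.features(in: image).first as? CIFaceFeature,
          face.hasLeftEyePosition, face.hasRightEyePosition, face.hasMouthPosition
    else { return nil }

    // 2. Center the donut roughly over the nose (midway between the eye line and
    //    the mouth) and size it from the eye spacing. The ratios are rough guesses.
    let eyeMid = CGPoint(x: (face.leftEyePosition.x + face.rightEyePosition.x) / 2,
                         y: (face.leftEyePosition.y + face.rightEyePosition.y) / 2)
    let center = CIVector(x: (eyeMid.x + face.mouthPosition.x) / 2,
                          y: (eyeMid.y + face.mouthPosition.y) / 2)
    let dx = face.rightEyePosition.x - face.leftEyePosition.x
    let dy = face.rightEyePosition.y - face.leftEyePosition.y
    let eyeSpan = (dx * dx + dy * dy).squareRoot()
    let innerRadius = eyeSpan * 0.35   // the hole that lets the nose tip go soft
    let outerRadius = eyeSpan * 1.3    // the ring that keeps eyes and mouth sharp

    // 3. Build the mask: white where we want blur (inside the hole and outside
    //    the ring), black on the ring itself.
    func radialMask(_ r0: CGFloat, _ r1: CGFloat, inner: CIColor, outer: CIColor) -> CIImage {
        CIFilter(name: "CIRadialGradient", parameters: [
            "inputCenter": center,
            "inputRadius0": r0,
            "inputRadius1": r1,
            "inputColor0": inner,
            "inputColor1": outer
        ])!.outputImage!.cropped(to: image.extent)
    }
    let outsideWhite = radialMask(outerRadius, outerRadius + 40, inner: .black, outer: .white)
    let holeWhite = radialMask(innerRadius, innerRadius + 40, inner: .white, outer: .black)
    let mask = outsideWhite.applyingFilter("CIMaximumCompositing",
                                           parameters: [kCIInputBackgroundImageKey: holeWhite])

    // 4. Blur is strongest where the mask is white and zero where it is black.
    return image.applyingFilter("CIMaskedVariableBlur", parameters: [
        "inputMask": mask,
        kCIInputRadiusKey: maxBlurRadius
    ])
}
```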

Donut blur magic circa 2012, using face and eye detection to blur everything but the eyes and mouth.

This is what we shipped in 2012. I think we faked it in a pretty clever way given the technological limitations of the time. But now, six years later, we can take TinType to the next level with an even more authentic simulation using real depth data and an actual plane of focus.

Real depth of field using the TrueDepth camera on iPhone X. Everything outside the 2D plane of focus is blurred.

This is a photo from the latest TinType update. The difference is subtle but striking: the eyes and the full face are in focus, while the nose, the brim of the hat, and the background are completely out of focus. This is done using depth data from the new cameras, which tells us how far away from the lens every pixel in the image is.

(This technology also lets you change the focal point after capture, a new feature available when editing photos in the update.)
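
Here’s a minimal sketch of how per-pixel depth can drive that kind of focal-plane blur, assuming Core Image and AVDepthData. The function name, parameter values, and filter choices are illustrative assumptions rather than the actual TinType pipeline:

```swift
import AVFoundation
import CoreGraphics
import CoreImage
import CoreVideo

/// Sketch of a depth-driven focal plane: each pixel is blurred in proportion to
/// how far its depth is from a chosen focal distance, so everything sitting on
/// that 2D plane stays sharp.
func focalPlaneBlur(image: CIImage,
                    depthData: AVDepthData,
                    focalDistance: Float,     // meters from the lens to keep sharp
                    falloff: Float = 2.0,     // 1/meters; 2.0 = full blur 0.5 m off the plane
                    maxBlurRadius: Double = 20) -> CIImage {
    // 1. Get the depth map as a CIImage in meters (TrueDepth captures may arrive
    //    as disparity, so convert to a depth format first).
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    var depthMap = CIImage(cvPixelBuffer: depth.depthDataMap)

    // The depth map is much smaller than the photo; scale it up to match.
    let scaleX = image.extent.width / depthMap.extent.width
    let scaleY = image.extent.height / depthMap.extent.height
    depthMap = depthMap.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // 2. Build a mask whose gray value is falloff * |depth - focalDistance|.
    //    abs(x) is computed as max(x, -x) via two color matrices + max compositing.
    func linearMap(scale: Float, bias: Float) -> CIImage {
        let v = CIVector(x: CGFloat(scale), y: 0, z: 0, w: 0)
        return depthMap.applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": v, "inputGVector": v, "inputBVector": v,
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 0),
            "inputBiasVector": CIVector(x: CGFloat(bias), y: CGFloat(bias),
                                        z: CGFloat(bias), w: 1)
        ])
    }
    let ahead = linearMap(scale: falloff, bias: -falloff * focalDistance)    // s*(d - f)
    let behind = linearMap(scale: -falloff, bias: falloff * focalDistance)   // s*(f - d)
    let mask = ahead
        .applyingFilter("CIMaximumCompositing",
                        parameters: [kCIInputBackgroundImageKey: behind])
        .applyingFilter("CIColorClamp", parameters: [
            "inputMinComponents": CIVector(x: 0, y: 0, z: 0, w: 0),
            "inputMaxComponents": CIVector(x: 1, y: 1, z: 1, w: 1)
        ])

    // 3. Blur radius scales with the mask: zero on the focal plane, maxBlurRadius far off it.
    return image.applyingFilter("CIMaskedVariableBlur", parameters: [
        "inputMask": mask,
        kCIInputRadiusKey: maxBlurRadius
    ])
}
```

Because the whole effect keys off a single focalDistance value, re-rendering with a different value is all it takes to move the plane of focus after the shot.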


It might not be clear at first glance what sets these new TinTypes apart, but the subtle details and enhancements that depth capture makes possible add up to a big difference once all the pieces come together. It’s really fascinating and rewarding to think of ways to use new technology to make more authentically old things. Give the new TinType a shot and let me know what you think!

A few TinTypes of the makers. Clockwise from the upper left: Aravind, Mario, Lucas, and Ryan. Yes, we are actually quite cheery folks in real life. 😉

TinType v2.1 is available now on the App Store.