Large Format Digital Photography Through My “DigiTiler”

Sony A7R on DigiTiler with Nikkor 210mm, at around f/11 I believe

The above 761 megapixel photo was created by stitching together 40 images, captured by a Sony A7R through a large-format lens, the Nikkor 210mm f/5.6. My friend Simon and I had just gotten back from dinner and the light was fading fast, so I took this shot literally outside his door, because it was as far as I wanted to carry the camera.

Each image “tile” is 36 megapixels. The tiles overlap by around 30% to enable software stitching. Here they are as they appear in Microsoft Image Composite Editor, before stitching.
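If you’re curious how 40 tiles of 36 megapixels end up in the neighborhood of 761 megapixels, here is a rough back-of-envelope sketch. The overlap math is my own simplification, not the stitcher’s actual algorithm:

```python
# Back-of-envelope check (my own arithmetic, not the stitching software's):
tiles = 40
tile_mp = 7360 * 4912 / 1e6      # Sony A7R: ~36.2 MP per tile
overlap = 0.30                   # ~30% overlap on each axis for stitching

raw_mp = tiles * tile_mp                   # total pixels captured
interior = raw_mp * (1 - overlap) ** 2     # lower bound: a tile overlapped on all sides
print(f"captured: {raw_mp:.0f} MP, unique coverage: at least ~{interior:.0f} MP")
# captured: 1446 MP, unique coverage: at least ~709 MP
# Edge tiles overlap on fewer sides, so the finished stitch (761 MP) lands a bit higher.
```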

I call these cameras, with my large-format pseudo digital backs, “DigiTilers”. The above photo was taken with this one:

DigiTiler Front

I have many versions, from manual to robotic.

Old version of DigiTiler back. It doesn’t look like that anymore, but I like this photo. Same principle of “tiling” a set of images from a 4x5 image plane

In this essay, I’m going to explain the technical benefits of large format photography, both with film and with my pseudo digital back. I say pseudo because I can’t capture the whole image at once, as film does. (There are no commercially available large format digital backs, though LargeSense is trying.)

The drawbacks to my approach are many. It takes around 30 to 40 seconds for me to take, say, 40 images. People move. Tree branches move. I can only shoot people if they (successfully) sit still for half a minute. I can only shoot things outdoors if the amount of sunlight is constant, with no clouds moving over the sun to block the light during, say, the seventh “tile” capture.

And the camera is large, bulky, bedeviled by (my) poor design and/or craftsmanship.

Here is one of my first efforts, to see if the general idea would work, taken in May of 2016.

Emma, my ever-patient daughter, sitting for 45 seconds

After more than two years, I’ve decided to build DigiTilers for anyone who wants to pay me to do so. I have no expectations. For anyone with an interest in this approach to large format photography, I’m writing this essay to explain why I spend so much time on this time-consuming and headache-rich endeavor.

Subjectively, I believe most photographers can see that special three-dimensional pop often achieved with large format cameras. However, there are some technical, objective explanations which should help anyone with doubts.

I have a Sony A7R, which, at 36 megapixels, is one of the highest resolution full-frame cameras one can buy. The Canon 5DS R captures 50 megapixels. I’m going to stick with the A7R because that’s the camera I have. Used with my DigiTiler, it produces images of 600+ megapixels.

My 4K monitor, which has plenty of resolution, has 3840 x 2160 pixels. Actually, it has three times that amount, or 11,520 x 2,160 little dots of light. There is a red, green and blue light diode for each “visible” pixel. That is, at the top-left, first pixel on the screen, there are three small dots: red, green and blue. My eye will combine them into a full color.

My TV, which I’ve shot with both A7R/55mm and A7R on DigiTiler with Nikkor 210/f5.6 large format lens
Close-up of my TV’s pixels taken with A7R on DigiTiler
As close up as one can get to the pixels when shot with the A7R and 55mm Zeiss.

A camera works in a similar way. Each pixel records a red, green or blue value, and software combines neighboring values into full colors. The A7R has 7360 x 4912 pixels. As you can see, that’s in the same ballpark as all the dots shown on my monitor.
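If you want to check the arithmetic, here is a quick sketch (all figures are from the text above):

```python
# Sanity check of the numbers above.
monitor_px = 3840 * 2160           # visible 4K pixels
monitor_subpx = monitor_px * 3     # one red, one green, one blue diode per pixel
a7r_px = 7360 * 4912               # Sony A7R photosites

print(f"monitor pixels:    {monitor_px:,}")     # 8,294,400
print(f"monitor subpixels: {monitor_subpx:,}")  # 24,883,200
print(f"A7R photosites:    {a7r_px:,}")         # 36,152,320
```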

There is an issue with digital cameras, however. Only Sigma’s Foveon sensors can capture a full-color value at a single pixel location. Unfortunately, such sensors need a lot of light to function well. Therefore, most sensors have a color filter array: a single color filter over each pixel.

At the top left of my camera’s sensor, the first pixel is blue. To create a full color it must borrow red and green values from the pixels next to it. This is not theoretical: 66% of each pixel’s value, as one views or works with it in a photo editor, is an assumption that the color values next to it are the same. Most of the time they are! Or close enough. This kludge for creating color images works because we’re biologically not very sensitive to small changes in color resolution.

In order to get a completely accurate color at any pixel location, we need to record all three colors at that location. To do that with our camera, we need to combine four pixels’ worth of color (theoretically three, but sensors are made with double green pixels).
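Here is a minimal sketch of that combining step, assuming an RGGB filter layout (my assumption for illustration; cameras vary in which color starts the pattern):

```python
import numpy as np

def bin_rggb(raw):
    """Collapse each 2x2 RGGB block of a Bayer mosaic into one full-color pixel.

    `raw` is a (2H, 2W) array of sensor values. The RGGB layout is assumed
    for illustration; real sensors differ in which color starts the pattern."""
    r = raw[0::2, 0::2]                            # red photosites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two green photosites
    b = raw[1::2, 1::2]                            # blue photosites
    return np.dstack([r, g, b])                    # (H, W, 3): one honest RGB per block
```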

One of the difficulties in understanding this subject is that, like many things in society, the manufacturers want to sell the positives of their product, not the negatives. It will help if you forget everything you think you know about cameras, if you’re already resisting the truth about your camera ;)

The following is what a camera “sees” when it captures a color test chart.

640 x 360 RAW camera pixels

As you can see, your eye blends the colors together so it looks somewhat natural.

Up close, here are the pixels at the center hinge, which I assure you is completely black.

Cool, huh? Why the bright red, green or blue in that hinge? What’s going on?

Let’s imagine there is a bright reflection on the black plastic which is only one pixel wide. Let’s further assume the pixel it hits on the sensor has a red filter over it. Keep in mind each pixel on a camera sensor has a color filter over it: alternating red, green, red, green… on even rows and green, blue, green, blue… on odd rows (see graphic above).

Because the pixel is red, it shows a shade of red, not a spot of white. Why? Because white would need equal measures of green and blue, and THOSE pixels did NOT get exposed to that single strong ray of light!

What you’re seeing in the image above is the incomplete color information at each pixel location creating false color calculations.
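Here is a toy simulation of that scenario (my own construction, purely illustrative):

```python
import numpy as np

# A 4x4 RGGB mosaic over a perfectly black surface, with one strong,
# one-pixel-wide reflection landing on a red-filtered photosite.
raw = np.zeros((4, 4))
raw[0, 2] = 1.0   # (0, 2) is a red site in the assumed RGGB layout

# Label each photosite with its filter color.
cfa = np.empty((4, 4), dtype="U1")
cfa[0::2, 0::2] = "R"; cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"; cfa[1::2, 1::2] = "B"

print(cfa[0, 2], raw[0, 2])   # -> R 1.0
# The sensor recorded a strong *red* value and nothing else: the neighboring
# green and blue sites never saw the ray, so a naive rendering shows a red speck.
```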

This doesn’t matter in practice because, as the first image proves, we don’t see at that resolution, so our eye just blends these colors together.

But if we’re purists, we wouldn’t show color at the sensor pixel level. We would only show a pixel of color where we have full color information. Therefore, we would downsample the image so that every pixel has all three colors fully represented.

Each pixel has an accurate reading of red, green and blue at that location

Here is the above hinge downsampled. False color gone!
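In the toy example, the same logic applies: once a highlight covers a whole 2x2 block, combining the four photosites returns an honest, neutral color:

```python
# A white highlight large enough to cover a whole 2x2 RGGB block.
# Each photosite records the component its filter passes.
site_r, site_g1, site_g2, site_b = 1.0, 1.0, 1.0, 1.0

r = site_r
g = (site_g1 + site_g2) / 2.0
b = site_b
print([r, g, b])   # -> [1.0, 1.0, 1.0]: one honest white pixel, false color gone
```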

However, we lose general, colorless resolution. One could call it black-and-white resolution, or luminance resolution. What you need to understand is that color resolution just isn’t that noticeable from far away.

Again, I notice it. Or rather, I notice the subtler beauty of photographs that have no color distortions.

The takeaway here is that if you want perfect color accuracy at each pixel, you need to downsample any Bayer-sensor camera by 4 to 1: half the pixels in each dimension.

For my A7R, that means I end up with 3,680 x 2,456 pixels of full color information. That’s 9 megapixels.

That’s about the same resolution as my 4K display. But keep in mind, I can’t zoom into an image and see more. Of course, I could if I didn’t downsample. But then there would be color artifacts, as shown above. How much that bothers one is a matter of taste.

The printing of large photographs is where one can more easily notice the Achilles’ heel of Bayer-sensor cameras

Photo quality prints are generally printed at 300 dpi, or 300 pixels per horizontal and vertical inch. Using our calculation above, to get full color pixels out of the A7R, that gives us 3,680/300 ≈ 12 inches horizontal by 2,456/300 ≈ 8 inches vertical.

Please let that sink in!

Essentially, even with today’s most advanced cameras, once you print larger than an 8x10 print you need to double up on pixels. Again, not a problem for most people because people don’t notice more resolution even if you add it. That is, few people look at large prints up close.

But…I notice.

The resolution of the tree above is 27,601 x 24,313 pixels. Even stitched together, each tile is from a Bayer sensor, so we need to downsample to remove all color distortions. Halving each dimension, as we did for the A7R, gives us 27,601/2 ≈ 13,800, by 24,313/2 ≈ 12,156 full-color resolution pixels.

If we print that image we get 13,800/300 = 46, by 12,156/300 ≈ 40 inches.
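If you want to run these print-size numbers yourself, here is a small helper (my own sketch; the 300 dpi figure and the pixel counts are from the text):

```python
def full_color_print_inches(px_w, px_h, dpi=300):
    """Print size once each dimension is halved for full-color (2x2-binned) pixels."""
    return (px_w // 2) / dpi, (px_h // 2) / dpi

print(full_color_print_inches(7360, 4912))     # A7R: (~12.3, ~8.2) inches
print(full_color_print_inches(27601, 24313))   # tree stitch: (46.0, ~40.5) inches
```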

Whether one shoots film with a large format camera, or uses another approach as I have, one can get richer tonalities from more captured color.

Other people have built devices similar to the DigiTiler. There is also panorama photography, which can achieve similar color resolution. But I don’t know of any cameras similar to mine that are commercially available.

You can zoom into some of my DigiTiler images, like my TV, at EasyZoom. You can learn more about the DigiTiler at Maxotics.com. I have images on Flickr. There are also images at Digitiler.com, most by Mark Wylie, though that site isn’t currently maintained.