Shooting in 12K (pictured: Ursa Mini Pro 12K Camera, Blackmagic Design)

Why Shoot In 12K? — The Benefits Of High Resolution Imaging In Video

There are filmmakers who consistently shoot at the highest resolution available, even though most displays don’t support it. A display that supports only 1080p Full HD cannot show true 4K video, let alone content shot in 12K. So why would a director shoot with a 12K camera for content that will mostly be viewed on lower resolution displays? There is a fundamental reason for this, and the results are truly stunning.

Note: Native 12K content cannot be displayed 1:1 on lower resolution screens because a 12K frame contains far more pixels than those screens can show. It must therefore be downsampled to the display’s resolution for viewing.

The Benefits Of 12K

The obvious benefits to filmmakers when using a 12K or other high resolution camera (e.g. 4K, 6K, 8K, etc.) are the following:

  • Less visible noise, since downsampling averages sensor noise across pixels.
  • Better low light quality when the high resolution sensor is also physically larger.
  • Image quality looks more realistic and engaging to the audience.
  • Best for large displays greater than 30" and cinema screen sizes.
  • Finer details are captured.
  • Higher bit depth color support.
  • More flexibility in post production editing.
Ursa Mini Pro 12K Camera (Source: Blackmagic Design)

High Quality With Lower Resolution

The reason a director or cinematographer would use a high resolution camera (e.g. a 12K camera) is image quality and flexibility. Editors can do far more with high resolution video than with footage shot at a single native format. It would waste time and money to film a show with different format cameras just to support each display type. Imagine doing three takes of every scene just to produce versions for 720p HD, 1080p FHD and 4K UHD (2160p). Production budgets would skyrocket, and shooting time could stretch from days to weeks. Instead, it is best to use the highest resolution camera available to the production and cover the various formats in post through downscaling.

You can still view a 12K film provided it has been appropriately downsampled for a lower resolution display. Downscaling, or downsampling, lets editors take a high resolution format and sample it down to a lower resolution without losing much quality. A display suited to 12K is best for viewing, but that footage can be downsampled to 8K, 4K or 1080p without degrading much. The downsampled image retains enough information to stay clean and detailed at the lower resolution, without much added noise or degradation.

In downsampling, we reduce the number of pixels from the native 12K format while maintaining much of its quality. 12K resolution is 12288 x 6480, a total of 79,626,240 pixels (about 80 MP). If we downsample to 4K UHD, we reduce that to 3840 x 2160, or 8,294,400 pixels (about 8 MP), nearly a 10x reduction (9.6x to be exact). Linearly, that is roughly 3 source pixels for every output pixel in each dimension: exactly 3:1 vertically and 3.2:1 horizontally, since the 12K frame is slightly wider than UHD. The good thing is that the video still retains its quality, though other post production processes (e.g. compression, color) are involved as well. The pixels are reduced in order to match the resolution of the 4K display.
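The ratios above can be checked with a few lines of arithmetic (the 12288 x 6480 and 3840 x 2160 dimensions are the ones quoted in this section):

```python
# Pixel-count arithmetic for downsampling 12K to UHD 4K.
w12, h12 = 12288, 6480     # 12K frame dimensions
w4, h4 = 3840, 2160        # UHD 4K frame dimensions

pixels_12k = w12 * h12     # 79,626,240 pixels (~80 MP)
pixels_4k = w4 * h4        # 8,294,400 pixels (~8 MP)

total_ratio = pixels_12k / pixels_4k   # 9.6x fewer pixels overall
linear_x = w12 / w4                    # 3.2 source pixels per output pixel horizontally
linear_y = h12 / h4                    # 3.0 source pixels per output pixel vertically

print(pixels_12k, pixels_4k, total_ratio, linear_x, linear_y)
```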

Today, most displays support at minimum 720p HD. The content provider, whether it is YouTube or Netflix, must serve content that the user’s display supports. Suppose you want to stream a video on an HD display. Apps can detect the display type and adjust the content accordingly, in this case lowering the resolution to HD since the display doesn’t support 4K.

Chroma Subsampling

The main reason downsampled formats still look good on low resolution displays has to do with chroma subsampling. This technique exploits the fact that the human visual system has lower acuity for color differences than for luminance: the eye is more sensitive to variations in brightness than to variations in color.

In digital imaging, a pixel’s information is divided into two kinds of components: the luma (Y’) and two color difference components called the chroma (Cb, Cr). Cb is the blue-difference and Cr the red-difference component. Together this information is encoded as Y’CbCr, a color encoding derived from RGB. When subsampling, more information is retained for the luma component.
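As a sketch, the RGB-to-Y’CbCr conversion can be written out directly. The coefficients below are the full-range BT.601 (JPEG/JFIF) variant; HD and UHD video typically use BT.709 or BT.2020 matrices, which differ slightly:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> Y'CbCr (JPEG/JFIF style), 8-bit values 0-255."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # luma: weighted brightness
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr

# A neutral gray has no color difference, so chroma sits at the 128 midpoint.
print(rgb_to_ycbcr(128, 128, 128))
```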

When viewers watch a downsampled video on a low resolution display, they still perceive much of the original quality. The human retina contains far more rod cells (around 120 million), which are sensitive to brightness, than color-sensitive cone cells. So when downsampling at roughly 3:1 per axis (12K to 4K), the conversion can preserve the luma detail from the source pixels while discarding a larger share of the chroma detail.

With chroma subsampling, the chroma values (Cb, Cr) of neighboring pixels are averaged together and some of that information is discarded, while the luma information (Y’) is retained.
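A minimal sketch of that idea, here as 4:2:0-style subsampling that keeps every luma sample but averages each 2x2 block of chroma samples into one (real codecs use filtered variants of this):

```python
def subsample_chroma_420(y_plane, cb_plane, cr_plane):
    """Keep Y' at full resolution; average Cb and Cr over 2x2 blocks (4:2:0 style)."""
    def average_2x2(plane):
        h, w = len(plane), len(plane[0])
        return [[(plane[i][j] + plane[i][j + 1] +
                  plane[i + 1][j] + plane[i + 1][j + 1]) / 4
                 for j in range(0, w, 2)]
                for i in range(0, h, 2)]
    # Luma is untouched; each chroma plane shrinks to half width and half height.
    return y_plane, average_2x2(cb_plane), average_2x2(cr_plane)

y = [[10, 20], [30, 40]]
cb = [[100, 110], [120, 130]]
cr = [[140, 150], [160, 170]]
y_out, cb_out, cr_out = subsample_chroma_420(y, cb, cr)
print(y_out, cb_out, cr_out)  # luma unchanged; chroma planes become [[115.0]] and [[155.0]]
```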

As a result, you can downscale from the highest resolution possible and viewers will hardly notice any change in image quality. The content will still look amazing, though it is best viewed in its native high resolution format.

Post Production Benefits

When it comes to post, editors have plenty of room to get creative. They have much more data to work with in high resolution formats than in lower resolutions. This is why, when shooting in 4K or higher formats like 12K, the camera’s storage device must be suitable for the video. The media must also be fast enough to handle the data write speeds when capturing footage. CFast and UHS-II cards are ideal for high speed, high resolution recording.

Grabbing still frames from 12K video clips gives editors high resolution photos, just as with 4K and other high resolution cameras. This is a “killing two birds with one stone” kind of deal: when shooting for a client, you can deliver both video and stills from the same production. The director won’t need a separate camera just to shoot stills when they can be pulled from frames of the footage shot on one camera.

When editors downsample, they also gain some apparent sharpness thanks to the higher pixel density of some lower resolution displays, measured in PPI (Pixels Per Inch). The denser the pixels (the closer together they are), the crisper and sharper the image appears. Higher pixel density also makes curved lines look smoother, exhibiting more depth. This gives footage shot at 12K and downsampled to HD or FHD a stunning look, since it retains that level of quality with the added depth. That is what makes the content pop for viewers.
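Pixel density follows directly from a display’s resolution and diagonal size; a quick sketch (the 27-inch and 6-inch sizes are just illustrative numbers):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# The same 1080p frame looks far denser on a small screen.
print(round(ppi(1920, 1080, 27)))  # ~82 PPI on a 27" monitor
print(round(ppi(1920, 1080, 6)))   # ~367 PPI on a 6" phone
```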

When it comes to color grading, you have more data to work with in 12K: with more pixels, our eyes can pick out subtle color and gradient shifts in images. With support for wide color spaces like Rec. 2020, the colorist has a broad gamut of colors to draw from to bring out the scenes in the video. The result is more lifelike, realistic looking images with rich colors, best seen natively on 12K displays but still great when downsampled.

High resolution gives more for editors in post to work with.

Editors who don’t have native 12K displays can still edit 12K footage. This is done with video editing software like Premiere Pro and Final Cut Pro using proxies: the software converts the enormous 12K material into lower resolution versions the editor can work with on, say, a 5K display. When editing is finished, the project is rendered back out from the original 12K material for distribution.
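The proxy’s dimensions are just the source scaled down to a working width, with a small adjustment since most codecs want even dimensions. A sketch, assuming a 1920-wide proxy:

```python
def proxy_size(src_w, src_h, proxy_w=1920):
    """Scale to a proxy width, keep the aspect ratio, force an even height for codecs."""
    proxy_h = round(src_h * proxy_w / src_w)
    return proxy_w, proxy_h - (proxy_h % 2)

print(proxy_size(12288, 6480))  # (1920, 1012): a 1080p-class proxy of a 12K frame
```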

Higher resolution video also allows better zooming, panning and stabilization without introducing much aliasing or artifacting when reframing images. An editor can punch in several times closer and still retain quality. If the director requests a close up on an actor’s face, the editor can frame it easily without worrying much about the staircase effect common in low resolution images. Stabilization also works much better because the editing software has more pixel information to track.
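How far the editor can punch in before the crop drops below the delivery resolution is a simple ratio. A sketch, comparing a 12K source against a 4K UHD deliverable:

```python
def max_punch_in(src_w, src_h, out_w, out_h):
    """Largest zoom factor before the cropped region has fewer pixels than the output."""
    return min(src_w / out_w, src_h / out_h)

# From 12K you can crop in 3x and still deliver true UHD detail.
print(max_punch_in(12288, 6480, 3840, 2160))  # 3.0
```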

There are other examples, but these are the most obvious in post production workflows. Cost is another consideration: the budget must cover not just the higher resolution cameras themselves, but more storage space for data, more skilled editors and colorists, higher insurance costs for equipment, and software upgrades.

Online Compression

There is an interesting article about how online compression can still affect the quality of downsampled content. If you are a content creator filming with a high resolution camera (the article mentions a 4K camera) and you downsample to 1080p, uploading the result to services like YouTube will not preserve its quality. This is due to compression: when the codec compresses the video for YouTube’s content delivery service, sharpness and detail are lost.

That is only if you look at the image extremely close up, such as at the 400x zoom used in the article. Otherwise, average viewers have no way of telling how compression has affected the image quality. Compression narrows the quality gap between a downsampled 4K video and a native 1080p video. Even so, that doesn’t mean you should only use a 1080p camera for YouTube content. There have been complaints about how YouTube reduces quality for every video uploaded to the platform, but this is necessary to conserve bandwidth.

YouTube delivers streaming content in real time, and without compression, bottlenecks would quickly arise. Until YouTube’s codec situation improves, it is still best to shoot high and downsample: there is more room for creativity, and in most cases, even after compression, the content still looks better to most viewers.

At 400x zoom, the image on the left is the original downsampled 4K content. When uploaded to YouTube, the result is much different due to compression. (Source: Premiumbeat)

According to the article:

“Although, given how YouTube is continually developing, perhaps in a year or two this article will become redundant.”

There is always hope for better things to come as the technology develops over time.


If quality (e.g. color, tone, depth, contrast) and flexibility are what the content creator is looking for, shooting at higher resolution with a 12K camera is definitely the best option for imaging. The reason many filmmakers prefer shooting at higher resolution comes down to quality and creativity. Why continue to shoot 1080p FHD when you can do much better with a higher resolution camera? This is not to say that every filmmaker should immediately replace their cameras and go 4K or 12K; there are still costs to consider, since high resolution cameras are more expensive to purchase. But serious filmmakers, commercial or cinematic, probably should, because a 12K (or 4K, or any other high resolution) camera offers many more benefits and advantages than a 1080p camera.


