A fairly well known issue with the Fuji X-Trans sensor is “wormies” — sharpening artifacts that don’t seem to show up with Bayer sensors. I have a theory about why this is the case, but first I need to explain two things — Microcontrast and how the X-Trans differs from Bayer.
Click here to read a more detailed explanation of Sharpness vs Microcontrast (it’s a short read).
The super short (and imprecise) version is this: Sharpness is the ability to discern fine details, to count the number of eyelashes on someone’s face in a photo. But because those eyelashes average out to grey, there isn’t a lot of contrast between the individual pixels.
Microcontrast is the ability for two adjacent pixels to have very different tonal values.
An image can be sharp without microcontrast, and can have microcontrast without sharpness.
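To make that distinction concrete, here’s a toy sketch (my own illustration, not real sensor data): two scanlines with the same number of fine transitions, where only one has strong pixel-to-pixel contrast.

```python
import numpy as np

# Two 1-D "scanlines". Both alternate every pixel (equal "sharpness"
# in the sense of resolving fine detail), but only one swings between
# very different tonal values (microcontrast).
fine_low_contrast = np.array([118, 122, 118, 122, 118, 122], dtype=float)
fine_high_contrast = np.array([40, 200, 40, 200, 40, 200], dtype=float)

def microcontrast(scanline):
    """Mean absolute difference between adjacent pixels."""
    return np.abs(np.diff(scanline)).mean()

print(microcontrast(fine_low_contrast))   # small: detail resolved, little contrast
print(microcontrast(fine_high_contrast))  # large: adjacent pixels differ strongly
```

Both lines are equally "sharp" by the definition above; only the second has microcontrast.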
Color Filter Arrays and the X-Trans Sensor
A sensor is a series of photon counters. At any given pixel location (sometimes called a sensel), the sensor simply adds up all the light that reaches that location.
For a sensor to see color, it must have a Color Filter Array (CFA): literally a colored filter that sits atop each pixel so that any location only counts how much light from one color reaches that location.
If the filter sitting atop one pixel is green, it can only count the green light reaching that pixel. To “see” how much red or blue information it should have, it takes an average from the neighboring red or blue pixels.
The most common type of CFA is Bayer, an arrangement where every other pixel is green, with red and blue pixels interspersed evenly in between. Why so much green? Because the eye is more sensitive to green than to any other color in the visible spectrum.
For this reason, green is often used to assign luminance values, how bright a given section should be. To figure out how colorful (how red or blue) a pixel should be, the sensor looks to neighboring values.
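The Bayer layout is easy to sketch in code. Here’s a minimal model of the standard RGGB tile (the tile orientation is my assumption; cameras vary):

```python
import numpy as np

# The 2x2 Bayer tile: every other pixel is green, with red and blue
# filling the gaps.
BAYER_TILE = np.array([['R', 'G'],
                       ['G', 'B']])

def bayer_cfa(rows, cols):
    """Tile the 2x2 Bayer pattern over a rows x cols sensor."""
    return np.tile(BAYER_TILE, (rows // 2 + 1, cols // 2 + 1))[:rows, :cols]

cfa = bayer_cfa(4, 4)
print(cfa)
green_fraction = (cfa == 'G').mean()
print(green_fraction)  # 0.5: half of all pixels are green
```

Half the sensor is green, but no green pixel ever touches another green pixel.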
Fuji’s X-Trans Color Filter Array rearranges this pattern into a larger 6x6 tile. Notably, it has 2x2 blocks of green pixels surrounded by red and blue pixels.
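Here’s one common representation of the 6x6 X-Trans tile (orientations differ between sources, so treat the exact layout as an assumption on my part):

```python
import numpy as np

# A commonly published 6x6 X-Trans tile. Note the 2x2 green blocks,
# e.g. at rows 2-3, columns 2-3.
XTRANS_TILE = np.array([
    ['G', 'B', 'G', 'G', 'R', 'G'],
    ['R', 'G', 'R', 'B', 'G', 'B'],
    ['G', 'B', 'G', 'G', 'R', 'G'],
    ['G', 'R', 'G', 'G', 'B', 'G'],
    ['B', 'G', 'B', 'R', 'G', 'R'],
    ['G', 'R', 'G', 'G', 'B', 'G'],
])

print((XTRANS_TILE == 'G').mean())  # ~0.556: 20 of 36 pixels are green
```

Slightly more green than Bayer’s 50%, and crucially, some of those greens sit directly next to each other.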
When the X-Trans launched, the discussion was about how this layout was more “random” and therefore more film-like, but I see something else.
What may be the most significant change here is that there are now 2x2 sections of green pixels. Every green pixel is now only 1 or 2 pixels away from another green pixel. From a blurry “average” we’re now at a precise “adjacent”.
This means that the X-Trans sensor can pick up greater microcontrast than a Bayer sensor.
A Bayer sensor has to go two pixels over to find the next green pixel, but an X-Trans sensor has more green pixels right next to each other.
This gives images with the X-Trans sensor greater microcontrast (but not necessarily more sharpness) than equivalent Bayer image sensors.
This isn’t perfect — X-Trans sensors have other issues — red and blue pixels are further from each other, so rapid color changes are more difficult to detect — but that’s not the topic of this article. For the purposes of this article — X-Trans sensors are very good at microcontrast. Bayer sensors are not.
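The “adjacent vs. average” claim above can be checked directly: count how many green pixels in each pattern have another green pixel immediately beside them. This is my own sketch, using the common published tile layouts, with wraparound at the edges since the tiles repeat across the sensor:

```python
import numpy as np

BAYER_TILE = np.array([['R', 'G'],
                       ['G', 'B']])

XTRANS_TILE = np.array([
    ['G', 'B', 'G', 'G', 'R', 'G'],
    ['R', 'G', 'R', 'B', 'G', 'B'],
    ['G', 'B', 'G', 'G', 'R', 'G'],
    ['G', 'R', 'G', 'G', 'B', 'G'],
    ['B', 'G', 'B', 'R', 'G', 'R'],
    ['G', 'R', 'G', 'G', 'B', 'G'],
])

def greens_with_green_neighbor(tile):
    """Fraction of green pixels with a green pixel directly above,
    below, left, or right (edges wrap, since the tile repeats)."""
    g = (tile == 'G')
    neighbor = (np.roll(g, 1, axis=0) | np.roll(g, -1, axis=0) |
                np.roll(g, 1, axis=1) | np.roll(g, -1, axis=1))
    return (g & neighbor).sum() / g.sum()

print(greens_with_green_neighbor(BAYER_TILE))   # 0.0: no green touches another green
print(greens_with_green_neighbor(XTRANS_TILE))  # 0.8: most greens have a green neighbor
```

On Bayer, zero green pixels touch another green; on this X-Trans layout, 80% of them do. That’s the structural basis for the microcontrast difference.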
Now that we know what Microcontrast is and why the X-Trans sensor has more microcontrast, let’s look at why sharpening algorithms may prefer Bayer over X-Trans.
I’m far from an expert here, so if anyone knows this stuff better than me, drop me a line.
When you’re in Adobe’s tools, there are basically two different sharpening algorithms. One tries to increase absolute sharpness, the ability to resolve fine details. The other (called “Detail”) seeks to improve microcontrast.
The “Sharpen” algorithm is looking for patterns, what shapes exist beneath the pixels, so it looks over a wide range of pixels to figure out what the underlying shapes are.
This will act basically the same on Bayer and X-Trans sensors: it’s looking for patterns among a wide range of pixels, all of the pixels that make up an individual eyelash.
The “Detail” algorithm is looking for microcontrast. What adjacent pixels have high contrast from each other? Another term for this is “edge detection” — if two pixels have high contrast, the algorithm assumes it’s an edge and will increase that contrast.
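As a toy model (my own sketch, not Adobe’s actual code), a “Detail”-style pass might push each pixel away from its local average, amplifying whatever adjacent-pixel contrast it finds:

```python
import numpy as np

def detail_boost(scanline, strength=0.5):
    """Toy 'detail' pass: push each pixel away from the local 3-pixel
    average, amplifying any adjacent-pixel contrast (assumed edges)."""
    x = scanline.astype(float)
    local_mean = np.convolve(x, np.ones(3) / 3, mode='same')
    return x + strength * (x - local_mean)

smooth_bayer_like = np.array([100, 102, 104, 106, 108, 110], dtype=float)
contrasty_xtrans_like = np.array([100, 130, 100, 130, 100, 130], dtype=float)

print(detail_boost(smooth_bayer_like)[1:-1])      # interior: left untouched
print(detail_boost(contrasty_xtrans_like)[1:-1])  # interior: every swing widened
```

On the smooth ramp there’s nothing to amplify; on the contrasty line, every adjacent-pixel difference gets exaggerated.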
This is what causes “wormies” — Fuji images have greater microcontrast, which confuses the “edge detection” algorithms into thinking there are a lot more edges than there are.
For Bayer images, it’s easy to assume that no two adjacent pixels will have strongly different luminance values (microcontrast), but that assumption goes out the window with X-Trans sensors.
Let’s take for example this rather boring Bayer image.
This image may have “sharpness”, but if we zoom in on the raw sensor data we can see that it lacks microcontrast. No two adjacent pixels are the same color, so no two adjacent pixels can be relied upon to have meaningfully different tonal values.
Note: This was taken with a Sony A7, which has an AA filter, and with a vintage lens that may not have the best sharpness — but it can still stand in as an example for our purposes.
You can’t help but look at just the green pixels in the above image. The red and blue pixels almost read like noise. It’s easy to see, though, that because the green pixels are spread out, you won’t have two adjacent pixels with a high tonal contrast.
An algorithm would be right to assume that if one pixel was very different from a neighboring pixel, a lot of things (red and green and blue) would have to have changed — so it’s probably an edge.
Now let’s compare this with a Fuji X-Trans image.
Zooming in on the actual sensor data from this image, you can see that there are more areas where adjacent pixels have more tonal variety: two pixels that are next to each other where one is brighter than the other.
X-Trans sensors are better at microcontrast because they have more green pixels clustered together.
Looking at just the green pixels, it’s easy to see that there are often two pixels that are next to each other that can have different tonal values. When the Fuji image is turned into a full color image, adjacent pixels may have different tonal values — microcontrast.
If an algorithm is designed to increase “detail” (microcontrast) and is designed for the smooth output of a Bayer sensor that has relatively low microcontrast, it seems logical to me that it might go a little bit haywire when applied to a sensor that already has strong microcontrast.
It will start to see edges everywhere, which is the source of the “wormies” when you try to sharpen a Fuji X-Trans image.
Put simply: sharpening algorithms are designed for the low-microcontrast “blur” of a Bayer sensor. When confronted with a high-microcontrast sensor such as a Fuji X-Trans, these algorithms start to see “edges” everywhere.
The “wormies” that are associated with X-Trans aren’t a “fault” of the X-Trans sensor, they’re baked into the assumptions that sharpening algorithms have about what the underlying image should look like.
Fuji — who has been innovating in sensor technology for decades — is sometimes blamed for the fact that the default algorithms don’t play nicely with their sensors.
Fuji users should understand that their sensors have greater-than-normal microcontrast, and therefore should not need as much “detail” (microcontrast) boost in post-processing as Bayer sensors.
When edge detection is desired, algorithms that detect edges without relying too heavily on microcontrast would produce better results, e.g. algorithms that try to detect edges across multiple pixels rather than just adjacent ones.
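A sketch of that idea (the function names and threshold are mine): compare averages across small windows rather than single neighboring pixels, so pixel-level shimmer doesn’t register as an edge.

```python
import numpy as np

def adjacent_edge(scanline, threshold=20):
    """Naive edge test: any adjacent-pixel difference over threshold."""
    return np.abs(np.diff(scanline)) > threshold

def wide_edge(scanline, span=3, threshold=20):
    """Wider edge test: compare the average of `span` pixels on each
    side of a point, so single-pixel contrast averages away."""
    kernel = np.concatenate([np.ones(span), -np.ones(span)]) / span
    response = np.convolve(scanline, kernel, mode='valid')
    return np.abs(response) > threshold

# Pixel-level shimmer with no real edge anywhere:
shimmer = np.array([100, 125, 100, 125, 100, 125, 100, 125], dtype=float)

print(adjacent_edge(shimmer).sum())  # many false "edges"
print(wide_edge(shimmer).sum())      # none: the shimmer averages out
```

The naive test fires on every pixel pair; the windowed test correctly finds no edges, because the high-microcontrast texture averages out over a few pixels.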
And when those algorithms break down — understand that it’s neither a conspiracy against Fuji, nor a problem created by Fuji — it’s simply a difference in assumptions about what a starting image that requires sharpening will look like.
Addendum (Feb 2019)
Fuji’s own processing goes looking for shapes that aren’t there. Here’s a boring photo of some fabric. The outer (darker brown) portion is from a Sigma Quattro sensor.
The inner (lighter brown) is from a Fuji X-Trans. Both straight out of camera.
You can tell from the outer image that this is of nicely ordered fabric — lots of little loops. The Fuji version of the image, however — is less ordered. It’s of random shapes.
The Fuji algorithm goes looking for shapes that don’t exist.
I’ve converted X-Trans files through Raw Therapee — and those files don’t exhibit this weird “pattern finding” “wormy” tendency — so I know it’s an artifact of Fuji’s demosaicing algorithms.
In my other blog post I say that Fuji’s sharpening algorithms seem to be more “edge detection” algorithms: they’re looking for shapes and then optimizing for those shapes. Rather than looking at the underlying pixels (for microcontrast), they’re looking for shapes (for “sharpness”).
This exaggerated tendency to look for shapes leads to images that — on the pixel level — seem impressionistic rather than detailed.
I suspect that any sharpening algorithm that works in reverse, looking for what’s different between adjacent pixels, will find that Fuji optimized for large smooth areas with high contrast against the next large smooth area, and those are the “wormies.”
If you’re a pixel peeper — I highly suggest you convert a few Fuji files through Raw Therapee and take a look at what’s happening on the pixel level — you’ll see Fuji’s optimizing for shapes that may not really exist in the underlying image.