Stochastic Texturing

Jason Booth
Dec 24, 2021


Back in 2019, Eric Heitz and Thomas Deliot released a wonderful paper, along with a Unity implementation, of an anti-tiling technique called “Procedural Stochastic Texturing by Tiling and Blending”. Link.

Image of the original technique from the Unity blog.

The gist of the paper is that it’s a technique to break tiling artifacts by sampling a texture multiple times from different regions, and blending the samples together using a luminosity preserving operator. As someone who makes a rather in-depth series of terrain shader assets for Unity, anything to prevent tiling patterns is something I’m interested in, so I implemented this into my code base and started playing with the technique.
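To make the “sampling from different regions” part concrete: the paper tiles UV space with a triangle grid, so each pixel falls inside one triangle whose three vertices each hash to a pseudo-random UV offset, and the barycentric coordinates become the blend weights. Here is a minimal Python sketch of that lookup (the hash function is my own stand-in, not the paper's):

```python
import math

def triangle_grid(u, v):
    """Map a UV coordinate onto a triangular lattice. Returns three lattice
    vertices and their barycentric weights (weights are >= 0 and sum to 1)."""
    u, v = u * 3.464, v * 3.464                 # 3.464 ~= 2*sqrt(3): grid frequency
    su = u                                      # skew the square grid so each
    sv = -0.57735027 * u + 1.15470054 * v       # cell splits into two triangles
    bx, by = math.floor(su), math.floor(sv)
    fx, fy = su - bx, sv - by
    fz = 1.0 - fx - fy                          # which half of the cell are we in?
    if fz > 0.0:
        weights = (fz, fy, fx)
        verts = ((bx, by), (bx, by + 1), (bx + 1, by))
    else:
        weights = (-fz, 1.0 - fy, 1.0 - fx)
        verts = ((bx + 1, by + 1), (bx + 1, by), (bx, by + 1))
    return weights, verts

def vertex_offset(vx, vy):
    """Stand-in hash: a pseudo-random UV offset per lattice vertex, so each
    vertex samples the texture from a different region."""
    a = math.sin(vx * 127.1 + vy * 311.7) * 43758.5453
    b = math.sin(vx * 269.5 + vy * 183.3) * 43758.5453
    return (a - math.floor(a), b - math.floor(b))
```

In the shader, the texture is then sampled three times at the offset UVs and the three results are combined using `weights`; the whole trick is in how that combination is done.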

As presented in the paper, the technique is very complex. It requires a slow preprocessing step to convert your textures into a special format and produce a LUT texture. Special code is needed to handle specific texture compression formats. And the shader must use a second texture as a lookup table to decode the original texture’s values, which slows the shader down by creating a dependent texture read. From a usability and optimization point of view, all of this seemed bad, but the results were pretty compelling. The handling of texture compression formats is the real killer, though: there are many formats in use, and requiring DXT is not feasible on many platforms.

After implementing all of this into MicroSplat, I realized that the majority of this complexity comes from the technique’s luminance-preserving blending operator. Essentially, it tries to blend in such a way that the overall luminance of the texture is preserved, instead of, say, doing linear blends, which look blurry between regions. While this approach has merit, the cost is just too great, and I realized I could use a different blending operator and remove most of the complexity of the technique.
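For reference, the paper’s operator works on “Gaussianized” texture values (that is what the preprocessing and LUT are for) and blends them so the output keeps the input’s contrast rather than collapsing toward the mean. A sketch of the math rather than the paper’s exact code:

```python
import math

def variance_preserving_blend(g, w, mean=0.5):
    """Blend samples g with weights w (weights sum to 1). A plain weighted
    average pulls values toward the mean and looks blurry; dividing the
    deviation by the norm of the weights restores the original contrast."""
    linear = sum(gi * wi for gi, wi in zip(g, w))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (linear - mean) / norm + mean
```

Note that the result is only correct if the inputs really are Gaussian-distributed, which is exactly why the technique needs the preprocessing step and the decoding LUT.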

Height Blend Operator

A height blend is a very common technique in environment shading. The basic idea is that one texture blends into another based on its height map data and a weight for each texture.

An example of height map blending in MicroSplat. The grass appears in the low sections of the rocks.

Since I’m mostly dealing with terrains, height maps are already used for every texture. But if I don’t have a height map available, I could just use the luminosity of the texture (easily computed in the shader), or one of the texture channels.
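A height blend operator along these lines can be sketched in a few lines (the function names and the contrast constant are mine for illustration; MicroSplat’s actual shader code differs):

```python
def height_blend(samples, heights, weights, contrast=0.2):
    """Blend samples so the highest (height + weight) sample wins; only
    samples within `contrast` of the winner contribute at all."""
    peaks = [h + w for h, w in zip(heights, weights)]
    cutoff = max(peaks) - contrast
    b = [max(p - cutoff, 0.0) for p in peaks]
    total = sum(b)
    return sum(s * bi for s, bi in zip(samples, b)) / total

def luminance(r, g, b):
    """Fallback when no height map exists: luminance of the albedo,
    computed with the standard Rec. 601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

With a small `contrast`, most pixels end up dominated by a single sample, which is what keeps the result crisp instead of blurry.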

The potential upsides of the change are great:

  • No preprocessing of textures
  • No texture compression specific code
  • No LUTs, and therefore no dependent texture reads
  • Works with existing textures as is

So I got this working in MicroSplat and found that it not only worked well, it generally looks better than the original technique. The reasons for this are:

  • Fewer pixels are blended, because a height map operator only blends within a small contrast range
  • Blends are correlated between textures. Note that this could have been done in the original technique as well, but wasn’t. In other words, in the original technique the albedo, normal map, and other components could each blend in different areas based on their own luminosities. Correlating these via a single height-map-based blend preserves the detail of the original texture better.
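The correlation point can be sketched as follows: compute the blend factors once from the height channel, then reuse them for every map (the function names are illustrative, not MicroSplat’s API):

```python
def blend_factors(heights, weights, contrast=0.2):
    """Turn per-sample (height + weight) into normalized blend factors."""
    peaks = [h + w for h, w in zip(heights, weights)]
    cutoff = max(peaks) - contrast
    b = [max(p - cutoff, 0.0) for p in peaks]
    total = sum(b)
    return [bi / total for bi in b]

def blend_correlated(layers, heights, weights):
    """Blend every texture map (albedo, normal, mask, ...) with the SAME
    factors, so details stay aligned across maps."""
    f = blend_factors(heights, weights)
    return {name: sum(v * fi for v, fi in zip(vals, f))
            for name, vals in layers.items()}
```

Because every map shares one set of factors, a rock that wins the blend in the height channel also wins it in the albedo and normal, instead of each channel blending in its own, uncorrelated region.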

There is one downside though. Because this is a height map based blend, it favors the higher part of the texture over the lower areas. This can change the visible frequency of the texture detail, and make data which only appears in the low sections of the texture less common.

Let’s examine the original paper’s version vs. my modified version:

The original implementation, which IMO washes out the detail of the original texture
My improved version with the height blend operator; notice the rock details are preserved.
Terrain with traditional tiling textures
The same terrain with Stochastic Texturing w/ Height blend operator. Notice the increased density of rocks caused by the height blend operator.

Overall, I was extremely happy with this improvement to the technique, as it takes a fairly complex and lengthy process and turns it into a one-click solution. It also removes all of the editor-side code, making it a shader-only technique, and removes a lot of the shader code as well.

If you’re interested, you can see this evolve as it was being developed on the Unity Forum thread about the technique, which also includes comparisons to my Texture Clustering technique. Since then, this variation of the technique has become the dominant implementation I’ve seen used in Unity and Unreal. MicroSplat, Better Lit Shader, and my Stochastic Texturing for Amplify Shader Editor products all use this technique.

Other improvements

Since first implementing the technique, I’ve continued to improve it. The original technique requires 3 texture samples per pixel for each texture using it; with dynamic culling of texture samples, however, we can get this down to roughly 1.5 samples in practice.

On the right is a visualization of the number of samples chosen for each pixel. The darkest area takes 1 sample, while the lightest area takes 3 samples. This visualization is for the albedo/height sample, with further samples (normal, etc.) being further culled by the height map blend. By contrast, the luminosity-preserving technique always needs to blend all 3 samples and cannot make this optimization, which is quite significant on a large surface.
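The culling idea is simple: if a pixel’s blend weight for a tile falls below some threshold, skip that texture fetch entirely and renormalize the remaining weights. A sketch (the threshold value is illustrative):

```python
def cull_samples(weights, threshold=0.1):
    """Keep only samples whose weight crosses the threshold, renormalized.
    Near triangle centers all 3 survive; over most of the surface only 1 or
    2 do, so the average fetch count drops well below 3."""
    kept = [(i, w) for i, w in enumerate(weights) if w >= threshold]
    total = sum(w for _, w in kept)
    return [(i, w / total) for i, w in kept]
```

In a shader this maps to branching over the fetches, so a pixel dominated by one tile pays for a single texture read instead of three.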


