Improved Triplanar projections

Jason Booth
4 min read · Dec 24, 2021


Triplanar projection is a common technique for texturing models when UVs are not available or appropriate. It's also very commonly used on heightmap-based terrains to prevent stretching on cliffs. However, this can create a different set of issues, because the normals on a terrain are usually smoothed.
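
As a refresher, a minimal triplanar shader samples a texture along each world axis and blends the results by the normal. In this sketch, _MainTex, worldPos, worldNormal, and the sharpening exponent are assumptions, not anything specific to the technique in this article:

// minimal triplanar sketch: the normal selects and blends the projections
float3 blendWeights = pow(abs(worldNormal), 4); // sharpen the transition
blendWeights /= (blendWeights.x + blendWeights.y + blendWeights.z);
float4 x = tex2D(_MainTex, worldPos.zy); // projection along X
float4 y = tex2D(_MainTex, worldPos.xz); // projection along Y
float4 z = tex2D(_MainTex, worldPos.xy); // projection along Z
float4 albedo = x * blendWeights.x + y * blendWeights.y + z * blendWeights.z;

Because the blend weights come straight from the normal, everything that follows hinges on which normal we feed in.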

Notice the difference between the (hard) face normals and the (smoothed) vertex normals on a terrain bump. For triplanar texturing, we want to select the best projection for a given pixel, but the smoothed normals along the cliff's edge are never perpendicular to the geometry; interpolation tilts them too far upward. Notice also that the smoothed normals across the top blend from 45 degrees in one direction to 45 degrees in the other.

In practice, these kinds of angle differences can create a smearing effect from the wrong projection being used:

A trivial fix would be to use the face normals for selection. Face normals are usually not stored in a game engine, but we can compute them easily enough from the screen-space derivatives of the pixel's world position:

// screen-space derivatives of the world position lie in the plane of
// the current triangle, so their cross product gives the face normal
float3 dx = ddx(worldPos);
float3 dy = ddy(worldPos);
float3 flatNormal = normalize(cross(dy, dx));

However, selection based on the face normal creates a new problem: a discontinuity between the two faces.

Notice the sharp break down the center between the two texture projections

What we really want is a way to blend between these two projections, using the smoothed normal along the edges of the geometry but the face normal toward the center of each face.

An ode to Barycentric coordinates

Barycentric coordinates are used internally by a GPU to blend values across a triangle face. They are wonderfully useful things; however, they were not exposed to fragment shaders until very recently (DX12) and are not available on every graphics API. There are a few common ways to get around this. The first is to bake the data into your vertices, in the color or UV channels of your mesh; this can require duplicating vertices and preprocessing the mesh, which is annoying. You can alternatively use geometry shaders to add them at runtime, but geometry shaders are not available on every API and are generally a bad idea anyway.
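
For completeness: where hardware support exists (HLSL Shader Model 6.1+ under DX12, compiled with DXC), barycentrics can be read directly with a system-value semantic. This is a minimal sketch of that path, not something the rest of this article relies on:

// hardware barycentrics: requires DXC / Shader Model 6.1+ (DX12),
// not available on older compilers or other graphics APIs
float4 frag(float4 pos : SV_Position, float3 bary : SV_Barycentrics) : SV_Target
{
    // visualize the coordinates; they sum to 1 across each triangle
    return float4(bary, 1);
}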

However, on a heightmapped terrain we can generate them ourselves from the known topology. We know our terrain is created from a texture of a known size, say 1024x1024, with a 0–1 UV space, and we know the triangulation always follows a fixed pattern. We can use this to construct "virtual triangles" and, from those, derive the barycentric coordinates for the current pixel.

// unity shader code
// _Control0_TexelSize follows Unity's convention:
// xy = 1 / texture size (texel size), zw = texture size in texels
float3 BarycentricFromUV(float2 uv)
{
    float2 texSize = _Control0_TexelSize.zw; // size of texture
    float2 stp = _Control0_TexelSize.xy;     // size of texel

    // create a virtual quad for the pixel
    float2 stepped = uv * texSize;
    float2 uvBottom = floor(stepped);
    float2 uvFrac = frac(stepped);
    uvBottom /= texSize;
    uvBottom += stp * 0.5;

    // select a virtual triangle from the virtual quad, matching the
    // terrain's fixed triangulation pattern
    float2 cuv0, cuv1, cuv2;
    if (uvFrac.x > uvFrac.y)
    {
        cuv0 = uvBottom;
        cuv1 = uvBottom + float2(stp.x, 0);
        cuv2 = uvBottom + float2(stp.x, stp.y);
    }
    else
    {
        cuv0 = uvBottom;
        cuv1 = uvBottom + float2(0, stp.y);
        cuv2 = uvBottom + float2(stp.x, stp.y);
    }

    // our position within the virtual triangle
    float2 p = uvFrac * stp + uvBottom;

    // generate barycentric coordinates via the standard dot-product method
    float2 v0 = cuv1 - cuv0;
    float2 v1 = cuv2 - cuv0;
    float2 v2 = p - cuv0;
    float d00 = dot(v0, v0);
    float d01 = dot(v0, v1);
    float d11 = dot(v1, v1);
    float d20 = dot(v2, v0);
    float d21 = dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    float v = (d11 * d20 - d01 * d21) / denom;
    float w = (d00 * d21 - d01 * d20) / denom;
    float u = 1.0f - v - w;
    return float3(u, v, w);
}

Now we have proper barycentric coordinates for the pixel without using semantics, geometry shaders, or baked vertex data, and we can use them to blend between the smoothed vertex normal and the flat face normal:

// min of the barycentric coordinates is how close to an edge we are
float mb = min(bary.x, min(bary.y, bary.z));
mb = saturate(mb * _BaryContrast);
// now blend the normal
float3 normal = lerp(vertexNormal, flatNormal, mb);

Here we see the same case with a tight blend across the edge, preventing the discontinuity between the faces.

Adjusting the edge blend area
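
Putting the pieces together, the per-pixel flow looks like this. BarycentricFromUV and _BaryContrast come from the snippets above; the final weights feed whatever triplanar sampling you already use:

// face normal from screen-space derivatives, as above
float3 dx = ddx(worldPos);
float3 dy = ddy(worldPos);
float3 flatNormal = normalize(cross(dy, dx));

// barycentric coordinates from the virtual triangulation
float3 bary = BarycentricFromUV(uv);
float mb = min(bary.x, min(bary.y, bary.z));
mb = saturate(mb * _BaryContrast);

// flat normal in the triangle interior, smoothed normal at the edges
float3 projNormal = normalize(lerp(vertexNormal, flatNormal, mb));

// drive the triplanar blend weights with the corrected normal
float3 blendWeights = pow(abs(projNormal), 4);
blendWeights /= (blendWeights.x + blendWeights.y + blendWeights.z);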
