Part 2: Object Outline

Erikkubiak
14 min read · Jan 6, 2023


In this series of articles, we focus on how to reproduce a medieval drawing Post Processing in Unity using URP and a Custom Render Feature, some C# code, some shader code and a lot of magic 👨‍🎨


In this part, we will take advantage of what we did earlier to provide some neat object outlines 💪 It will mainly be based on Alexander Ameye’s article and I advise you to read it, or at least the introduction, as we will need it later.

We will:

  • Understand what color, depth and normal textures are
  • Include new parameters to our Post Processing
  • Modify the first and last pass of our shader

1. Using color, depth and normal textures

In order to detect discontinuities in our image, we will need to use the three textures stated above. A perfect explanation of why we need them is given in Alexander’s article, so give it a read 😉

Color texture

This texture is basically what has been rendered so far, i.e. what we would see without any of our Post Processing. It will give us outlines between objects that differ in color but are almost equal in terms of depth and normal. It is really useful for outlining shadows or small elements.

If you remember what we did in the previous article, we already provided this texture to our shader:

var target = renderingData.cameraData.renderer.cameraColorTarget;
...
Blit(cmd, target, m_TempCameraRT, m_DrawMaterial, 0);

We first store the identifier of the camera color target, and when we blit, we use it as the source. So in our shader, _MainTex is the camera color target. Thus, when we execute the code below, we only read the camera color and display it:

fixed4 col = tex2D(_MainTex, i.uv);
return col;

Depth Texture

The depth texture stores a really important piece of information called depth. It is a value between 0 and 1 that encodes how far an object is from the camera. It is really useful for Unity, as it allows discarding objects that are hidden behind others, and also for Tech Artists to create some really nice effects such as fog, depth fade, etc.

I won’t go into details and advise you to read this amazing tutorial from Cyanilux; it will provide any piece of info you will need 😉 But don’t worry, I will explain how to read and use it 💪
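If you want a tiny preview of the reading part, here is a minimal sketch, assuming the classic built-in style includes we use in this shader (Linear01Depth comes from UnityCG.cginc):

sampler2D _CameraDepthTexture;

// In a fragment function: raw, non-linear depth in the 0 → 1 range.
float rawDepth = tex2D(_CameraDepthTexture, i.uv).r;
// Optional: remap it to a linear 0 → 1 distance from the camera.
float linearDepth = Linear01Depth(rawDepth);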

Normal Texture

You may know that surfaces have normal vectors. A normal is a vector orthogonal to the surface and is really useful for shading and other effects. If you don’t know what it is, I would advise you to find an article and give it a read.

We can easily store them in a texture, as a normal has only three components, so they can map directly to the RGB channels.
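For instance, a normal component lives in the -1 → 1 range while a color channel lives in 0 → 1, so a simple way to store one is a remap. Here is a little sketch of the idea; note that it is not the exact encoding Unity uses (we will meet that one later with DecodeNormal):

// Pack a normal in [-1, 1] into a color in [0, 1], and back.
float3 NormalToColor(float3 n) { return n * 0.5 + 0.5; }
float3 ColorToNormal(float3 c) { return c * 2.0 - 1.0; }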

2. Color-based Outlines

In order to “see” a discontinuity, we need to check the pixels around us. What interests us are the corners of an 8-neighborhood:

To achieve this, we will need:

  • An array to store the samples
  • The distance to the current pixel
  • The offsets used to read around

Array of samples

Declaring an array in HLSL is pretty simple and looks like it does in a lot of other languages:

type name[size];

So here, our samples will be float3, as a color has three components, and there will be 4 samples, so a size of 4. We will call it colorSamples, as we should always give clear names to our variables for coworkers or our future self 😉 So here is the declaration:

float3 colorSamples[4];

Distance to the current pixel

Now, we will need a float variable that represents the outline width and will be used when reading the samples. If you remember, when we read a texture we use the UV, which is a float2 between 0 and 1. What we want here is a formula to compute the pixel offset, but in the range 0 to 1. We can use the rule of three for this, with p the number of pixels wanted, w the texture width and x the searched value:

x / 1 = p / w

It can be translated to:

x = p / w

We can do the same for the height, so we now have our formula for both the offset in x and in y (with h the height):

offset_x = p / w
offset_y = p / h

As you can see, we will need the width and the height of our texture. But not exactly: as you may know, division is a pretty heavy computation for our poor computer, and here it would be done twice for each pixel of our texture. Let’s think smarter: what about computing the inverse once and then doing a multiplication instead? Fortunately, Unity provides this information for each texture:

sampler2D _MainTex;
float4 _MainTex_TexelSize;

Here, just below the declaration of our texture, we declare a float4 that has the same name as our texture but with “_TexelSize” appended. This float4 holds those values: x = 1 / width, y = 1 / height, z = width, w = height.
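So converting a number of pixels into a UV offset becomes a single multiplication, which is exactly what we will do a bit later. A quick before/after sketch (width and height are just placeholders here):

// Naive version: two divisions per pixel.
// float2 offset = _OutlineSize / float2(width, height);
// Smarter version: one multiplication with the precomputed inverses.
float2 offset = _OutlineSize * _MainTex_TexelSize.xy;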

Adding an Outline Width variable

Now that we know how to compute our pixel distance, we need to make it accessible in our shader!

First, let’s define it in our PassSettings stored in GravurePPRenderFeature. If you don’t remember it, I advise you to read the first part again, about setting up our project. So now our class should look something like this:

[Serializable]
public class PassSettings
{
    public float OutlineSize;
}

We added a public float variable called OutlineSize. Normally, I would avoid public variables, but that’s fine for this tutorial. Just remember that a public variable indicates that anyone in the code can modify it.
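If you want something stricter outside of a tutorial, a common Unity pattern is a serialized private field with a read-only accessor. A minimal sketch:

[Serializable]
public class PassSettings
{
    // Editable in the Inspector, but read-only from the rest of the code.
    [SerializeField] private float outlineSize = 2f;
    public float OutlineSize => outlineSize;
}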

Then we need to feed it to our shader. Let’s get into our pass constructor and send it:

public GravurePPPass(GravurePPRenderFeature.PassSettings settings,
    GravurePPPass _clone)
{
    //...
    // The name must match the one declared in the shader below.
    m_DrawMaterial = CoreUtils.CreateEngineMaterial("Hidden/GravurePP");
    m_DrawMaterial.SetFloat("_OutlineSize", settings.OutlineSize);
}

As you can see, we send it right after creating our material. We use the SetFloat method for that; it just tells the GPU that the variable called _OutlineSize will have this value. Unfortunately, it is not bound automatically, so remember to update the material if the CPU value changes.
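One way to handle that, as a sketch, is to re-send the value at the start of the Execute method, assuming the pass keeps its settings in a m_Settings field like we use later:

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    // Re-upload the CPU-side value so runtime tweaks on the Render Feature are reflected.
    m_DrawMaterial.SetFloat("_OutlineSize", m_Settings.OutlineSize);
    // ... rest of the pass
}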

We finally need to declare it in our shader. It will be done in two parts. First, we add it to the Properties block at the top:

Shader "Hidden/GravurePP"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_OutlineSize("Outline Size", Float) = 2
}
//...
}

We keep the texture and add our new property. The first identifier is the name on the GPU side, the one we used to set the value on the material. In parentheses, we have the display name and the type (here Float). Then comes the default value, which I set to 2.

Now, we will declare it in the HLSL code to be able to use it. In our first pass, above the frag function, add the code below and we will be able to use it.

float _OutlineSize;

Computing the neighbors’ positions

Now, you should have everything to compute the neighbors’ positions. First, let’s create a float2 vector that will represent our offset in the range 0 → 1. We just take the first two components of the Texel Size and multiply them by the Outline Size. We now have a size normalized according to the texture dimensions:

float2 offset = _MainTex_TexelSize.xy * _OutlineSize;

Now, here is the code to compute the 4 values:

float2 uvSamples[4];
uvSamples[0] = i.uv + offset;
uvSamples[1] = i.uv - offset;
uvSamples[2] = i.uv + float2(-offset.x, offset.y);
uvSamples[3] = i.uv + float2(offset.x, -offset.y);

Reading camera color

You have your camera color, you have the neighbors’ positions in UV space… Yeah, you are right, you have everything for our color outline 😄 First, let’s sample the colors:

for (int i = 0; i < 4; i++)
{
    // Only the RGB part, as colorSamples is a float3.
    colorSamples[i] = tex2D(_MainTex, uvSamples[i]).rgb;
}

Here, we just loop through our neighbors’ positions, read the main texture and store the result into the colorSamples array we spoke of earlier.

Computing the outline

First, we need another property called _Sensitivity that we will use to tune our discontinuity detection. I let you add it to the PassSettings and the shader properties, and declare it in the pass; you can check your work against the sketch below.
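Here is roughly what the shader side should look like once wired up (on the C# side, it is a public float Sensitivity; field in PassSettings plus a m_DrawMaterial.SetFloat("_Sensitivity", settings.Sensitivity); call in the constructor; the default value of 1 below is an arbitrary choice):

// In the Properties block:
_Sensitivity("Sensitivity", Float) = 1

// Above the frag function, next to _OutlineSize:
float _Sensitivity;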

Now, let’s use the code provided by Alexander Ameye; don’t worry, we will explain it:

float3 colorFiniteDifference0 = colorSamples[1] - colorSamples[0];
float3 colorFiniteDifference1 = colorSamples[3] - colorSamples[2];
float edgeColor = sqrt(
dot(colorFiniteDifference0, colorFiniteDifference0) +
dot(colorFiniteDifference1, colorFiniteDifference1)
);
edgeColor = edgeColor > (1/_Sensitivity) ? 1 : 0;

First of all, we compute the finite difference along the two diagonals. The term finite difference may be unknown to you, but it is basically the derivative on a discrete space: we just do a subtraction between two almost contiguous elements.

Then we compute the magnitude of the finite differences. The dot product allows us to compute the squared length of each color difference in an efficient way, and taking the square root of their sum converts the two float3 values into a single float. Finally, we compare it to our sensitivity: above the threshold we return a 1, otherwise a 0.

If you return it with the code below, you should now see the outline:

return fixed4(edgeColor, edgeColor, edgeColor, 1);

For now, the result may not be amazing, and that’s pretty normal. First, try changing the Sensitivity setting on your Render Feature (remember the URP settings we spoke of in the previous part? That’s it 😉). Also, it is based on color, so if everything is white like in my scene you won’t get something amazing; feel free to change the colors a bit and see if you get better results.

3. Displaying outline and camera color

Now that you have your outline mask, we will use it to blend between an outline color and another one. It works as a mask because white means outline and black means no outline. We will see a very useful operation here, known as lerp.

Lerp stands for linear interpolation; what a complicated name for something really simple. It is a function that takes three arguments: a min, a max and an interpolator:

lerp(min, max, t);

It is a pretty simple but such a useful operation: if t equals 0 it returns min, if t equals 1 it returns max, else it gives you an interpolated and thus smoothed value between the bounds. The other advantage is that it is pretty cheap compared to doing it yourself with an if. It can be used on a lot of datatypes such as float or even float3, etc., so it can even be used to blend between colors 😉
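To make it concrete, here is what lerp computes under the hood, written by hand:

// Equivalent to the built-in lerp for floats.
float MyLerp(float min, float max, float t)
{
    return min + (max - min) * t;
}
// MyLerp(2, 10, 0) == 2, MyLerp(2, 10, 1) == 10, MyLerp(2, 10, 0.5) == 6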

If you don’t know it already, it can be a bit hard to picture what it is doing, so here is an amazing tweet from Freya Holmér:

You can now see where this is going: we will use it to blend between our Outline Color and another one, using the outline texture as a mask.

Color Property

First of all, we need to define our Outline Color. So let’s start by modifying our PassSettings class:

[Serializable]
public class PassSettings
{
    public float OutlineSize;
    public float Sensitivity;
    public Color OutlineColor;
}

Now, we will add it to our shader properties like we did earlier; it is almost the same, just now the default value is black:

Properties
{
    ...
    _OutlineColor("Outline Color", Color) = (0, 0, 0, 1)
}

Finally, we just send it to the GPU by setting it on our material in the constructor again:

m_DrawMaterial.SetColor("_OutlineColor", settings.OutlineColor);

Lerping between the colors

Now that our material is configured on the GPU side, let’s declare the color in the color handling pass, the last one of our shader.

float4 _OutlineColor;

float4 frag(...) {
...
}

If you remember our Pass class, in the Execute method we execute this pass at the end; you may check in the previous part. This Blit takes the outline mask as input, so we just need to read the Main Tex to get it. We will use it to lerp between our outline color and another one:

float4 frag(...)
{
    float outlineMask = tex2D(_MainTex, i.uv).r;
    float4 sceneColor = float4(1, 1, 1, 1);
    return lerp(sceneColor, _OutlineColor, outlineMask);
}

As you can see, the code is really simple:

  • We read the main tex, which is the outline mask, and we just take the first component, as they are all the same (excluding alpha).
  • We define a constant color, which is white; I would advise you to create a property for it like we did for Outline Color (see the sketch below).
  • We lerp between our scene color and our outline color; the interpolator is the outline mask.

Normally, this simple code should now give you a white image with the outlines drawn in black.
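If you follow the advice above and promote the white constant to a property, the result would look like this; _BackgroundColor is a hypothetical name, wired up exactly like _OutlineColor was:

// In the Properties block:
_BackgroundColor("Background Color", Color) = (1, 1, 1, 1)

// In the HLSL code, next to _OutlineColor:
float4 _BackgroundColor;

// And in frag, instead of the constant:
float4 sceneColor = _BackgroundColor;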

Using the camera color texture

So far, it is pretty simple: we just have a black and white texture and it fits our needs. But you may ask yourself: what if I want to outline the camera color? So here is the answer: we first need to get the camera color texture, as it is not the input this time, and use it to replace our scene color.

Fortunately, we can access textures thanks to their names, and in URP the camera color texture can be accessed that way, just by declaring it before the frag function:

sampler2D _CameraColorTexture;

Now, we just need to read this texture instead of using a constant scene color:

float4 frag(...)
{
    float outlineMask = tex2D(_MainTex, i.uv).r;
    float4 sceneColor = tex2D(_CameraColorTexture, i.uv);
    return lerp(sceneColor, _OutlineColor, outlineMask);
}

4. Reading depth and normal values

Remark: apparently, we don’t need to do what we are going to do below, as it is accessible in URP now, but I haven’t found good resources on it, so we will go the old but working way 💪

As Alexander Ameye describes it very well, I won’t re-explain what he did. So I invite you to follow his tutorial from the DepthNormals Texture part to Scriptable Renderer Feature (included).

Now, you should have access to the textures we need, so again we will declare them right beside our camera color texture:

sampler2D _CameraColorTexture;
sampler2D _CameraDepthTexture;
sampler2D _CameraDepthNormalsTexture;

Now, you could read those two textures directly, but if you read the linked tutorial carefully, you will understand that there is an issue with the normals, as they are encoded in a special way. So let’s add this function right before our frag function:

float3 DecodeNormal(float4 enc)
{
    float kScale = 1.7777;
    float3 nn = enc.xyz * float3(2 * kScale, 2 * kScale, 0) + float3(-kScale, -kScale, 1);
    float g = 2.0 / dot(nn.xyz, nn.xyz);
    float3 n;
    n.xy = g * nn.xy;
    n.z = g - 1;
    return n;
}

Now, we will be able to read everything and store it in arrays like we did for the colors:

float depthSamples[4];
float3 normalSamples[4], colorSamples[4];
for (int i = 0; i < 4; i++)
{
    depthSamples[i] = tex2D(_CameraDepthTexture, uvSamples[i]).r;
    normalSamples[i] = DecodeNormal(tex2D(_CameraDepthNormalsTexture, uvSamples[i]));
    colorSamples[i] = tex2D(_MainTex, uvSamples[i]).rgb;
}

As you can see, it is pretty similar, but there are two differences:

  • For the depth samples, as depth is only one value, we get and store only the red channel.
  • For the normal samples, we read our texture and use the DecodeNormal function to convert it back to the right range.

5. Depth and normal outlines

Now that you have all the samples, the other changes are pretty straightforward, as it is almost the same code. But first of all, do you remember our Sensitivity? We need it for color, depth and normal, so let’s convert it to a Vector3 everywhere. First, in the PassSettings:

[Serializable]
public class PassSettings
{
    ...
    [Tooltip("Color, Depth, Normal")]
    public Vector3 Sensitivity;
    ...
}

As you can see, I added an attribute on the Vector3; it allows Unity to display the given string as a tooltip when we hover over the field in the Inspector. It is used as pretty simple documentation here. Next, we need to modify how we send it to the GPU, so in the Pass constructor we now have:

m_DrawMaterial.SetVector("_Sensitivity", m_Settings.Sensitivity);

Just a quick remark on this one: in Unity, all the vectors we send to the GPU through materials are in 4 dimensions. Fortunately, there is an implicit cast from Vector3 to Vector4. Thus, when we call it, the value is automatically converted with 0 as w.
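A quick illustration of that implicit conversion, using our material:

Vector3 sensitivity = new Vector3(1f, 2f, 3f);
Vector4 asVector4 = sensitivity;                        // implicit cast: (1, 2, 3, 0)
m_DrawMaterial.SetVector("_Sensitivity", sensitivity);  // the same conversion happens here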

Now let’s change our shader. First, we need to modify the property section:

_Sensitivity("Sensitivity", Vector) = (1, 1, 1, 1)

Then we need to modify how we declare it in the shader, to keep the types consistent. As you can see, I declare it as a float3; our GPU will just discard the fourth component. I also added a comment to specify which component is what. It is not the prettiest code, but it works for now:

// Color, Depth, Normal
float3 _Sensitivity;

The last step is to change the color outline detection code by just using the x component of the Sensitivity vector:

float3 colorFiniteDifference0 = colorSamples[1] - colorSamples[0];
float3 colorFiniteDifference1 = colorSamples[3] - colorSamples[2];
float edgeColor = sqrt(
dot(colorFiniteDifference0, colorFiniteDifference0) +
dot(colorFiniteDifference1, colorFiniteDifference1)
);
edgeColor = edgeColor > (1/_Sensitivity.x) ? 1 : 0;

Normally, it should stay consistent and give the same result. It took some time, but now let’s outline our normals 💪 It is almost the exact same code as for color, as they are both 3-dimensional vectors:

float3 normalFiniteDifference0 = normalSamples[1] - normalSamples[0];
float3 normalFiniteDifference1 = normalSamples[3] - normalSamples[2];
float edgeNormal = sqrt(
dot(normalFiniteDifference0, normalFiniteDifference0) +
dot(normalFiniteDifference1, normalFiniteDifference1)
);
edgeNormal = edgeNormal > (1/_Sensitivity.z) ? 1 : 0;

The code for the depth is a bit different, as depth is only a float, so instead of using dot on each finite difference we just square it; it is basically the same distance formula. We also scale the threshold by the first depth sample, which makes the test relative to the pixel’s depth instead of absolute and helps keep the detection consistent across the scene:

float depthFiniteDifference0 = depthSamples[1] - depthSamples[0];
float depthFiniteDifference1 = depthSamples[3] - depthSamples[2];
float edgeDepth = sqrt(
pow(depthFiniteDifference0, 2) +
pow(depthFiniteDifference1, 2)
) * 100;
float depthThreshold = (1/_Sensitivity.y) * depthSamples[0];
edgeDepth = edgeDepth > depthThreshold ? 1 : 0;

We are almost done: we computed our mask for color, normal and depth, so congrats 🎉 We just need to combine all of it. It is really simple: we will just take the max of everything. The max function returns the maximum of two elements. In our outline mask, 1 stands for “there is an outline”, so it is perfect: if both are 0 it will return 0, but if any is 1 it will return 1.

So let’s replace the last line of our frag function:

float edge = max(edgeDepth, max(edgeNormal, edgeColor));
return fixed4(edge, edge, edge, 1);

Now you should have all the outlines combined. Feel free to play with the sensitivity, game view and models to see what is changing.

Conclusion

You have some amazing outlines now 😄 This effect is often used for a stylised Artistic Direction and is really a great base for some fine Art. But we will continue in the next part by displacing our outlines to give them a more hand-drawn style 💪

You can follow me for more 😍

Thanks

I would like to thank the people that helped me write this article by reviewing it or providing any help.
