Fighting Entropy In Unity — Cameras and Rendering

Galbartouv
Jan 14, 2023

In this post, I will detail how you can take control of your rendering and cameras. We will learn how to use the Scriptable Render Pipeline (SRP) camera stacking functionality to create a stack that separates the rendering of the 3d objects in our game from the rendering of our UI. This will enable us to improve the performance of our project and reduce visual bugs.

What is a camera?

In Unity, a camera is a component that allows the game or application to display a view of the 3D environment. It has various properties that can be adjusted to control how the scene is rendered, such as field of view, clipping planes, and depth. The camera also has a frustum, which is the pyramid-shaped space that the camera can capture. The frustum, defined by the field of view and the aspect ratio of the camera, determines which objects in the scene will be visible in the final image. Objects outside of the frustum will not be rendered, while objects inside the frustum will be rasterized into an image.
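
As a quick illustration of frustum culling, here is a minimal sketch that uses Unity's GeometryUtility to test whether a renderer's bounds fall inside a camera's frustum. The camera and renderer references are placeholders for this example:

using UnityEngine;

public class FrustumCheckExample : MonoBehaviour
{
    [SerializeField] private Camera _camera;           // the camera whose frustum we test against
    [SerializeField] private Renderer _targetRenderer; // any renderer in the scene, e.g. a cube

    private void Update()
    {
        // Build the six planes that bound the camera's frustum.
        var frustumPlanes = GeometryUtility.CalculateFrustumPlanes(_camera);

        // True if the renderer's bounding box intersects the frustum,
        // i.e. the object is a candidate for rasterization.
        var isVisible = GeometryUtility.TestPlanesAABB(frustumPlanes, _targetRenderer.bounds);
        Debug.Log($"{_targetRenderer.name} inside frustum: {isVisible}");
    }
}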

Why is one camera not enough?

If we were to use a single camera for both our UI and the 3d world, we would encounter a problem. As the camera moves through the world, some objects will enter the area between the near plane of the camera frustum and the canvas. This causes 3d objects in the world to be rendered in front of the UI, which isn't what we want.

Instead of putting the UI on canvases that are in camera screen space, we could put them in overlay screen space. But the overlay mode for canvases is a poor option: it gives you only constraints and no advantages. The overlay mode renders the canvas over everything; it renders the canvas on the last render queue, which is 5000. It doesn't allow you to render 3d models or particles on top of your UI. This might be fine for a simple bubble game, but most games nowadays need 3d objects and particles over the UI in many cases.

*Technically this is wrong, because the overlay isn't rendered by a camera, but use your imagination.

To address this problem, we can implement a dedicated camera for rendering the UI. This allows us to control the layering of elements over and under the UI. To do this we will create a camera stack.

What is a camera stack?

The way cameras work has changed in the move to the Scriptable Render Pipeline (SRP). SRP cameras work with a stack. A camera stack layers several cameras together for a combined output. The stack has a "Base" camera and "Overlay" cameras. The Base camera is rendered first, and then each Overlay camera is composited on top of the previous camera's rendered image.

We want the base camera to be responsible for rendering the 3d objects in the game, and we will use an overlay camera to render the UI on top of it. Now that the UI has its own camera, we can put 3d objects in front of or behind the UI, and there is no way for the 3d objects of the game world itself to be rendered in front of the UI.

As you can see below, the Base camera is rendering the orange cubes and the UI camera is rendering the canvases and balls. Even though the cubes are in front of the canvases, the canvases are rendered on top.

How do we render 3D over UI?

Now that we have a UI camera, we want to add 3d models as part of our UI. To do this we have to understand two things. The first is that the order in which canvases are drawn depends on their sorting order: a canvas with a lower sorting order is drawn first, so a canvas with a higher sorting order ends up on top. While it visually looks like the plane distance should determine which canvas is rendered over the other, this is not the case.

The second thing to understand is the interaction between canvases and 3d objects, which depends on the distance from the camera. If the 3d model is closer to the camera than the canvas, it will be rendered in front of the canvas; likewise, if the canvas is closer to the camera, it will be rendered in front of the 3d object. The sorting order isn't relevant in this case.

So in the example below, if UI 1 is at sorting order 0 and UI 2 is at sorting order 10, the draw order will be UI 1, pink ball, UI 2, blue ball. But if the sorting order is 10 for UI 1 and 0 for UI 2, the draw order will be UI 2, pink ball, UI 1, blue ball.
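
A minimal sketch of how this example could be set up in code. The component and the canvas references are placeholders; the sorting orders match the example above:

using UnityEngine;

public class UiSortingExample : MonoBehaviour
{
    [SerializeField] private Canvas _ui1; // "UI 1" from the example
    [SerializeField] private Canvas _ui2; // "UI 2" from the example

    private void Start()
    {
        // Lower sorting order is drawn first, so UI 2 ends up on top of UI 1.
        _ui1.sortingOrder = 0;
        _ui2.sortingOrder = 10;

        // Where the pink and blue balls land relative to each canvas is decided
        // by their distance from the UI camera compared to the canvas plane
        // distance, not by these sorting orders.
    }
}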

How to create the camera stack?

To create a camera stack we will first need to create a new camera and set its render type to “Base”:

For best performance, we would also like to define what this camera renders, so we will set it to render only the Default layer, though we could create a dedicated layer for it if we choose.

Now we will create an overlay camera for our UI. We create a new camera and set its Render Type to “Overlay”:

For best performance, we would also like to define what this camera renders: we will create a UI layer and set the camera's culling mask to only the UI layer.

Then we should go back to our Base camera and add the Overlay Ui camera onto its stack:
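
The same setup can also be done from code. Below is a minimal sketch using URP's UniversalAdditionalCameraData; the camera references and layer names are assumptions for the example:

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class CameraStackSetupExample : MonoBehaviour
{
    [SerializeField] private Camera _baseCamera;      // renders the 3d world
    [SerializeField] private Camera _uiOverlayCamera; // renders the UI

    private void Awake()
    {
        // Configure the base camera: render type Base, culling only the Default layer.
        var baseData = _baseCamera.GetUniversalAdditionalCameraData();
        baseData.renderType = CameraRenderType.Base;
        _baseCamera.cullingMask = LayerMask.GetMask("Default");

        // Configure the UI camera: render type Overlay, culling only the UI layer.
        var uiData = _uiOverlayCamera.GetUniversalAdditionalCameraData();
        uiData.renderType = CameraRenderType.Overlay;
        _uiOverlayCamera.cullingMask = LayerMask.GetMask("UI");

        // Add the overlay camera to the base camera's stack.
        baseData.cameraStack.Add(_uiOverlayCamera);
    }
}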

I’ll note that the term Overlay may confuse you with the canvas Screen Space — Overlay render mode, but the two have nothing to do with each other.

If you have a need for more cameras you can do the same as we did with the UI camera, but please take note that each camera adds more overhead performance-wise. So think about whether or not you actually need it.

Should we use more than one stack?

You can have a different camera stack in each scene. But if you choose to do this, several issues arise.

Using several stacks means you won't have centralized control of your rendering. It would be the equivalent of each building having its own power grid instead of using a shared one. If you use additive scenes in your project, each time you load a scene with a stack, that stack takes over rendering. This causes issues: you have to duplicate stacks and make sure that any change you make to one stack is mirrored in the others. That is a maintenance problem and is prone to bugs.

Using a single stack

So we understand that using a single stack is the better choice. This requires the stack to live in a scene that is always loaded. Now we're facing a new issue. We have created the stack, which sits in scene A, but in scene B we have a canvas on which we want to render UI. We created a camera specifically for this, but the canvas requires a reference to that camera. Since the camera sits in a different scene we can't reference it in the inspector, and if the canvas doesn't have a camera reference it reverts to Overlay behavior.

To solve this we will create an infrastructure that will give us full control of our cameras and canvases and will allow us to set the camera to the canvas via code.
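
At its core, the fix is a two-line assignment; the infrastructure below is about having a single owner that performs it at the right time. A minimal sketch, where the helper name is arbitrary:

using UnityEngine;

public static class CanvasCameraBinder
{
    // The canvas lives in scene B; the uiCamera is the overlay camera
    // from the stack in the always-loaded scene A.
    public static void Bind(Canvas canvas, Camera uiCamera)
    {
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = uiCamera;
    }
}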

Our Infrastructure

In programming, we strive to give each thing in our code a single responsibility. The same goes for our canvases and cameras. If we were to access our cameras and canvases in many places in our code, we would create a mess. Instead, we want a single point in our code that is responsible for our cameras and one that is responsible for our canvases. This way no other place in the code needs to know about the cameras or the canvases; the services will be our facade for accessing all of them. Below I will describe how we are going to implement this:

We start off with a camera subscriber. This component's sole purpose is to inform the camera service that the camera exists and what type of camera it is. We use the Unity lifecycle events to make sure to subscribe and unsubscribe with the creation and destruction of the game object.

[RequireComponent(typeof(Camera))]
public class CameraServiceSubscriber : MonoBehaviour
{
    [SerializeField] private CameraType _cameraType;
    private ICameraServiceSubscription _cameraServiceSubscription;

    private void Awake()
    {
        var thisCamera = GetComponent<Camera>();
        _cameraServiceSubscription = CameraService.CameraSubscriptionInstance;
        _cameraServiceSubscription.SubscribeCamera(_cameraType, thisCamera);
    }

    private void OnDestroy()
    {
        _cameraServiceSubscription?.UnsubscribeCamera(_cameraType);
    }
}
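
For reference, here is a minimal sketch of what the subscription interface used above might look like. Its body isn't shown in this post, so the signatures are assumptions inferred from the calls in the subscriber:

using UnityEngine;

// A sketch of the subscription interface assumed above; the real signatures may differ.
public interface ICameraServiceSubscription
{
    void SubscribeCamera(CameraType cameraType, Camera camera);
    void UnsubscribeCamera(CameraType cameraType);
}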

Similar to the camera subscriber, we create a subscriber for canvases. This component's sole purpose is to inform the canvas service that the canvas exists and to give it tags (I will go into detail later about why we need these tags).

public class CanvasSubscriber : MonoBehaviour
{
    [SerializeField] private List<CanvasTagType> _canvasTagTypes;

    private ICanvasServiceSubscription _canvasServiceSubscription;
    private Canvas _canvas;
    private CanvasSubscriberData _canvasSubscriberData;

    private void Awake()
    {
        _canvas = gameObject.GetComponent<Canvas>();
        _canvasServiceSubscription = CanvasService.CanvasServiceSubscriptionInstance;
        _canvasSubscriberData = new CanvasSubscriberData(_canvas, _canvasTagTypes);
        _canvasServiceSubscription.SubscribeCanvas(_canvasSubscriberData);
    }

    private void OnDestroy()
    {
        _canvasServiceSubscription?.UnsubscribeCanvas(_canvasSubscriberData);
    }
}
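
The CanvasSubscriberData type isn't shown in this post; a minimal sketch of what it could look like, with the member names (Canvas, Tags) inferred from how the service code below uses it:

using System.Collections.Generic;
using UnityEngine;

// A sketch of the data holder passed to the canvas service.
public class CanvasSubscriberData
{
    public readonly Canvas Canvas;
    public readonly List<CanvasTagType> Tags;

    public CanvasSubscriberData(Canvas canvas, List<CanvasTagType> tags)
    {
        Canvas = canvas;
        Tags = tags;
    }
}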

Now here is where we can easily link the canvas with the camera. When we call the SubscribeCanvas function in the canvas service, it calls the SetCanvasCamera function in the camera service.

public void SubscribeCanvas(CanvasSubscriberData canvasSubscriberData)
{
    if (_canvasSubscribers.Contains(canvasSubscriberData))
    {
        Debug.LogError($"canvas: {canvasSubscriberData.Canvas.name} is already subscribed");
        return;
    }
    _canvasSubscribers.Add(canvasSubscriberData);
    _cameraService.SetCanvasCamera(CameraType.Ui, canvasSubscriberData.Canvas);
}
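
SetCanvasCamera itself isn't shown in this post. A plausible sketch, reusing the same _cameras dictionary as the culling-mask code further down, could look like this:

public void SetCanvasCamera(CameraType type, Canvas canvas)
{
    if (!_cameras.TryGetValue(type, out var camera))
    {
        Debug.LogError("camera not found");
        return;
    }
    // Point the canvas at the stack's UI camera so it no longer falls back to Overlay.
    canvas.renderMode = RenderMode.ScreenSpaceCamera;
    canvas.worldCamera = camera;
}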

The beauty of doing it this way is that any time a canvas is created, either via prefab or scene, it is added to the canvas service and thereby receives a camera. Likewise, any time a canvas is destroyed, either by the destruction of its game object or by unloading a scene, we make sure that it is unsubscribed from the canvas service, removing the danger of memory leaks.

How does this improve our control of rendering?

A common mistake I have seen in many projects is the assumption that just because you can't see something, it isn't being rendered. Unfortunately, this is far from the case. Imagine that you have a rich 3d world in your game and then you load a nice UI shop over it. You might be tempted to think that because the shop covers the world, the world isn't being rendered. You would be mistaken. Similarly, just because the shop UI is in front doesn't mean that another UI underneath it isn't being rendered.

Good thing we have our services that control all the canvases and all the cameras.

When we have UI covering the screen, we want to turn off the base camera so it will not render the 3d objects. But we can't just disable it: because of how the camera stack works, if we disable the base camera the overlays stop rendering as well. Instead, we need to remove all the layers from the base camera's culling mask. For that we have the following code in the camera service:

public void TurnCameraCullingMasksOn(CameraType type)
{
    if (!_cameras.TryGetValue(type, out var camera))
    {
        Debug.LogError("camera not found");
        return;
    }
    camera.cullingMask = _camerasCullingMaks[type];
}

public void TurnCameraCullingMasksOff(CameraType type)
{
    if (!_cameras.TryGetValue(type, out var camera))
    {
        Debug.LogError("camera not found");
        return;
    }
    camera.cullingMask = 0;
}
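
Usage is then a single call from wherever the full-screen UI is opened and closed. A sketch, where CameraType.Base is an assumed enum value for the stack's base camera and CameraService.Instance is an assumed accessor:

using UnityEngine;

public class FullScreenUiExample : MonoBehaviour
{
    private CameraService _cameraService;

    private void Awake()
    {
        _cameraService = CameraService.Instance; // assumed accessor
    }

    public void OnFullScreenUiOpened()
    {
        // Stop rendering the 3d world while the UI covers the screen.
        _cameraService.TurnCameraCullingMasksOff(CameraType.Base);
    }

    public void OnFullScreenUiClosed()
    {
        // Restore the camera's original culling mask.
        _cameraService.TurnCameraCullingMasksOn(CameraType.Base);
    }
}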

To stop rendering unneeded UI we have the following code in our canvas service:

public CanvasRenderLock StopRenderingAllExcept(List<CanvasTagType> canvasTagTypes)
{
    var canvasesToLock = Enum.GetValues(typeof(CanvasTagType)).Cast<CanvasTagType>().ToList();
    foreach (var canvasTagType in canvasTagTypes)
    {
        canvasesToLock.Remove(canvasTagType);
    }
    var canvasLock = new CanvasRenderLock(canvasesToLock);
    foreach (var canvasType in canvasesToLock)
    {
        _renderLocks[canvasType].Add(canvasLock.Guid);
    }
    UpdateRenderingOfSubscribers();
    return canvasLock;
}

public void UnlockRenderLock(CanvasRenderLock canvasRenderLock)
{
    if (canvasRenderLock == null)
    {
        Debug.LogError("canvasRenderLock is null");
        return;
    }

    foreach (var canvasTagType in canvasRenderLock.CanvasesToLockList)
    {
        if (_renderLocks[canvasTagType] != null)
        {
            _renderLocks[canvasTagType].Remove(canvasRenderLock.Guid);
        }
    }
    UpdateRenderingOfSubscribers();
}

private void UpdateRenderingOfSubscribers()
{
    foreach (var canvasSubscriberData in _canvasSubscribers)
    {
        var shouldRender = true;
        foreach (var tag in canvasSubscriberData.Tags)
        {
            var tagHasLockOnIt = _renderLocks[tag].Count != 0;
            if (tagHasLockOnIt)
            {
                shouldRender = false;
            }
            else
            {
                // If the subscriber has a tag which doesn't have a lock on it, the subscriber will be rendered.
                shouldRender = true;
                break;
            }
        }
        canvasSubscriberData.Canvas.enabled = shouldRender;
    }
}

public class CanvasRenderLock
{
    public readonly string Guid;
    public readonly List<CanvasTagType> CanvasesToLockList;

    public CanvasRenderLock(List<CanvasTagType> canvasesToLockList)
    {
        Guid = System.Guid.NewGuid().ToString();
        CanvasesToLockList = canvasesToLockList;
    }
}

We can call StopRenderingAllExcept, which goes over every canvas we have and checks its tags; if the canvas has none of the tags we passed to the function, its rendering is disabled. Note that we turn off the canvas's rendering and not its game object, because if we were to turn off the game object and turn it on again later it would be set as dirty and the canvas would have to be rebuilt. When we want to turn the disabled canvases back on, we call UnlockRenderLock with the CanvasRenderLock object we received when calling StopRenderingAllExcept.
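
For example, opening and closing a full-screen shop could look roughly like this. CanvasTagType.Shop, CanvasService.Instance, and the field names are assumptions for the example:

using System.Collections.Generic;
using UnityEngine;

public class ShopScreenExample : MonoBehaviour
{
    private CanvasService _canvasService;
    private CanvasRenderLock _shopRenderLock;

    private void Awake()
    {
        _canvasService = CanvasService.Instance; // assumed accessor
    }

    public void OpenShop()
    {
        // Keep only canvases tagged as Shop rendering; remember the lock so we can undo it.
        _shopRenderLock = _canvasService.StopRenderingAllExcept(new List<CanvasTagType> { CanvasTagType.Shop });
    }

    public void CloseShop()
    {
        // Release the lock so the other canvases render again.
        _canvasService.UnlockRenderLock(_shopRenderLock);
        _shopRenderLock = null;
    }
}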

TL;DR:

Create a single camera stack for all scenes in the game. Have a base camera for 3d objects and an overlay camera for UI. This way you can have 3d models and particles in your UI. Create a service that is responsible for all canvases and a service that is responsible for all cameras. With those services, turn off the rendering of things that are being rendered but aren't visible. That pretty much covers it.
