Pathfinding Like A King — Part 2

IronEqual
9 min read · Sep 2, 2017

Three methods used for pathfinding on three axes in games.

HEY!

As the “Part 2” in this article’s title says, you are reading the second part of an article about entity pathfinding in games. If you didn’t read the first part, I invite you to do so here.

If you read it already or don’t care, let’s continue our journey into the world of 3D movement with three ways to achieve entity movement on a third axis. Again, as in Part 1, I’m talking about movement that is calculated for entities like AIs, or any other type of entity that needs logical/calculated movement.

Introduction

From what I found about movement on the third axis in games, the need appeared around 1990–1994 with games like Descent. This new type of game, called “6 DOF” for 6 Degrees Of Freedom, needed a way to move entities that are not controlled by the player in a logical way. Therefore, a new type of navigation system was needed, as older ones were limited to planes, as we saw in Part 1 of this article. We will study how developers managed to solve this technical challenge.

Like Descent, other games had the same kind of needs but used other solutions.

Today, only a few games need this type of movement system: games taking place in space, flying games, or games with drones. It’s complex enough that it should be used only when it’s the only solution available.

We’ll take a look at three ways to do this type of movement. The order is not important, as they all have pros and cons.

Let’s start with Volume Navigation

Volume Navigation

This one is very close to what we already saw with 2D movement. Indeed, an algorithm like the A* I talked about in Part 1 works on a graph of nodes next to each other, meaning that if we can create the same situation in 3D, we should be able to use this algorithm with no problem. That’s what Navigation Volumes are.

A Navigation Volume is a 3D grid of blocks used as nodes in a volume. We split the environment into a grid that is invisible to the player. This grid makes it possible to use the same type of logic as on a 2D grid: if a “blocking” element is inside one of these blocks, the block is considered a blocked node. As in 2D pathfinding, a cost can be attributed to each node.
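To make the idea concrete, here is a minimal sketch of A* running on such a 3D grid of blocks. The grid layout, the `blocked` and `cost` fields, and the 6-connected neighborhood are illustrative assumptions, not a real engine’s API:

```python
# Sketch of A* on a uniform voxel grid. Each voxel is a node; voxels that
# overlap level geometry are flagged "blocked". All names are illustrative.
import heapq

def neighbors(node, grid):
    # 6-connected neighborhood: one step along each axis.
    x, y, z = node
    for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
        n = (x + dx, y + dy, z + dz)
        if n in grid and not grid[n]["blocked"]:
            yield n

def heuristic(a, b):
    # Manhattan distance is admissible on an axis-aligned grid.
    return sum(abs(p - q) for p, q in zip(a, b))

def astar(grid, start, goal):
    open_set = [(heuristic(start, goal), start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for n in neighbors(current, grid):
            tentative = g[current] + grid[n]["cost"]
            if tentative < g.get(n, float("inf")):
                came_from[n], g[n] = current, tentative
                heapq.heappush(open_set, (tentative + heuristic(n, goal), n))
    return None  # no path exists

# Tiny 2x1x3 grid with one blocked voxel in the middle of the direct route.
grid = {(x, 0, z): {"blocked": False, "cost": 1} for x in range(2) for z in range(3)}
grid[(0, 0, 1)]["blocked"] = True
path = astar(grid, (0, 0, 0), (0, 0, 2))  # routes around the blocked voxel
```

The only 3D-specific part is the neighbor function: the search itself is exactly the 2D algorithm from Part 1.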

This 3D grid is often called a “Voxel Grid”. A voxel is a volumetric element, often presented as the 3D equivalent of a pixel, although it may not contain color data.

Here, each block in this Voxel Grid contains data invisible to the player.

In this screenshot, we can see a visual representation of a Voxel Grid. Here, green blocks represent non-blocking voxels/nodes, and red ones are those containing a collision, which entities should not move inside or pass through. This grid is fixed in size; others use voxels whose size changes depending on the scene/geometry complexity at different positions. Using dynamically sized voxels can greatly improve compute time, for example by reducing the number of nodes in empty spaces. The precision of such a system is directly tied to the size of the voxels in the grid, but so is its performance.

Indeed, compared to a Navmesh, where faces can easily be computed to cover precisely the surface usable for navigation, a slightly oversized block in a voxel grid can create an incorrect navigation zone.

To create a voxel grid with different sizes of voxels (dynamically or not), a specific type of node tree is used: an “Octree”. In this tree, each node has eight children, making it possible to divide the space into eight smaller, more precise spaces. This can be repeated until we get the desired precision.
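The subdivision rule can be sketched in a few lines: keep a cube whole while it is free of geometry, and only split the cubes that intersect an obstacle. The `is_blocked` test and class layout below are illustrative stand-ins for whatever the engine provides:

```python
# Minimal octree sketch: subdivide a cubic region until each leaf is either
# free of obstacles or has reached a minimum size. Names are illustrative.
class OctreeNode:
    def __init__(self, center, size):
        self.center, self.size = center, size
        self.children = []   # empty list means this node is a leaf
        self.blocked = False

    def subdivide(self, is_blocked, min_size):
        # is_blocked(center, size) -> True if geometry intersects this cube.
        if not is_blocked(self.center, self.size):
            return  # fully free: keep as one big navigable leaf
        if self.size <= min_size:
            self.blocked = True  # can't refine further: mark blocked
            return
        half = self.size / 2
        cx, cy, cz = self.center
        for dx in (-1, 1):
            for dy in (-1, 1):
                for dz in (-1, 1):
                    child = OctreeNode(
                        (cx + dx * half / 2, cy + dy * half / 2, cz + dz * half / 2),
                        half,
                    )
                    child.subdivide(is_blocked, min_size)
                    self.children.append(child)

def count_leaves(node):
    if not node.children:
        return 1
    return sum(count_leaves(c) for c in node.children)

# Single point obstacle near one corner of a size-4 cube spanning [0,4]^3.
obstacle = (0.5, 0.5, 0.5)

def is_blocked(center, size):
    # Axis-aligned cube vs point containment test.
    return all(abs(o - c) <= size / 2 for o, c in zip(obstacle, center))

root = OctreeNode((2.0, 2.0, 2.0), 4.0)
root.subdivide(is_blocked, min_size=1.0)
```

Only the corner containing the obstacle gets refined down to small voxels; the seven empty octants stay as single large leaves, which is exactly where the compute savings come from.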

Like a Navmesh, a Voxel Grid can be generated at runtime or pre-computed, depending on the game’s needs. It can also be dynamic, making it possible to update the blocks/nodes depending on the moving entities inside them.

You can see just here a representation of a path calculated on a voxel grid.

This is running on UE4 with this plugin.

This type of system is really performant inside fixed and not-too-complex volumes. Indeed, with an A* algorithm, compute time grows quickly with the size and complexity of the grid.

Today, not many games use this type of system.

Another similar technique uses tetrahedra instead of voxels to fill a volume, the way faces do on a 2D surface. In a similar way, a classic A* algorithm is then used to find a path through the tetrahedra’s faces. This is more precise than a voxel-based solution but requires too much compute power/time to be interesting for games. Finally, some voxel-based games (like Minecraft) can easily implement this type of navigation solution without even adding a new voxel grid, as they can store navigation data inside their existing one.

Let’s take a look at a second type of solution for 3D movement, shall we?

Autonomous Navigation

This one takes a totally different approach to the logical movement of entities in a space. Instead of using computed or pre-computed data and only following calculation results, the entity has multiple virtual inputs/sensors, enabling it to compute its movement dynamically at runtime. This is closer to how robots work in the real world, where sensors let them react to their environment. This is the solution used in most of the “6 DOF” games I talked about earlier: in those games, there is no surface or orientation that can dictate the navigation volume, and a navigation volume was not realistic at the time in terms of performance.

The idea is to repurpose tools games already have. With Triggers and Raycasts, which are mostly used for interactions between the game’s logic and the player, we can get enough data about the entity’s surroundings to move it in a logical way and dodge obstacles.
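A toy version of this sensing loop, outside any engine: the agent heads toward its goal, and a forward sensor ray that hits an obstacle blends in a push away from it. The `ray_hits_sphere` test stands in for an engine raycast (Unity’s `Physics.Raycast`, for instance), and every name here is an illustrative assumption:

```python
# Toy steering sketch: a forward sensor ray replaces an engine raycast.
import math

def ray_hits_sphere(origin, direction, center, radius, max_dist):
    # Standard ray/sphere intersection; direction assumed normalized.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False
    t = -b - math.sqrt(disc)
    return 0 <= t <= max_dist

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def steer(pos, goal, obstacles, sensor_len=3.0):
    desired = normalize([g - p for g, p in zip(goal, pos)])
    for center, radius in obstacles:
        if ray_hits_sphere(pos, desired, center, radius, sensor_len):
            # Blend goal-seeking with a push away from the sensed obstacle.
            away = normalize([p - c for p, c in zip(pos, center)])
            return normalize([d + a for d, a in zip(desired, away)])
    return desired  # nothing sensed ahead: head straight for the goal

# Obstacle slightly off the direct line: the agent swerves around it.
direction = steer([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [((3.0, 0.0, 0.2), 1.0)])
```

Real implementations typically cast several “whisker” rays at different angles rather than a single forward one, but the decision logic stays this simple: no precomputed path, only local reactions.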

Debug view from SIGTRAP on the movements in “SublevelZero”, a 6 DOF game

This visualization shows us how this movement solution works: entities move toward an objective while using multiple raycasts to detect elements around them.

This can be used in games where the environment is not too complex and where entity movement cannot rely on other systems. Another example of this type of solution would be a plane fighting game or a space game, where entities move around in a fairly empty, large space containing only a few (mostly dynamic) obstacles, like other planes or asteroids.

You can see right here another example, with four “urgency” triggers in front of the entity. They make it possible to correct the entity’s trajectory at the last moment if something gets inside one of these triggers.

Using local inputs makes it easier to orbit around another element in the world than with other movement solutions. Instead of calculating a trajectory, we can simply add forces to keep the entity at a specific distance from another one.
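A minimal sketch of that idea: a spring-like radial force holds the entity near a target distance from an anchor, and a small tangential push makes it circle. The constants and the integration scheme are illustrative assumptions, not tuned gameplay values:

```python
# Orbit-by-forces sketch: no trajectory is ever computed, only local forces.
import math

def orbit_step(pos, vel, anchor, target_dist, dt=0.02,
               stiffness=4.0, tangent_speed=2.0):
    offset = [p - a for p, a in zip(pos, anchor)]
    dist = math.sqrt(sum(x * x for x in offset)) or 1e-6
    radial = [x / dist for x in offset]
    # Spring force: pushes out when too close, pulls in when too far.
    force = [(target_dist - dist) * stiffness * r for r in radial]
    # Tangential direction (perpendicular to radial, in the XZ plane).
    tangent = [-radial[2], 0.0, radial[0]]
    accel = [f + t * tangent_speed for f, t in zip(force, tangent)]
    vel = [v + a * dt for v, a in zip(vel, accel)]
    vel = [v * 0.98 for v in vel]  # damping keeps the motion stable
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

# Simulate ~40 seconds: the entity settles into a circle near the target
# distance, without a single path or waypoint being calculated.
pos, vel = [6.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(2000):
    pos, vel = orbit_step(pos, vel, [0.0, 0.0, 0.0], target_dist=4.0)
dist = math.dist(pos, (0.0, 0.0, 0.0))
```

Because the anchor is read fresh every step, this keeps working even when the orbited object itself moves, which is exactly what makes the force-based approach attractive here.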

You can see right here a ship using multiple moving raycasts to detect obstacles, but also to use them as landmarks for rotation.

The direct computation cost is really low compared to a more classic solution. Indeed, the full path is never calculated, and nothing is recomputed for events that do not directly affect the moving entity. However, the compute cost scales directly with the number of entities, whereas other solutions could be optimized by making multiple entities share the same path, for example.

Moreover, this system needs a destination to work, but if the direct line to that destination is opposed to the path actually needed to reach it (like the situations we saw in Part 1 with A*), it’s simply unusable. For such complex paths, you need either a really elaborate sensor system or another movement solution combined with this one.

However, this solution is really malleable and does not rely on fixed conditions like other, algorithm-based solutions. For example, the size of the volume/space does not matter, and neither does its orientation (up/down/left/right). It’s the best solution for plane or spaceship games. It’s often combined with another system that provides basic pathfinding, to help produce more coherent paths.

Now let’s talk about the last type we’ll see in this article: navigation by projection.

Projection Based Navigation

This is one of the most used types of systems for 3D movement in games today. This part will mention a lot of what I talked about in Part 1. Indeed, navigation by projection basically means projecting the position of entities onto a navmesh, so that every path calculation works as in a more classic game, while adding a bit of the autonomous solution to move on the vertical axis. The pathfinding part is done like in any other game that uses pathfinding.

For the projected axis, only a few sensors/inputs are needed (compared to a fully autonomous solution). The entity only needs data about the obstacles coming up on its path. Other data, like the distance to the ground, can be useful to adjust its movement depending on different parameters.

Right here you can see an implementation of a projection-based solution for a drone in MAZE. For the path itself, it has all the characteristics of other systems using Navmeshes. A boxcast and a few raycasts make the drone move on the vertical axis, letting it dodge walls and other obstacles and then return to its base height.
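The split between the two systems can be sketched very simply: the horizontal waypoints come from an ordinary 2D navmesh query, and the vertical coordinate is filled in afterwards by a downward “raycast”. Here a dictionary heightmap stands in for the engine raycast, and `HOVER`, `ground_height`, and the waypoints are all illustrative assumptions:

```python
# Projection-based movement sketch: 2D path + separately computed height.
HOVER = 1.5  # desired flying height above the ground

heightmap = {  # (x, z) -> ground height; a raised obstacle at x == 2
    (0, 0): 0.0, (1, 0): 0.0, (2, 0): 2.0, (3, 0): 0.0, (4, 0): 0.0,
}

def ground_height(x, z):
    # Stand-in for a downward engine raycast against the level geometry.
    return heightmap.get((round(x), round(z)), 0.0)

def follow_projected_path(path_2d):
    # path_2d: waypoints from a classic navmesh/A* query on the ground plane.
    positions = []
    for x, z in path_2d:
        y = ground_height(x, z) + HOVER  # vertical axis handled separately
        positions.append((x, y, z))
    return positions

# The drone rises over the bump at x == 2, then returns to its base height.
path = follow_projected_path([(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)])
```

All the hard pathfinding questions stay in 2D; the third axis is reduced to a local, per-step adjustment, which is why this approach is so cheap to add on top of an existing navmesh system.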

As it uses a navmesh, this solution works only in situations where the environment can serve as a plane of projection. That plane doesn’t have to be fixed: with some navmesh solutions, like the one in Unity, it’s really flexible. Any axis can be the axis of projection, and they can be fully dynamic.

Here we can see the Unity 5.6 dynamic navmesh system. The entity is a classic one, moving directly on the navmeshes, but this gives you a great idea of how flexible this can be.

For the pathfinding part, it’s exactly like a classic 2D pathfinding.

This solution is the most common. Every engine already has some sort of navmesh navigation solution, so implementing a projection system is the easiest way to get movement on three axes. Often, an entity moving in 3D is not important enough to justify investing time into a more complex solution that has more constraints and is harder to implement.

You could even project obstacles onto the navmesh to dodge them directly during pathfinding, making this system as performant in 3D as in 2D!

Of course, it’s not perfect: the precision on the third axis, as with autonomous navigation, depends on the sensors the entity has. It will always be less precise on the projected axis than on the other two. The level design must be done with the projection part in mind, to make sure everything stays coherent. And finally, this solution separates the visuals from the path calculation, which can be a pain in some situations.

HERE WE ARE. You have read everything I had to tell you about how to move an entity on three axes. None of these methods is perfect, but one may fit your next game, or your current one. We tried all of them for MAZE. We are using the projection one for the moment, but it’s only for our prototype, and we’ll see what we use in the final game.

I hope you liked this second part! Please give us your feedback; we are not experts in this specific domain, and what you read in this article comes from what we learned by making games and trying to find solutions to our problems. We hope this can help others too.

IronEqual

Indie gaming studio. Bringing you awesome content randomly!