The Problem With Character Controllers In Unity3D And How I Solved It

Marius Holstad · Published in MariusGames · 6 min read · Oct 25, 2015


A deep dive into character controllers in Unity 5, the “CharacterController” component and design principles behind 3rd-person controllers.

The following will be an explanation of the thinking and philosophies behind the controller, how it was implemented and the hurdles met along the way. Whatever device you are on, as long as you are using a fairly modern browser, you will be able to play a WebGL version of the character controller later in this post. (Okay, almost all devices. You can probably load the game on your phone, but you’ll need a keyboard to move the character.)

Also, I highly recommend checking out my podcast & transcript.

A quick overview of the project:
- this is a one-man project done entirely by me
- on the project I acted as a game designer and C# scripter / programmer
- it was done in the Unity 5.2 game engine
- scripts were written in C# using the Sublime Text editor
- the controller was designed to work on any kind of platform
- play the game later in this post

Intention

In a world where characters in three-dimensional space are the focus of the game, whoever knows how to make a character controller is king.

The aim of this project was to study and create a superb 3D controller for a human, animal or creature-like character. Some of the initial research I did revealed very quickly that the physics engine is a constant battle for character controllers. I would argue that at the most fundamental level there are two different types of controllers: physics-based and non-physics-based. For our purposes a non-physics-based controller would be the best fit, because this type of controller puts the game designer in charge and makes it possible to create precise and tight controls that feel right.

3D maths, vector operations, programming methods and techniques — there is a lot of knowledge involved in making a character controller. Creating a non-physics-based controller means that all desired physics-based behaviour has to be added manually in code, rather than handled by the physics engine. But often, we don’t want realistic physics — we want stylised physics. The following part describes in detail my thought process and some of the design decisions I’ve made, starting with basic principles and gradually moving into the more complex behaviour of the controller.

Design & Development

The purpose of a character controller is to move an avatar from one position to another in a 3D world. The input (whatever the source) comes in the form of a three-dimensional vector (Vector3), with its X and Z coordinates indicating which direction we want to move the avatar in. Rolling hillsides are almost impossible to climb if we interpret the input directly, because the input knows nothing about the angle of the surface the avatar is standing on. The input is therefore projected onto the ground normal so that the avatar can move up and down slopes without any impact on the speed. The avatar also receives a small force to keep itself grounded while moving.
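As a rough illustration rather than the project's actual code, the projection step might look something like this in C#; groundNormal is assumed to come from a separate ground check, and moveSpeed and groundStickForce are hypothetical tuning values:

```csharp
using UnityEngine;

// A minimal sketch of the idea, not the controller's actual code.
public class SlopeMovement : MonoBehaviour
{
    public float moveSpeed = 5f;          // hypothetical tuning value
    public float groundStickForce = 2f;   // hypothetical tuning value

    // Projects the raw input onto the ground plane so slopes don't change
    // the speed, and adds a small push along the negative ground normal
    // to keep the avatar stuck to the ground while it moves.
    public Vector3 GetMoveVector(Vector3 rawInput, Vector3 groundNormal)
    {
        Vector3 alongGround = Vector3.ProjectOnPlane(rawInput, groundNormal).normalized;
        return alongGround * rawInput.magnitude * moveSpeed - groundNormal * groundStickForce;
    }
}
```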

In the real world, no movement starts or stops instantly. I wanted the avatar's movement to feel equally natural, so an adjustable acceleration / deceleration variable was added to smooth the start and stop of movement.

The cube on the left starts and stops instantly; the cube on the right has the acceleration / deceleration variable applied.
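One way to implement that easing, sketched under my own assumptions (the acceleration and deceleration values are made up), is to move the current velocity towards the target velocity by a limited amount each frame:

```csharp
using UnityEngine;

public class SmoothedVelocity : MonoBehaviour
{
    public float acceleration = 20f;   // hypothetical tuning value
    public float deceleration = 30f;   // hypothetical tuning value

    private Vector3 currentVelocity;

    // Eases the current velocity towards the target instead of snapping to it.
    public Vector3 UpdateVelocity(Vector3 targetVelocity, float deltaTime)
    {
        // Speed up at one rate, slow down at another.
        float rate = targetVelocity.sqrMagnitude > currentVelocity.sqrMagnitude
            ? acceleration
            : deceleration;

        currentVelocity = Vector3.MoveTowards(currentVelocity, targetVelocity, rate * deltaTime);
        return currentVelocity;
    }
}
```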

The same principle applies to the rotation of the avatar. The rotation is made to follow the forward direction of the avatar’s velocity (the movement direction and speed) with a limitation on how fast it can rotate.

The cube gradually rotates to the direction it is moving in.
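A sketch of that gradual turn; the turnSpeed value here is a placeholder of mine, not something from the project:

```csharp
using UnityEngine;

public class TurnTowardsVelocity : MonoBehaviour
{
    public float turnSpeed = 360f;   // degrees per second, placeholder value

    // Rotates the avatar towards its horizontal movement direction,
    // limited by turnSpeed so the turn is gradual rather than instant.
    public void FaceVelocity(Vector3 velocity)
    {
        Vector3 flat = new Vector3(velocity.x, 0f, velocity.z);
        if (flat.sqrMagnitude < 0.0001f)
            return; // no meaningful movement, keep the current facing

        Quaternion target = Quaternion.LookRotation(flat, Vector3.up);
        transform.rotation = Quaternion.RotateTowards(transform.rotation, target, turnSpeed * Time.deltaTime);
    }
}
```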

Ray casts are useful for getting information about nearby objects (colliders). A ray is cast from a starting point in a direction and returns the first intersecting collider. It can give information about which surface (or normal) was hit, in addition to other details. A character controller is responsible for stopping the avatar from moving through objects. Unity comes with a CharacterController component that takes care of a lot of the basics like collisions.
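For example, a single downward ray cast is a common way to obtain the ground normal used in the projection step earlier. This is my own illustrative snippet, with a made-up probeDistance:

```csharp
using UnityEngine;

public class GroundProbe : MonoBehaviour
{
    public float probeDistance = 1.5f;   // placeholder probe length

    // Casts a ray straight down and returns the surface normal of whatever
    // collider is hit, or Vector3.up if nothing is below the avatar.
    public Vector3 GetGroundNormal()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, Vector3.down, out hit, probeDistance))
        {
            return hit.normal;
        }
        return Vector3.up;
    }
}
```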

You might have heard someone describe controls as fluid, which basically means that the movement is smooth and consistent (even when colliding). I noticed that if the avatar moved along a wall at an angle, the velocity slowed down considerably. Ray casting might seem like a logical solution, but after some experimentation I found the simplest and most effective fix by observing the resulting velocity after the collision: when the avatar collided, it still moved slightly in the direction I wanted. By normalizing that movement and multiplying it by our desired speed (magnitude), we get smooth movement while colliding every time.
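Here is one way to read that idea in code, as a hedged sketch rather than the article's implementation; actualMovement stands for how far the avatar really travelled once the collision was resolved:

```csharp
using UnityEngine;

public static class SlideCorrection
{
    // Takes the velocity we asked for and the movement that actually happened
    // after the collision, and returns a corrected velocity: the deflected
    // direction, normalised, scaled back up to the desired speed.
    public static Vector3 Correct(Vector3 desiredVelocity, Vector3 actualMovement, float deltaTime)
    {
        Vector3 actualVelocity = actualMovement / deltaTime;

        // Deflected: we still moved, but slower than we asked for.
        bool wasDeflected = actualVelocity.sqrMagnitude > 0.0001f
                            && actualVelocity.magnitude < desiredVelocity.magnitude * 0.99f;

        return wasDeflected
            ? actualVelocity.normalized * desiredVelocity.magnitude
            : desiredVelocity;
    }
}
```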

This sounds good and all, but as I delved deeper into the theory of collisions, I became aware of something. Anyone who has worked with Unity and non-physics-based character controllers before knows that collision detection with kinematic rigidbodies (including other CharacterController components) is a major problem. Moving objects, like say moving platforms, will go straight through and make the avatar skyrocket up into the air.

The problem is complicated further by the fact that Unity doesn’t give us access to the appropriate methods and functions for fixing it. Sadly, OverlapSphere is the only function that returns all intersecting colliders with 100% certainty, which is not exactly ideal for non-spherical colliders like the CharacterController component’s capsule-style collider. The solution is to handle collisions manually using a series of OverlapSpheres.
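A rough sketch of that idea, approximating the capsule with a few stacked OverlapSphere calls; the radius, height and number of steps are placeholder values of mine:

```csharp
using UnityEngine;
using System.Collections.Generic;

public class CapsuleOverlapCheck : MonoBehaviour
{
    // Placeholder dimensions; in practice these would match the
    // CharacterController's radius and height.
    public float radius = 0.5f;
    public float height = 2f;

    // Approximates the capsule with a few stacked OverlapSphere calls and
    // collects every collider that currently intersects the avatar.
    public List<Collider> GetOverlappingColliders()
    {
        var results = new List<Collider>();
        Vector3 bottom = transform.position + Vector3.up * radius;
        Vector3 top = transform.position + Vector3.up * (height - radius);

        const int steps = 3;
        for (int i = 0; i < steps; i++)
        {
            Vector3 centre = Vector3.Lerp(bottom, top, i / (float)(steps - 1));
            foreach (Collider c in Physics.OverlapSphere(centre, radius))
            {
                if (c.transform != transform && !results.Contains(c))
                    results.Add(c);
            }
        }
        return results;
    }
}
```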

Unity 5 provides four main collider types: SphereCollider, BoxCollider, CapsuleCollider and MeshCollider. There are two different approaches to handling collisions: either correcting overlapping colliders by forcing them outside each other, or trying to prevent the overlap in the first place. The most scalable and reliable method is the former, correcting overlapping colliders, as it ensures the avatar ends up outside the collider no matter how it got there.

To correct overlapping colliders, we use OverlapSphere to find the intersecting collider, then find the closest point on it and place the avatar next to that point. Each collider primitive has its own mathematical solution for finding the closest point. The only tricky one is the MeshCollider: here we need to find the closest triangle on the mesh and the closest point on that triangle. Instead of reinventing the wheel, I used a BSPTree script for this purpose, which is attached to any MeshCollider the avatar collides with.
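For the primitive shapes the closest-point maths is short. These two helpers are my own illustrations rather than the project's code (the sphere version assumes uniform scale):

```csharp
using UnityEngine;

public static class ClosestPointUtil
{
    // Closest point on a sphere collider's surface to an arbitrary position.
    public static Vector3 OnSphere(SphereCollider sphere, Vector3 position)
    {
        Vector3 centre = sphere.transform.TransformPoint(sphere.center);
        float radius = sphere.radius * sphere.transform.lossyScale.x; // assumes uniform scale
        return centre + (position - centre).normalized * radius;
    }

    // Closest point on (or inside) a box collider: clamp the position to the
    // box's local extents, then transform back to world space.
    public static Vector3 OnBox(BoxCollider box, Vector3 position)
    {
        Vector3 local = box.transform.InverseTransformPoint(position) - box.center;
        Vector3 half = box.size * 0.5f;
        local.x = Mathf.Clamp(local.x, -half.x, half.x);
        local.y = Mathf.Clamp(local.y, -half.y, half.y);
        local.z = Mathf.Clamp(local.z, -half.z, half.z);
        return box.transform.TransformPoint(local + box.center);
    }
}
```

Once the closest point is known, the avatar can be pushed out along the vector from that point to its own position until the overlap is gone.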

The result speaks for itself — it now just works.

So without further ado…

Here’s a demo scene with the first complete version of my character controller.

Play the game here.

At around 50 MB, the browser should load it relatively quickly, but please give it time.

There is also a secret hidden in there somewhere. It’s a real challenge getting there though.

Additional design

There are a lot of additional features: gravity, jumping, sliding down slopes, keeping track of velocity, and detecting ground and wall normals. Support for moving and rotating platforms is a big one. It all comes together to make a solid, scalable character controller.
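Gravity and jumping in particular have to be integrated by hand in a non-physics-based controller; a tiny illustrative version, with values and names of my own choosing, could look like this:

```csharp
using UnityEngine;

public class VerticalMotion : MonoBehaviour
{
    public float gravity = 25f;     // placeholder value
    public float jumpSpeed = 8f;    // placeholder value

    private float verticalSpeed;

    // Integrates a simple stylised gravity and a jump impulse by hand,
    // since a non-physics based controller doesn't get these for free.
    public float UpdateVerticalSpeed(bool grounded, bool jumpPressed, float deltaTime)
    {
        if (grounded)
        {
            verticalSpeed = jumpPressed ? jumpSpeed : 0f;
        }
        else
        {
            verticalSpeed -= gravity * deltaTime;
        }
        return verticalSpeed;
    }
}
```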

Conclusion

A controller is the heart of a character and is vitally important for the success of a game. It will take longer than you expect: a lot of time, thinking and effort goes into a good character controller.

I hope this article gives you a little insight into my work process and how I think, but this is of course only a small excerpt of the whole process. If you have any questions or just want to get in touch, please do! You can reach me on Twitter or LinkedIn.

If you liked this, please give it a ♥ so others can enjoy it too

Marius Holstad

I make games so we can better understand our feelings. I respectfully urge you who study the mystery, don't pass your days and nights in vain. GAME ON 💪