Applied Matrix Maths for Complex Locomotion Scenarios in Planet Coaster
During the development of Planet Coaster, my very talented colleague Owen McCarthy developed a crowd system that allowed our park to be filled with thousands of intelligent agents that navigate from goal to goal, playing all manner of flavourful animations and interactions. In this post I’d like to focus on those animations and interactions, and more precisely how some of the technical aspects of the system spilled onto the laps of the animation team.
Each member of the crowd is effectively a particle that flows from point A to point B depending on where the AI decides it would like to go. If a guest wants to stop to sit on a bench, they need to leave this particle flow, complete the interaction and then return to the crowd seamlessly, without any glitching or intersection.
While a crowd member is in the main crowd system, their position is controlled entirely by the particle to which they are attached. Any variation in speed or position is driven by the system, with their animations authored on the spot along with some precautions to ensure the speed of the locomotion in the animation doesn’t fall out of sync with the speed of the particle (which would cause a lot of sliding).
When a guest approaches a bench, they’re going to need to play a ‘sit-down’ animation, which will take the guest from a set distance away from the bench all the way to the seat itself. You couldn’t animate this as one linear movement because there are a lot of delicate speed changes, a turnaround in the guest’s orientation and a period of settling onto the bench once they’ve stopped. This level of control can’t be achieved while the guest is still acting like a flowing particle.
The crowd animation system was simplified to improve performance, so for this level of precise movement we needed to be able to hand control of the interaction over to our animators. It was going to take some maths fiddling to bridge the gap between code and art.
We can’t detach the guest from its particle, because all of its physics information, bounding box and so on are tied to that position. Instead, we need to author animation that moves relative to the particle’s movement, so that when the movement of the particle is applied to the animation, the resulting movement looks the same as what we authored.
I’m fully aware this might not be making any sense, but I’ll try my hardest to explain.
At the point a guest decides they want to sit on a bench, the guest is going to set its goal to a position in the park exactly X metres away from said bench. Once this goal is reached, the guest’s particle will cease to ‘flow’ and begin to translate in a straight, linear interpolation from the position it had in the crowd to the position it will arrive at on the bench, over the duration of the ‘transition to sitting down’ animation.
To ensure the animation stays looking as we authored it, we’re going to need to represent the guest’s particle behaviour as an object in Maya and counter-animate the “transition to sitting down” animation on every frame, relative to the movement of the guest’s particle position.
Why counter-animate it? How does this differ from a ‘conventional’ animation system?
One of the limitations of the crowd system we employed in Planet Coaster is that we can’t use bone-space blending, where during a blend each bone’s orientation is found by SLERPing between its bone-space orientations (relative to its parent) in the two animations. This is because, unlike our regular animation system, the bones in our crowd guests have no concept of hierarchy. Rather than each bone rotating around its parent as you’d expect in a regular FK skeleton, every bone just exists on a flat hierarchy in model space, which means we only have access to model-space linear blends when we want to change from one animation to the next.
To imagine this, think about an animation where a character is standing with their arms by their sides, and then we blend to an animation where they’re holding their arms up above their head over a duration long enough to see the blend. Normally you’d expect to see the upper arm bones rotating from pointing down to pointing up, with the upper arms staying in proportion throughout the blend (spherical linear interpolation).
However in this system, in the absence of a hierarchy, during the blend the shoulder, elbow and hand bones would translate linearly from their first position to the end position (linear interpolation) and 50 percent of the way through the blend the arms would squash together.
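This squashing is easy to demonstrate with a few lines of throwaway Python (a toy illustration, not engine code): linearly interpolating a bone’s model-space position halfway between ‘arm down’ and ‘arm up’ collapses the bone’s length to nothing, where a spherical interpolation would have kept it at one unit throughout.

```python
import math

def lerp(a, b, t):
    # component-wise linear interpolation: what a model-space cross-fade does
    return [av + (bv - av) * t for av, bv in zip(a, b)]

def length(v):
    return math.sqrt(sum(c * c for c in v))

# upper-arm direction, one unit long: hanging down at the start, raised at the end
arm_down = [0.0, -1.0, 0.0]
arm_up = [0.0, 1.0, 0.0]

halfway = lerp(arm_down, arm_up, 0.5)
print(length(halfway))  # 0.0 -- the bone has no length at the blend's midpoint
```

A SLERP between the same two directions would sweep the bone through a quarter-circle instead, keeping its length constant, which is exactly the behaviour the flat-hierarchy crowd skeleton can’t give us.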
[As a side note, this blending artefact caused us problems on the coasters when the guests threw their hands up and got T-Rex arms, but we solved this particular case with hand-authored transition animations].
What we’re aiming to do with the counter-animation is to create the illusion of having bone space blending with a cross-fade.
To achieve this, first we need to satisfy the following conditions:
- We need to be able to blend from any frame of any of the locomotion animations into the start of the ‘transition to sitting down’ animation. This means the sitting down animation needs to be exported on the spot, and not move away from the root of the scene.
- We need the end pose of the ‘transition to sitting down’ animation to match the start pose of the actual sitting animation.
- We need the end pose of the sitting animation to match its own start pose, so that it loops cleanly.
So for the transition animation we know that we want the person to be facing ‘forward’ at the start and end of the animation. But the authored animation has the character turn 180 degrees over the course of the animation.
The ‘walking’ model will have its forward vector pointing towards the bench.
The ‘sitting on bench’ model will have a transform with a forward vector that is pointing from the centre of the bench away from the bench.
So to keep things on the spot (and thus permit a clean blend) the code will need to control and rotate the transforms over the course of the animation, and this is where the challenge of accurately counter-animating the authored animation to be held in-place comes in.
We are not going to do this by hand. The animators would never forgive me and the results would be inaccurate, sliding all over the place like a giraffe on ice skates. We’re going to need to get our maths on and that means getting involved with Transformation Matrices.
Enter the Transformation Matrix
Let’s step back from the problem for a moment and talk about the maths at the core of this solution. That will be fun for everyone.
If you’ve ever had to store information about the position and rotation of an object in Maya (or any other 3D package), chances are you’ve come across the Transformation Matrix as a concept.
If you’re unfamiliar with Matrices, we’re just talking about presenting the position and rotation of objects as a 4 x 4 grid of values that can be transformed at will and then restored into the good old-fashioned Translate X, Y, Z and Rotate X, Y, Z that you know and love.
In Python you’d recognise a Matrix in the following form:
TransformationMatrix([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
This is the matrix for an object located at 0, 0, 0 position and 0, 0, 0 rotation.
Algebraically this is presented like so:

  | rx  ry  rz  0 |
  | ux  uy  uz  0 |
  | fx  fy  fz  0 |
  | tx  ty  tz  1 |

What’s rxyz, uxyz and fxyz all about?
Okay, so in a transformation matrix, rotation isn’t presented in Euler angles, but rather as three vectors, Right (r), Up (u) and Forward (f).
You see, rotation doesn’t really exist on its own. It’s just two positions in space, and rotation is the angle of the trajectory that maps one to the other.
I find this easiest to grasp when I imagine it in 2D. So if you’ve got a piece of graph paper and you draw a dot at (0, 0), then draw another dot 1 unit up and 1 unit across (1, 1) and draw a line between them, the resulting “angle” is 45 degrees. Now imagine this in a 3D space and you’ve pretty much understood how rotation works in 3D software.
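That graph-paper picture maps straight onto atan2, the function most maths libraries use to turn a pair of coordinates back into an angle:

```python
import math

# a dot at (0, 0), another at (1, 1): the line between them sits at 45 degrees
angle = math.degrees(math.atan2(1.0, 1.0))
print(angle)  # 45 degrees, within floating-point precision
```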
Translation is represented as t and that’s just good old position, same as it ever was, just x, y, z.
Unlike the normal position and Euler rotation values that you use for animating, a point defined in a transformation matrix can be transformed, rotated, inverted or mirrored in all manner of ways, but at the end of it you’ll still produce values that can go back into Maya.
For instance, for this script we’re going to need to do a lot of querying an object’s position relative to another object’s position. To achieve this with matrices, you simply have to multiply the matrix of the first object by the inverse matrix of the second object, and the resulting matrix is the position of the first object in the space of the second. It’s a bit like parenting an object to another and getting the child object’s local position relative to its parent.
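This parenting trick can be sketched in plain Python. For clarity I’m sticking to pure translation matrices (no rotation), where the inverse and the product collapse to simple negation and addition of the translation row; the helper names below are illustrative, not Frontier’s code:

```python
def translation(x, y, z):
    # a transformation matrix with no rotation: identity 3x3, translation in row 4
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [x, y, z, 1.0]]

def inverse_translation(m):
    # with no rotation involved, inverting just negates the translation row
    x, y, z = m[3][:3]
    return translation(-x, -y, -z)

def mult_translations(a, b):
    # ...and multiplying just adds the translation rows
    ax, ay, az = a[3][:3]
    bx, by, bz = b[3][:3]
    return translation(ax + bx, ay + by, az + bz)

child = translation(2.0, 0.0, 3.0)
parent = translation(1.0, 0.0, 1.0)

# child's matrix times the inverse of the parent's = child in the parent's space
relative = mult_translations(child, inverse_translation(parent))
print(relative[3][:3])  # [1.0, 0.0, 2.0]
```

With rotation in the mix the inverse and product are more involved, but the recipe is identical: first matrix times inverse of the second gives you the first object in the second object’s space.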
Please stop talking. How’s this tool actually going to work?
Let’s break this down into logical steps, as if we were doing it manually.
In our scene we’ve got our rigged character, walking from origin, over to the bench and sitting down. Over the course of 50 frames of animation, he walks forward 100 cm and rotates 180 degrees in the Y axis. This movement is simplified, made linear and represented as the object in the scene labelled ‘SRB’ (synthetic root bone). This SRB represents the exact movement that the guest particle is going to follow in code.
The rigged character has a ‘moveAll’ controller which is the parent of the control rig in its entirety, IK objects included.
On every frame of this animation we need to work out the position of our character’s moveAll node, relative to the SRB object.
So the code is going to go something like this:
On every frame of the animation…
- Get the matrix of the moveAll controller (M_all)
- Get the matrix of the SRB (M_srb)
…and then the maths we spoke of earlier…
- Collect the offset between the moveAll and SRB controllers, by multiplying M_all by the inverse of M_srb, and store the result
And then in a second daring sweep of the timeline:
- Move the moveAll node to the position we stored in the first stage
The result should be an animation that is held in place, writhing about in a fashion that directly opposes the movement of the SRB node.
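Reduced to a single translation axis, the two sweeps can be modelled in a few lines of plain Python (a toy model of the logic with made-up frame values, not the Maya tool itself):

```python
# SRB translation per frame: the simplified, linear particle movement
srb = [0.0, 25.0, 50.0, 75.0, 100.0]
# authored moveAll translation per frame: the walk up to the bench
move_all = [0.0, 20.0, 45.0, 80.0, 100.0]

# first sweep: store moveAll's position relative to the SRB on every frame
offsets = [m - s for m, s in zip(move_all, srb)]

# second sweep: re-key moveAll with the stored offsets, leaving an animation
# that stays in place, directly opposing the SRB's movement
counter = offsets
print(counter)  # [0.0, -5.0, -5.0, 5.0, 0.0]

# in-game, the particle's movement is added back on, restoring the authored motion
recombined = [c + s for c, s in zip(counter, srb)]
assert recombined == move_all
```

The real tool does this with full 4 x 4 matrices rather than a single axis, but the shape of the computation is the same: record offsets in one pass, apply them in a second.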
It’s time to turn this into something that we can run.
Turning this into Python
First up I’m going to write some convenience functions for getting matrices and applying them back to objects in the scene. Luckily, I had most of these prepared already from other tools and I just needed to tidy them up for use here.
We’re using PyMel’s TransformationMatrix class to store the matrix, as Python doesn’t have one out of the box. Normally you can multiply TransformationMatrix objects with a simple a * b command, but I’ve found this to be a little slow in the past, and I’d already written matmult(), which makes use of Python’s list-comprehension optimisations and which I’ve found quite a bit faster overall. It doesn’t make a huge difference in this tool, but in large arrays or long loops every little helps.
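The original listing isn’t reproduced here, but a matmult() built on list comprehensions looks roughly like this (the function name comes from the post; the body is my sketch, operating on plain nested lists rather than TransformationMatrix objects):

```python
def matmult(a, b):
    # 4x4 row-major matrix product built from nested list comprehensions,
    # which keeps the hot loop inside optimised bytecode
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
m = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [5.0, 2.0, 0.0, 1.0]]
assert matmult(m, identity) == m  # multiplying by identity leaves a matrix unchanged
```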
The RotationOrderConvert dictionary at the top just helps to translate between the different ways that rotation order can be returned to the user.
setTransformRelative will move an object to the supplied matrix. It transforms the rotation information from the matrix from vectors, into radians and then into Euler angles, as well as compensating for any discrepancies in rotation order or any offset pivot points.
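I can’t show the original helper, but the vectors-to-Euler conversion it performs can be sketched for the simple case of a rotation purely about Y, assuming Maya’s row-vector layout (the real function also juggles rotation orders and offset pivots):

```python
import math

def y_rotation_matrix(degrees):
    # row-vector convention: the rows are the right, up and forward vectors
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, -s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [s, 0.0, c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def y_rotation_from_matrix(m):
    # the forward row is (sin, 0, cos): atan2 turns it back into radians,
    # and degrees() into the Euler value Maya displays in the channel box
    return math.degrees(math.atan2(m[2][0], m[2][2]))

print(round(y_rotation_from_matrix(y_rotation_matrix(135.0)), 6))  # 135.0
```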
The Main Function
Now that we’ve got those all set up, our main loops should be quite concise.
In Step 1 of the script we’re going to build a list of all the new positions we’ve worked out and in Step 2 we’re going to apply them in the scene.
I didn’t want this change to be destructive, so in the tool I created, a new animation layer is generated at run time and all of the new keys are applied to that layer. This allows you to undo everything the script has done by either muting the layer or just deleting it altogether.
The GIF above shows what the animation looks like after the script has run. It looks very strange indeed and, frankly, quite broken. But do not be alarmed, dear reader: everything is as it should be. The animation should look odd at this point, because this is exactly how the animation presents itself before the translation of the SRB node is applied to it. But we’d better check to make sure…
Checking the Result Before we Export to Game
I needed a way to make sure that what I’d made wasn’t just an automated animation-wrecker. So I decided to simulate exactly what’s going to happen in game with a simple bit of Maya fiddling. I created an instance of our guests’ geometry and parented it to the SRB control, forcing it to inherit its movement and reproduce what’s going to happen in-game.
The result is shown below, our teen chap casually taking a pew without any sliding or glitching. This is very encouraging. The blue spectre behind him is the version of the animation that we’re going to export.
Final Result In-Game
Success! The final asset worked as expected in the game and the system proved effective enough to use in a few other scenarios such as visiting a kiosk, getting up after a coaster crash and turning 180 degrees on the spot.
This was a fun problem to solve and a great example of how even the most trivial of moments in video game development can present some interesting challenges.
Many thanks to Owen McCarthy, James Chilcott and Matt Simper for their tuition, feedback and patience.