Making Procedural Animation Friendlier for Humans

Conrad Barski
Feb 17, 2018 · 5 min read


An experimental tool for animating objects with projectile physics, which you can play with here

I’ve been thinking a lot about animation, and I’ve recently been wondering how animation tools could be engineered to make the process easier. In my free time I’ve been working on an animation framework (in ClojureScript, using qlkit) inspired by the principles of functional programming; if you watch our intro video at forwardblockchain.com, you can see some early animations I have created with these tools.

I’ll be writing more in the future about how we might approach animation differently, with the goal of making animation easier and making it more natural to introduce precision into the process. I believe there would be a lot of benefit in new tools that simplify the creation of animated scientific illustrations.

In this short post I want to share one extremely simple experiment that you can play around with; it is one of many experiments I’ve built to help me think about the process of animation.

Thinking In Snapshots

The first step most animators will take when creating a new animation is to generate storyboards, like this:

A storyboard for The Radio Adventures of Dr. Floyd episode #408 — via Wikipedia

Thinking in snapshots, as embodied in such storyboards, is a very natural way for humans to approach animation. Because of this, most modern animation apps offer tools to translate storyboards into animation through keyframing, a technique that Wikipedia describes as follows:

In software packages that support animation there are many parameters that can be changed for any one object. One example of such an object is a light. Lights have many parameters including light intensity, beam size, light color, and the texture cast by the light. Supposing that an animator wants the beam size of the light to change smoothly from one value to another within a predefined period of time, that could be achieved by using key frames. At the start of the animation, a beam size value is set. Another value is set for the end of the animation. Thus, the software program automatically interpolates the two values, creating a smooth transition.
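
To make the mechanism in the quote concrete, here is a minimal ClojureScript sketch of keyframe interpolation. The function names and keyframe shape are hypothetical, chosen for illustration rather than taken from any particular animation package:

```clojure
(defn lerp
  "Linear interpolation between a and b, with t in [0, 1]."
  [a b t]
  (+ a (* t (- b a))))

(defn interpolate
  "Value of a keyframed parameter at time t, given a start keyframe
  [t0 v0] and an end keyframe [t1 v1]."
  [[t0 v0] [t1 v1] t]
  (let [progress (/ (- t t0) (- t1 t0))]
    (lerp v0 v1 (max 0 (min 1 progress)))))

;; A light's beam size animating from 2.0 at t=0s to 10.0 at t=5s:
(interpolate [0 2.0] [5 10.0] 2.5) ;=> 6.0
```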

On the one hand, keyframes are great in that they complement the human conception of “snapshots” in time. On the other hand, keyframes (as they are usually implemented in modern animation tools) lack two properties that, in my view, still make them a poor substitute for human notions of motion and time.

Problem #1: Keyframes Break the Laws of Physics

Suppose a human creates a storyboard of a basketball game with images of a ball passed between players:

It is natural for a human creating a story to specify that the ball has to be passed between two players, but the animator (and their animation tools) will have to figure out the exact logistics whereby the ball moves from one player to the other. Typically, the animator does this by specifying an animation path for the ball, along with keyframes and easing. If the animator wants the motion to appear natural, they will need to put a lot of effort into how the ball behaves at the end of the throwing arc, so that it appears to have a proper “weight” to it. In essence, the animator needs to painstakingly play the role of a “physics engine” and use complex keyframes to make the motion appear “real”.
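
As a rough illustration (again with hypothetical names, not the API of any real tool), this is the kind of hand-tuning the animator ends up doing: an easing curve remaps the progress between two keyframes so that the ball appears to accelerate into the end of its arc, the way gravity would make it:

```clojure
(defn ease-in-quad
  "Quadratic ease-in: progress starts slowly and accelerates,
  roughly mimicking gravity."
  [t]
  (* t t))

(defn interpolate-eased
  "Value of a keyframed parameter at time t, with the progress between
  the two keyframes remapped through an easing function."
  [easing [t0 v0] [t1 v1] t]
  (let [progress (/ (- t t0) (- t1 t0))
        eased    (easing (max 0 (min 1 progress)))]
    (+ v0 (* eased (- v1 v0)))))

;; The ball's height falling from 3.0 at t=1s to 0.0 at t=2s,
;; accelerating toward the floor rather than moving at constant speed:
(interpolate-eased ease-in-quad [1 3.0] [2 0.0] 1.5) ;=> 2.25
```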

The alternative would be for the animator to eschew keyframes and rely directly on a physics engine to simulate the motion of the ball. Unfortunately, it is extremely difficult to get an object in a physics simulation to move in a predictable way, and that makes raw simulation hard to meld with the typical human goals of storytelling, which require objects to move predictably from A to B.
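
To see why, here is a hypothetical sketch of the physics-engine approach: a simple Euler integrator for a ball under gravity. Where the ball ends up is purely an emergent consequence of the initial velocity fed in, so making it land in a teammate’s hands comes down to guessing numbers and re-running the simulation:

```clojure
(def gravity [0 -9.8]) ; m/s^2, with y pointing up

(defn step
  "Advance a ball {:pos [x y] :vel [vx vy]} by dt seconds of simple
  Euler-integrated projectile physics."
  [{:keys [pos vel]} dt]
  {:pos (mapv + pos (map #(* dt %) vel))
   :vel (mapv + vel (map #(* dt %) gravity))})

(defn simulate
  "Positions of the ball over n fixed-size time steps."
  [ball dt n]
  (map :pos (take n (iterate #(step % dt) ball))))

;; Throw from [0 2] with a guessed velocity; where the ball actually
;; comes down is not under the animator's direct control:
(last (simulate {:pos [0 2] :vel [6 5]} 0.05 21))
```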

Problem #2: Keyframes Are Not Composable

In computer programming, we call tools “composable” if they can be connected together in many different ways to produce interesting new results, and can also be disconnected at a later time without breaking the individual components (i.e., they behave like “Lego blocks”).

Keyframes, however, do not usually behave this way. If our animator has created keyframes for the motion of the basketball, it will often be very difficult to make invasive changes to the animation after the fact. If the director says “replace the basketball with a bowling ball” or “make the players stand twice as far apart,” the animator would likely need to throw away their keyframes and start over from scratch to produce a new animation that remains convincing.

The lack of “composability” in keyframe animation also means that the technique forces the creation of a lot of “incidental mutable state,” which is frowned upon by programmers who favor the functional programming style.

One Approach to a Solution: Replacing Keyframes with “Future Handles”

In the simple experiment I am sharing with this post, I’ve implemented one possible solution to the problems described above: I’ve replaced keyframes with something I’ll call future handles. Whereas a keyframe always exists at a specific point in time, a future handle can be satisfied at any point in the future. The animation engine guarantees that each handle will be reached; however, it is up to the animation tool (and, indirectly, the animator) to decide the rules by which the object reaches this future position, and how long it will take to get there.

In this way, it is possible to introduce realistic physical motion directly into an animation while at the same time creating predictable destinations for objects. Essentially, my animation engine invents appropriate walls and floors for the ball to bounce off of in order to meet the conditions imposed by these handles, while maintaining correct physical properties of the ball whenever possible. If the engine guesses a wrong location for a wall, the animator can fix this by simply interposing an additional handle to give the engine more guidance. As we’ve discussed, predictable destinations for objects make it possible for animators to use their natural intuitions around “mental snapshots,” and can potentially reduce the cognitive load on the animator as they attempt to tell a story through an animation.
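
The demo’s engine really does work by inventing walls and floors, as described above (see the source linked at the end). As a much simpler sketch of the underlying idea, guaranteeing that a handle is reached while keeping the motion ballistic, the snippet below works backwards from a hypothetical future handle: given where the ball is now, where the handle says it must end up, and a flight time chosen by the tool, it solves for the one launch velocity that gets it there under gravity:

```clojure
(def gravity [0 -9.8]) ; m/s^2, with y pointing up

(defn launch-velocity
  "Initial velocity that carries an object from `from` to the handle
  position `to` in `t` seconds under gravity:
  to = from + v*t + 1/2*g*t^2  =>  v = (to - from - 1/2*g*t^2) / t"
  [from to t]
  (mapv (fn [p q g] (/ (- q p (* 0.5 g t t)) t))
        from to gravity))

(defn position-at
  "Closed-form ballistic position dt seconds after launch."
  [from v dt]
  (mapv (fn [p vi g] (+ p (* vi dt) (* 0.5 g dt dt)))
        from v gravity))

;; A future handle demanding that the ball arrive in the receiver's
;; hands at [8 2]; the tool chooses a 1.2-second flight:
(let [v (launch-velocity [0 2] [8 2] 1.2)]
  (position-at [0 2] v 1.2)) ;=> [8.0 2.0] (up to rounding)
```

Because the handle in this sketch constrains only where the ball must end up, moving the players farther apart simply changes the inputs to the solve; nothing hand-authored has to be thrown away.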

It is important to realize, however, that composability, precision, and “human-friendliness” in animation tools together form a complex problem, and in this post we have looked at only one tiny piece of the much larger set of solutions that would be needed to resolve these issues at a larger scale. I hope to share many more ideas around the animation framework I’m developing in the future. Stay tuned!

Conrad Barski, CEO of ForwardBlockchain
twitter: @lisperati

Source code for this demo: https://github.com/drcode/bounce
