Kekubian Assassin Technical Insight

Resn · Dec 11, 2018

An article written by Jesper Vos and Jack KMkota0 for Resn.


On August 3rd, 2018, HBO aired the first episode of Random Acts of Flyness — a new late-night series from artist Terence Nance. HBO describes the show as "a fluid, stream-of-consciousness examination of contemporary American life."

Episode 2 of the series depicts the show’s main characters playing an arcade game called Kekubian Assassin. To promote the series, HBO commissioned Resn to create a version of Kekubian Assassin that could be played by anyone on a smartphone.

In this article, we will take you through some of the challenges we overcame while making this mini-game.

Creating a World

Each episode of the series features interconnected vignettes that make it a unique mix of vérité documentary, musical performances, surrealist melodrama, and humorous animation. We wanted our game to feature a mix of styles too, so we blended photographic textures on the buildings with low-poly objects for benches, yellow cabs, and other props. We encapsulated it all in a psychedelic, afrofuturist theme.

Street — Bar — Park — Store

Our game is an infinite runner taking place in four environments in Brooklyn: Street, Bar, Park, and Store. All of the objects were sculpted in Cinema 4D and put together in Blender. In order to create an infinite path with turns, we created different blocks for each environment — straight, left turn, and right turn.

We opted for an infinite runner because it enabled us to make an endless world with a minimal set of assets, reducing loading time and increasing render performance.

Environment blocks in Blender
Street Block — Straight

By adding metadata and using a specific scene hierarchy in Blender, we exported our environment blocks as GLTF files and imported them into Three.js. We then replaced materials and created instanced geometries of matching models to reduce draw calls. When the game is running, the blocks are positioned and rotated so they chain together — creating an infinite path. Simultaneously, a smoothed path, using a Catmull-Rom spline, is drawn for the camera to follow.
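As a rough TypeScript sketch of the chaining idea (the Block shape and all names here are our illustration, not the production code):

```ts
import * as THREE from 'three';

// Each environment block reports where the next block should attach.
interface Block {
  object: THREE.Object3D;
  exit: THREE.Vector3;   // local-space exit point of the block
  exitAngle: number;     // yaw change the block introduces (0 or ±90°)
}

function chainBlocks(blocks: Block[]): THREE.CatmullRomCurve3 {
  const points: THREE.Vector3[] = [new THREE.Vector3()];
  const cursor = new THREE.Vector3();
  const yAxis = new THREE.Vector3(0, 1, 0);
  let angle = 0;

  for (const block of blocks) {
    // Rotate and translate each block so it continues from the previous one.
    block.object.rotation.y = angle;
    block.object.position.copy(cursor);

    // Advance the cursor to the block's exit, rotated into world space.
    cursor.add(block.exit.clone().applyAxisAngle(yAxis, angle));
    angle += block.exitAngle;
    points.push(cursor.clone());
  }

  // The smoothed path through the block joints for the camera to follow.
  return new THREE.CatmullRomCurve3(points);
}
```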

The spherical ground, on top of contributing to the visual style, is a great way of giving the illusion of an endless world while reducing the number of objects in the scene. Since objects that are far from the camera get covered by the ground, we never need to render more than two blocks at the same time. To create this setup, we need to project the blocks onto the world sphere.

The ground — which includes turning roads, park paths, and indoor tile floors — is rendered to an offscreen render target using a top-view orthographic camera. It is updated every frame to match the world blocks. The render target texture is used as the map of the sphere — which has its UVs strategically positioned on top.
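A hedged sketch of that setup; the resolution, extents, and names are placeholder assumptions:

```ts
import * as THREE from 'three';

// Render the flat ground geometry from above into a render target,
// then use that texture as the map of the spherical ground.
const groundTarget = new THREE.WebGLRenderTarget(1024, 1024);

const extent = 50; // world units covered by the top-down view (assumption)
const topCamera = new THREE.OrthographicCamera(-extent, extent, extent, -extent, 0.1, 100);
topCamera.position.set(0, 50, 0);
topCamera.rotation.x = -Math.PI / 2; // look straight down

const sphereMaterial = new THREE.MeshBasicMaterial({ map: groundTarget.texture });

function renderGround(renderer: THREE.WebGLRenderer, groundScene: THREE.Scene) {
  // Called every frame so the texture keeps matching the world blocks.
  renderer.setRenderTarget(groundTarget);
  renderer.render(groundScene, topCamera);
  renderer.setRenderTarget(null);
}
```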

The instanced world objects — buildings, trees, lamps, benches, etc. — are projected onto the sphere in real time. First, the position is calculated using a semicircle function. Then, the rotation is adjusted to match the direction of a vector going from the center of the sphere to the repositioned object (by multiplying the original rotation by that direction quaternion).
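In code, the projection could look roughly like this, assuming the sphere's top touches the world origin (all names are illustrative):

```ts
import * as THREE from 'three';

const UP = new THREE.Vector3(0, 1, 0);

// Project an object from the flat world onto the spherical ground.
function projectOntoSphere(object: THREE.Object3D, radius: number) {
  const { x, z } = object.position;

  // Semicircle: the distance along the path becomes an arc over the sphere,
  // so far-away objects sink below the horizon.
  const angle = z / radius; // arc length to angle
  object.position.set(x, radius * Math.cos(angle) - radius, radius * Math.sin(angle));

  // Direction from the sphere centre (0, -radius, 0) to the repositioned object.
  const direction = object.position.clone().setY(object.position.y + radius).normalize();

  // Tilt the object so its up-axis follows that direction,
  // multiplied onto its original rotation.
  const tilt = new THREE.Quaternion().setFromUnitVectors(UP, direction);
  object.quaternion.premultiply(tilt);
}
```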

Finally, to magnify the visual perception of depth, we add a falloff effect to some object materials. It works as a color overlay blended over the object's texture, growing in intensity the further the object is from the camera.
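A minimal ShaderMaterial sketch of such a falloff; the uniform names, color, and distances are assumptions rather than the production shader:

```ts
import * as THREE from 'three';

// Blend an overlay color over the object's texture, stronger with distance.
const falloffMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: null as THREE.Texture | null },
    falloffColor: { value: new THREE.Color('#8a2be2') },
    falloffStart: { value: 20.0 },
    falloffEnd: { value: 60.0 },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    varying float vDepth;
    void main() {
      vUv = uv;
      vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
      vDepth = -mvPosition.z; // distance from the camera along the view axis
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D map;
    uniform vec3 falloffColor;
    uniform float falloffStart;
    uniform float falloffEnd;
    varying vec2 vUv;
    varying float vDepth;
    void main() {
      vec4 texel = texture2D(map, vUv);
      float amount = smoothstep(falloffStart, falloffEnd, vDepth);
      gl_FragColor = vec4(mix(texel.rgb, falloffColor, amount), texel.a);
    }
  `,
});
```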

Adding animated characters

In Kekubian Assassin you play as Najja, a young African-American woman who has to make her way through Brooklyn. But it seems she can't do that without getting bothered by strangers passing by. The player's goal is to stay strong as long as possible by throwing shade at anyone trying to steal her joy.

People of different cultures will appear along the way and might shout unfriendly phrases at Najja. In order to fill our world with animated characters, we first had to sketch and model them.

Tiny textures include pre-baked lighting
We added skeletons to the models with Mixamo
Animations combined as NLA tracks in Blender

Our workflow starts with creating the low-poly models in Cinema 4D and exporting them with tiny textures that include pre-baked lighting. With Mixamo we add skeletons to the models, and from Blender we export one GLTF per character. Using Mixamo we were also able to add as many as 30 animations to one of the characters. Unfortunately, Mixamo only exports one animation at a time, so they need to be combined as NLA tracks in Blender and exported as a single GLTF [this tutorial was a life saver, and thanks to this fork — now merged — we could export all the animations into the same file].

As a result, we have:

  • 9 characters exported as skinned meshes without animations [~350kb each].
  • 1 character exported as a skinned mesh with the 30 animations [8mb].

In order to share the animations between characters, we need to rename all the imported bones to match the names used in the animation tracks, otherwise THREE.AnimationMixer won't find the right nodes to animate. We also remove all the bone position tracks, since we only need the bone rotations — otherwise, the kids would be stretched to the height of the adults!
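A sketch of that clean-up step, relying on Three.js's `<nodeName>.<property>` track-naming convention; `renameBone` is a hypothetical mapping from imported to shared bone names:

```ts
import * as THREE from 'three';

function prepareClip(clip: THREE.AnimationClip, renameBone: (name: string) => string) {
  clip.tracks = clip.tracks
    // Drop bone position tracks; rotations are enough, and shared positions
    // would stretch small characters to the proportions of the animated rig.
    .filter((track) => !track.name.endsWith('.position'))
    // Rename the node part of each track so the mixer finds the right bones.
    .map((track) => {
      const [nodeName, property] = track.name.split('.');
      track.name = `${renameBone(nodeName)}.${property}`;
      return track;
    });
  return clip;
}
```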

Character animation data

At this point, we can trigger and mix animations in Three.js. To make the characters come to life, we need to trigger those animations at specific moments. Collaboration with game designers in this area was really important, so we created a simple declarative syntax to define what we called behaviours and patterns.

We define a behaviour as a set of actions to be triggered by specific events — a given distance from the camera, or a reaction to an interaction. For example (see image for syntax):

  • when the character is really far from camera → it is hidden
  • when the character is at distance 0.2 from the camera → it falls to the path, then loops through an idle animation
  • when the character is at distance 0.1 from the camera → it attacks with an insult animation
  • when the player taps on the character → it does a hit animation and then looks down

In order to mix and match characters with behaviours and place them on the path, we defined patterns. A pattern defines the characters in a one-block-long section of the path and is tagged with a difficulty (we have up to 5 difficulties). This is an example using the declarative syntax we specified for it:
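(The original example was an image; what follows is a hypothetical approximation of the same idea in plain data, not the actual syntax.)

```ts
// A behaviour maps events to actions; names and values are illustrative.
const behaviours = {
  catcaller: [
    { at: 'far',          do: ['hide'] },
    { at: 'distance:0.2', do: ['fallToPath', 'loop:idle'] },
    { at: 'distance:0.1', do: ['play:insult'] },
    { at: 'tap',          do: ['play:hit', 'play:lookDown'] },
  ],
};

// A pattern places characters with behaviours along one block of path,
// tagged with a difficulty from 1 to 5.
const pattern = {
  difficulty: 3,
  characters: [
    { model: 'whiteGirl', behaviour: 'catcaller', offset: 0.25, side: 'left' },
    { model: 'oldMan',    behaviour: 'friendly',  offset: 0.6,  side: 'right' },
  ],
};
```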

Camera movement

The game is played in a first-person view. At first, we just had the camera move through the world in a straightforward fashion — following the smoothed path we generated. The position and perspective felt right, but it didn't give a good impression that our hero was walking. We needed to add some subtle movement to the camera. Some x- and y-axis rotation mapped to a sine wave, and the absolute value of a sine moving the y position of the camera, did the trick.

Because the game goes faster and faster over time, we multiplied the step interval of the walk movement by the game speed. But we found that at very high speeds the interval became so small the camera hardly moved anymore, so we damped the step interval with an exponential curve instead. This means the walk movement increases quickly at first and turns into a running movement later in the game.
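A sketch of that walk movement; the constants are tuned by eye and purely illustrative, and a sub-linear power curve stands in here for the exponential damping described above:

```ts
import * as THREE from 'three';

let phase = 0;

// Apply the bob to a parent rig so it stacks on top of the path-following
// transform of the camera itself.
function updateCameraWalk(rig: THREE.Group, dt: number, speed: number) {
  // Advance the step phase sub-linearly in game speed so the bob
  // stays perceptible when the game gets fast.
  phase += dt * 4.0 * Math.sqrt(speed);

  rig.rotation.x = Math.sin(phase) * 0.01;            // subtle nod
  rig.rotation.y = Math.sin(phase * 0.5) * 0.01;      // subtle sway
  rig.position.y = Math.abs(Math.sin(phase)) * 0.05;  // step bounce
}
```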

Making them speak

Now that we could walk through an infinite world with all kinds of characters, we needed to emphasize who was being annoying and who should be left alone. We visualised the concept of micro-aggression with speech bubbles. A simple JSON file stored 22 insults connected to specific characters and animations. For example, "Can you twerk?" will only be said by the white girl while she is twerking, but the same character might say "Come thru kween!" while she is performing another gesture.

The show and hide methods of the speech bubbles were then connected to the state of the characters. On friendly characters the speech bubble would never show, and on other characters it would only show once their 'insult' animation was triggered. Tapping a character without a speech bubble above its head, or failing to tap an aggressive character, costs the player a life.
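A hypothetical shape for that wiring; the data and names are ours, not the actual implementation:

```ts
// Each line in the JSON ties an insult to a character and the gesture
// animation it may accompany.
const insults = [
  { character: 'whiteGirl', animation: 'twerk',   text: 'Can you twerk?' },
  { character: 'whiteGirl', animation: 'gesture', text: 'Come thru kween!' },
];

interface Bubble { show(text: string): void; hide(): void; }

// Show the bubble only while an unfriendly character plays an insult gesture.
function onInsultAnimation(characterId: string, animation: string, friendly: boolean, bubble: Bubble) {
  if (friendly) return; // friendly characters never get a bubble
  const line = insults.find((i) => i.character === characterId && i.animation === animation);
  if (line) bubble.show(line.text);
}
```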

Scowls

Kekubian Assassin is not about violence; it is about micro-aggression. In the series, we see Najja shooting scowls with a bright green toy gun. We wanted to stay true to this concept and made a sprite animation of the same visual. To show the scowls, we positioned a quad in front of every character's mesh and played the animation when the character is tapped by the player. To emphasize that the character has been struck, we also trigger the 'hurt' and 'sad' animations at that moment.
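A minimal sketch of such a sprite-sheet quad, with illustrative frame counts; the animation steps the texture offset through the sheet:

```ts
import * as THREE from 'three';

const COLS = 4, ROWS = 4, FPS = 24;

// A quad positioned in front of the character's mesh.
function createScowlQuad(texture: THREE.Texture) {
  texture.repeat.set(1 / COLS, 1 / ROWS); // show one frame at a time
  const material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
  return new THREE.Mesh(new THREE.PlaneGeometry(0.5, 0.5), material);
}

// Step through the sheet; `elapsed` is seconds since the tap.
function updateScowl(texture: THREE.Texture, elapsed: number) {
  const frame = Math.floor(elapsed * FPS) % (COLS * ROWS);
  texture.offset.set(
    (frame % COLS) / COLS,
    1 - 1 / ROWS - Math.floor(frame / COLS) / ROWS, // top row first
  );
}
```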

Boss level

The boss level in Kekubian Assassin features 3 different characters asking tricky questions that tie in with the series' view on contemporary American life. The goal is to send the boss to the 4th dimension by answering 3 questions correctly. The first right answer opens a black hole behind the boss character that starts to pull on them, and each following question increases its power. Answering the third question correctly gives the black hole enough power to completely devour the boss character and the whole galaxy around them.

Flowmap UV distortion

We started working on a circular displacement for the background first. After researching some different approaches we settled on using a flowmap. This is simply a texture containing distorted UV coordinates which we can use in a fragment shader. To animate between the different states of the black hole we just had to interpolate between the original UVs and the distorted UVs from the flowmap.

To create the illusion of an endless spinning universe, we duplicated the background texture with a 180° offset and constantly crossfaded between the two while interpolating between the original and flowmap UVs each sequence.
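A fragment-shader sketch combining both ideas, the flowmap interpolation and the 180° crossfade; uniform names are assumptions, and it pairs with a standard passthrough vertex shader providing `vUv`:

```ts
const blackHoleFragment = /* glsl */ `
  uniform sampler2D map;       // the background texture
  uniform sampler2D flowmap;   // distorted UVs stored in red/green channels
  uniform float progress;      // 0 = undistorted, 1 = fully pulled in
  uniform float crossfade;     // 0..1 blend between the two rotated copies
  varying vec2 vUv;

  void main() {
    vec2 flowUv = texture2D(flowmap, vUv).rg;
    vec2 uv = mix(vUv, flowUv, progress);

    vec4 a = texture2D(map, uv);
    vec4 b = texture2D(map, vec2(1.0) - uv); // same texture, rotated 180°
    gl_FragColor = mix(a, b, crossfade);
  }
`;
```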

To make the boss level more dynamic, we made the character an interactive model. To be able to rotate the head independently of the body, we needed these parts to be separate meshes, but because the characters we made before all consist of a single mesh, we had to make separate exports for the boss characters. We also optimised these models by cutting geometry from the lower legs, since these would never be visible anyway… and we gave him a hat.

Now that we have an interactive 3D model in our game, we can't just apply the same UV distortion as we did to the background — we need a single texture for that. Luckily, we can convert any scene to a texture by using a framebuffer (a WebGLRenderTarget in Three.js). We applied the same UV distortion to this texture as we did to the background, only instead of endlessly looping a rotation we just added some simple sine-based displacement.
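Roughly, and with uniform names carried over from the sketch above:

```ts
import * as THREE from 'three';

// Render the interactive boss model into a framebuffer, then feed the
// resulting texture through the same distortion shader as the background.
const bossTarget = new THREE.WebGLRenderTarget(1024, 1024);

function renderBoss(
  renderer: THREE.WebGLRenderer,
  bossScene: THREE.Scene,
  camera: THREE.Camera,
  time: number,
  distortion: THREE.ShaderMaterial,
) {
  renderer.setRenderTarget(bossTarget);
  renderer.render(bossScene, camera);
  renderer.setRenderTarget(null);

  distortion.uniforms.map.value = bossTarget.texture;
  // Instead of an endlessly looping rotation, a simple sine drives it.
  distortion.uniforms.progress.value = 0.5 + 0.5 * Math.sin(time);
}
```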

UI and transitions

UI layer system

On top of all this, an interface was designed in an afrofuturist theme. We wanted to transition between the world and the UI by masking the views with a spinning pyramid. For this reason, we had to build all UI views in WebGL. Because of our highly ornamental design, we used textures for every UI element, which we combined into a 2048×2048 px atlas so WebGL could process them.

For animated UI elements, we generated sprite sheets. To save data, we chopped some symmetrical ornaments in half, then duplicated and mirrored them in GL at runtime, forming the full shape again.

Now that we had everything available in WebGL, we rendered the UI view we wanted to transition to into a render target. In a shader, we merged this texture with the texture of a spinning pyramid — rendered into another render target. We could now mix the colors and alpha of the UI texture with the pyramid's colors, creating a mask.

Using 3D geometry as a mask
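A sketch of that compositing step, with illustrative uniform names:

```ts
const transitionFragment = /* glsl */ `
  uniform sampler2D uiTexture;      // the UI view we transition to
  uniform sampler2D pyramidTexture; // the spinning pyramid render target
  varying vec2 vUv;

  void main() {
    vec4 ui = texture2D(uiTexture, vUv);
    vec4 pyramid = texture2D(pyramidTexture, vUv);
    // The UI is only revealed where the pyramid covers the screen.
    gl_FragColor = vec4(ui.rgb, ui.a * pyramid.a);
  }
`;
```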

Most views required the pyramid to translate vertically. We quickly noticed that the perspective camera didn't allow us to move the geometry without losing the head-on view of it. To get around this issue, we animated the UVs of the render target texture instead.

Audio

The last thing needed to complete the experience was sound. We laid out the audio in three layers: background music, environment audio, and character effects. For the latter, we used a system similar to the one for the character animations — one JSON file connecting audio files to characters and their gestures. To keep the audio in sync with the game, we also connected this to the global game speed.

To make it a little more dynamic we also connected the volume of the sound effects for every character to their distance from the camera. Characters close to the camera now make a louder noise than characters far away, just like in real life.
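For example, a simple linear falloff (the range is illustrative):

```ts
// Map a character's distance from the camera to a sound-effect volume.
function volumeForDistance(distance: number, maxDistance = 10): number {
  return Math.max(0, Math.min(1, 1 - distance / maxDistance));
}

// e.g. with a THREE.Audio instance:
// audio.setVolume(volumeForDistance(character.position.distanceTo(camera.position)));
```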

Conclusion

Kekubian Assassin was our first 3D game with animated characters, and making it came with a big learning curve for us. The team embraced these challenges and stayed motivated to get everything working in time. With everything we have learned from this project, we are ready and excited to make more games and 3D experiences.
