A Deep Look into Kouji’s Eyes

Spooklight Studio
Mar 16, 2018


This article is the first in a series of technical articles aimed at our fellow developers.

First of all, if you don’t know Kouji yet, don’t hesitate to have a look at Kouji’s website now.

They may not seem like it, but Koujis are driven by many different mechanics, due to their demanding specifications. Since they have to look like you (or anybody else, of course), they have to be highly customizable and support a nearly infinite number of combinations. They also have to move like cartoon characters while always staying cute.

And all of that must run smoothly alongside video recording, sound processing and augmented reality… on mobile devices.

So every aspect of the Kouji became a technical challenge: how can we carve a mouth with complex, changing shapes without using heavy topology? How can we offer our Koujis a very large combination of shapes, clothes and colors without ending up with a 10 GB app?

Alongside those main questions came many other innocent-looking problems we had to solve along the way.

It is one of these apparently innocent problems that we will study this time:

How to move Kouji’s pupils?

Let’s have a look back…

Due to many artistic and technical concerns that I won’t address here, we decided that our Koujis would have faces with a 2D-like aspect, as if their eyes and mouths were painted on them.

fig1. eyes preproduction

Actually, they’re not really painted on their faces but on specific geometry hovering over the head. The facial expressions are managed with a combination of blendshapes and shaders. And we added the pupils on top of it all.

As we wanted a constant pupil shape over a changing geometry, we chose Unity’s Projector component. Basically, it works just like a cinema projector, using the eye geometry as a screen to project the pupil texture onto.
Moreover, it was a simple way to reproduce exactly the same result as in our native 3D software (Blender), which uses the exact same trick. All we had to do in the end was stick the projectors to bones. That way, the pupil animation was very easy to export and recover.

fig2. Projectors’ bones.

And it all worked perfectly. Almost…

An appointment with the oculist

This solution shipped in the first versions of Kouji. But as soon as we included ARKit tracking, we noticed clipping-range issues that would, in rare cases, render the projectors ineffective and make the pupils disappear.

Having plenty of other priorities to address, we decided to postpone finding a solution to that problem.

But an unexpected update (iOS 11) changed our plans. Our first tests demonstrated that the way our shaders used the projectors would not work anymore! So we finally had to find another solution… right away!

Our own projection solution

We had to come up with a projection solution of our own.

Luckily, our eye geometry was very flat and closely aligned with the object’s XZ plane. In other words, the XZ position of each vertex could also be used as its own UV coordinates.

fig3. Vertex coordinates as UV coordinates

And, of course, by offsetting and scaling those values, we could obtain UVs better suited to one eye only. For instance, with these settings, the following UV coordinates are obtained:

u = x * 2.0
v = z * 2.0 - 1.0

fig4. Remapped Coordinates: That’s much better!
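
As an illustration, here is a minimal sketch of that remapping as a Unity (Cg/HLSL) vertex function. Only the UV formula comes from the settings above; the struct names and pass setup are assumptions:

// Inside a CGPROGRAM block, with #include "UnityCG.cginc"
struct appdata { float4 vertex : POSITION; };
struct v2f     { float4 pos : SV_POSITION; float2 eyeUV : TEXCOORD0; };

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Derive the UVs from the object-space position:
    // u = x * 2.0 and v = z * 2.0 - 1.0 (the example settings above).
    o.eyeUV = float2(v.vertex.x * 2.0, v.vertex.z * 2.0 - 1.0);
    return o;
}

Note that blendshapes are applied during skinning, before the vertex function runs, so v.vertex already contains the deformed position.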

By using the vertex coordinates as UVs, the texture’s appearance is no longer affected by the eye’s blendshapes, since the UVs move together with the vertices.
For instance, if the eyes close, the UV coordinates of the vertices altered by the blendshape move accordingly.

It’s as if the texture were pinned in the air: the vertex position indicates the right pixel of the texture to use. Exactly as a projector would do!

But this advantage is also a disadvantage: the skinning affects the vertex positions too! As a result, the pupils stayed pinned in place while the Kouji’s head moved with the body. It’s as if your projector had to stay put while the Kouji moves around. The result is quite odd:

fig5. The exorcist

So, how can we achieve the best of both worlds?

Baking UVs as a texture

At this point, let’s summarize a bit:

  • What we need to keep is the alteration of the UVs by the blendshapes.
  • What we need to get rid of is the alteration of the UVs when the vertices are moved around by the character’s movements.

When we thought about it, the only information we really needed was the transformation delta between the base position and the final blendshape position of each vertex. With that information, we can correct the UVs as the blendshapes activate.

To obtain that information, we baked the vertex positions into textures.

Using the remapped coordinates obtained earlier, we wrote a shader that linearly transposes the X position into the red (R) channel and the Z position into the green (G) channel:

fig6. UVs stored as colors.

We then baked this data for each blendshape onto a constant circular UV layout and stored the results in a 4x4 spritesheet:

fig7. Baked UVs spritesheet (The U axis is flipped)
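
Neither the bake setup nor the bake shader is reproduced here, but a minimal Cg/HLSL sketch of one bake pass (producing one tile of that sheet) might look like the following. The struct and function names are illustrative, and the remap constants are the example settings from earlier:

struct bakeIn  { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
struct bakeV2f { float4 pos : SV_POSITION; float2 posUV : TEXCOORD0; };

bakeV2f vertBake (bakeIn v)
{
    bakeV2f o;
    // Rasterize the mesh flattened onto its constant circular UV layout
    // (UV range [0,1] remapped to clip space [-1,1]).
    o.pos = float4(v.uv * 2.0 - 1.0, 0.0, 1.0);
    // Carry the remapped object-space XZ position as the payload.
    o.posUV = float2(v.vertex.x * 2.0, v.vertex.z * 2.0 - 1.0);
    return o;
}

float4 fragBake (bakeV2f i) : SV_Target
{
    // Red stores the remapped X, green stores the remapped Z (see fig6).
    return float4(i.posUV, 0.0, 1.0);
}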

Correcting the UVs

Now that we had every vertex position for each blendshape stored in a texture, the next step took place in Unity.

The idea was to write a shader that corrects the UVs in real time, using that famous transformation delta we mentioned earlier.
To obtain this delta, you only have to subtract the original “neutral” position data from the position stored for the active blendshape.

delta = BSPosition - originPosition

Obviously, you’ll have to weight the delta by the current strength of the blendshape, as it is not always fully active.

weightDelta = delta * BSWeight

To get the new position, you then add this weighted delta to the original position:

originPosition += weightDelta

And finally, you have to do this for each blendshape, accumulating the weighted deltas.
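
A minimal sketch of that accumulation loop could look like this. The property names (_BakedPositions, _BSWeights), the blendshape count and the assumption that tile 0 of the 4x4 sheet holds the neutral pose are all illustrative:

sampler2D _BakedPositions; // the 4x4 spritesheet of baked positions (fig7)
float _BSWeights[15];      // current blendshape weights, fed from C#

// Fetch the baked position stored in tile 'index' of the 4x4 sheet.
float2 sampleBaked (float2 uv, int index)
{
    float2 tile = float2(index % 4, index / 4);
    return tex2D(_BakedPositions, (uv + tile) * 0.25).rg;
}

// Rebuild the "projected" UVs: neutral pose plus the weighted delta
// of every blendshape.
float2 correctedUV (float2 uv)
{
    float2 neutral = sampleBaked(uv, 0);
    float2 pos = neutral;
    for (int i = 0; i < 15; i++)
    {
        // delta = BSPosition - originPosition, weighted by the BS strength
        pos += (sampleBaked(uv, i + 1) - neutral) * _BSWeights[i];
    }
    return pos;
}

The corrected UVs are then used to sample the pupil texture instead of the raw, skinning-affected coordinates.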

Finally, we just offset the UVs to move the pupils. A simple lerp based on each projector bone’s position is enough.
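
Again as a naming assumption, with the bone’s position lerped into a UV-space offset on the C# side and passed as a material property, this last step could be as simple as:

float2 _PupilOffset; // lerped from the projector bone's position each frame

float2 pupilUV (float2 uv)
{
    // Shift the corrected, blendshape-aware UVs to move the pupil.
    return correctedUV(uv) + _PupilOffset;
}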

And that’s it!

Post-Scriptum-Mortem

Despite working well, this solution has two main problems:

  • The UV precision is bound to the spritesheet’s color precision. Despite baking to an uncompressed 16-bit PNG (which is already quite heavy), the result is not as fine as it could be. But since the problem is only visible from up close (which is not the case 99% of the time), it’s not a big deal.
  • Every new blendshape has to come with a new bake, an updated spritesheet and an updated shader.

If this Kouji feature gets further development, it would be interesting to upgrade the baking step to store the information more precisely, or maybe (who knows) to store the vertex deltas directly in the shader.

See you soon!

François Carrobourg
Lead TD Artist @ Spooklight Studio

Many thanks to Julien Balestrieri, who worked with me on the technical part of the Koujis. This particular feature was an intense collaboration between us.


Spooklight Studio

Spooklight Studio is a Geneva-based company focusing on innovative augmented reality applications. #AR #AugmentedReality #IndieDev @Storyfab_App creator