Assemble With Care was an 18-month project recently released on Apple Arcade. It's a meditative, tactile game in which you restore objects that hold sentimental value for their owners, and explore those owners' relationships with one another.
Now that the game is out, we wanted to share some of the technical processes that went into bringing it together, along with things we've learnt that might be useful to others. We used a combination of simple techniques to achieve what we hope is a distinctive artistic style. Hopefully this article offers some guidance, a heads-up on pitfalls, and inspiration for anyone trying to achieve something similar.
Specifically I’ll be going through:
- The Art of Assemble With Care — What the artistic aims and goals were, and where some of our inspirations came from
- Bringing the Inanimate to Life — How the combination of some basic shader techniques and traditional artistic approaches can provide a subtle “hand painted” feel
- Player Focus — How we artistically de-emphasise aspects of a scene to guide a player's attention throughout the experience
- Colour in Shadow — How we use shadowing and texturing to control colour carefully throughout a scene
- Shaders — Sprinkled throughout are small snippets of code as reference
A Bit of Context
After the release of Monument Valley 2 in June of 2017 we spent several months prototyping a whole range of ideas. We landed on the core of what would become Assemble in early 2018, and grew a small team around this to eventually release on the 19th September 2019.
Assemble was a huge team effort by many amazing individuals, and the work presented here is far from mine alone. In particular though, a shoutout to Max van der Merwe. As THE (in capitals) technical artist on Assemble, Max conceived and did the groundwork on many of the ideas and techniques presented here. Max has since left to prototype exciting stuff @thelinestudio.
The Art of Assemble With Care
In our previous projects we’ve often leant towards clean, simple shapes and minimal palettes. With Assemble, we wanted to retain the artistic values those games represented, but also find a new and distinct voice that allowed Assemble With Care to stand out.
We wanted Assemble to feel like it was alive, whilst giving the impression that a player was looking at a piece of art. To do this we drew inspiration from animation, and in particular animated films that are painted, such as ‘The Old Man and the Sea’ Dir. Alexander Petrov and ‘Loving Vincent’ Dir. Dorota Kobiela & Hugh Welchman.
This could have just meant applying painterly textures to our models and calling it a day, but this would have resulted in a very static looking game, so we experimented with something that felt more organic. We wanted to give the impression of a living painting that the player is surprised they can interact with, and bring a sense of texture, movement and presence to the world of Assemble.
Bringing the Inanimate to Life
The gameplay sections are naturally inanimate, focused on objects, parts and pieces rather than organics and life (except the occasional plant shadow). We adopt a fixed camera perspective with no motion throughout the experience. It is, inherently, a completely static game. This made bringing it to life an interesting prospect and challenge.
Looking back towards animation for inspiration, there is a common phenomenon in most forms of traditional animation referred to as 'boiling'. The small variations and imperfections that come from creating each frame by hand produce a distinctive movement over multiple frames. It's an effect that some animators try to eliminate and others embrace for its aesthetic charm. By imitating this effect we hoped to give a sense of life and fluidity to our painted objects, reminiscent of how they would have looked had they actually been painted in sequence, adding that sense of imperfection to the rendering.
To achieve this we implemented stepped animated vertex displacement: for every vert, our shader moves its position slightly every 1/5th of a second. Depending on how we author this displacement, we can give a hard model the feeling of a rougher shape that reads a little more as hand-authored.
There are many, many ways in which vertex displacement can be done, but getting the right feel for a given style takes a little experimentation. For the comparison above I've applied three typical types of vertex displacement to our coffee cup (which doubles as our menu button) in real time. These three examples have had their magnitudes ramped up to make it easier to see the qualitative difference between them.
Random Values. Here we pick a random direction for each vertex to be pushed towards, which tends to result in a chaotic, sharp effect. The shape of the coffee cup isn't well preserved and it feels more artificial than naturalistic. Additionally, verts get pushed inside each other, causing visual artefacts.
Sine Wave. Using a sine function produces some nicer results. The sine function is sampled at a couple of different frequencies on each axis to get some variation of motion. Basing it on vertex position means that locality of information is retained (verts close to one another move in the same direction), which in turn means the shape of the cup is retained a bit better. Overall an improvement on completely random directions, but it is easy to see the waves moving through the object, and a player picks up on this almost water-like sensation even when the effect is reduced.
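As a rough sketch (not our production code), a position-based sine displacement along these lines might look like the following, with the frequencies and phases chosen arbitrarily for illustration:

```hlsl
// Illustrative sketch: sine-based vertex displacement driven by world
// position, so that nearby verts move in similar directions. Frequencies
// and time scales here are arbitrary example values.
float3 SineWobble(float3 worldPos, float time, float amplitude)
{
    float3 offset;
    offset.x = sin(worldPos.y * 7.0 + time * 2.0);
    offset.y = sin(worldPos.z * 5.0 + time * 1.7);
    offset.z = sin(worldPos.x * 9.0 + time * 2.3);
    return offset * amplitude;
}
```

Because each axis is driven by a different axis of the position, the offsets vary across the surface rather than moving the whole mesh rigidly, which is where the visible "wave" travelling through the object comes from.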
Simplex Noise. The final option uses simplex noise. Noise functions have been a tool in games for some time and have a ton of useful applications; if you're ever implementing camera shake, for example, there's a great GDC talk by Squirrel Eiserloh on the subject. Simplex noise has the same advantage as sine, in that verts near to each other have similar values so the shape of the cup is better retained, but there's also enough randomness in the pattern that the eye can't perceive it moving through the cup. When we reduce the amplitude, as below, it starts to give the impression that the model has been shaped by hand, with imperfections. We'll later combine this with an animation technique, but this is the first stage.
So how do we implement this? There are two common ways to sample simplex noise. The first is algorithmically — there's a good exploration of this here, and a fairly well-optimised HLSL library you can drop straight into your own shaders.
We actually used this library ourselves initially; however, on mobile we ran into performance issues calling those functions multiple times per vertex, so we switched to sampling a texture instead. We generated ours (which you're welcome to use), but you can also find a few via Google, or create them with a program like Substance Designer.
Here’s the shader function that generates this world space offset for a given vertex using our noise texture.
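The shipped function isn't reproduced here, but a sketch of the idea — sampling a pre-baked, tiling simplex noise texture once per axis — might look like this, where `_NoiseTex`, the per-axis UV offsets and the parameter names are illustrative:

```hlsl
// Sketch: world-space vertex offset from a pre-baked simplex noise texture.
// _NoiseTex is assumed to be a tiling, single-channel simplex noise texture.
sampler2D _NoiseTex;

float3 SimplexWobble(float3 worldPos, float inTime, float noiseScale, float amplitude)
{
    // Scroll the lookup with time; offset each axis's sample so that
    // x, y and z don't move in lockstep.
    float2 uvBase = worldPos.xy * noiseScale + inTime;
    float3 offset;
    offset.x = tex2Dlod(_NoiseTex, float4(uvBase,        0, 0)).r;
    offset.y = tex2Dlod(_NoiseTex, float4(uvBase + 0.31, 0, 0)).r;
    offset.z = tex2Dlod(_NoiseTex, float4(uvBase + 0.73, 0, 0)).r;
    // Remap the 0..1 texture values to -1..1 before scaling.
    return (offset * 2.0 - 1.0) * amplitude;
}
```

The offset is then added to the vertex's world-space position before transforming to clip space.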
Note the use of tex2Dlod — as this function is called in the vertex shader, the GPU doesn't have the fragment information it would normally use to pick a mip level, so we explicitly sample the highest-detail mip (LOD 0) with no offset.
Getting closer to that “Hand Drawn” feeling of animation
With the vertex displacement approach figured out, we want to use it to give some life to the objects in a way that feels painterly. We don't want constant motion, as that makes everything feel fluid, so we run the vertex displacement at a locked frame rate (for Assemble we run our model displacements and texture UV displacements all at 5fps). Doing this in the shader is very simple. In the previous SimplexWobble snippet we pass through "inTime", and to calculate this at a locked frame rate we use:
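A minimal sketch of that calculation (the variable names are illustrative):

```hlsl
// Snap time to 0.2-second increments to lock the wobble to 5fps:
// steppedTime only changes value five times per second.
float steppedTime = floor(inTime * 5.0) / 5.0;
```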
Here time gets snapped to 0.2-second increments, which gives us our 5fps feel. This stepped vertex wobble technique is also applied to shadow-casting objects in the background to get a nice sense of the environment and space you're assembling these parts in — note that the displacement for these shadow objects is only done in the shadow caster pass, so there is no fragment rendering of the meshes themselves (hence their invisibility in the scene view below).
In the shader snippet I used Time.y to ensure we have that feeling of boiling over time. However, Time.y is the number of seconds since the level loaded, so it grows continuously the longer the game runs, to the point that it starts causing floating point precision issues on GPUs and can produce severe glitching.
Here I left an older version of Assemble running overnight on an iPad; coming back the next day, you can see severe pixelation issues.
If you are using Time.y in your shaders, you'll want to consider eventually replacing it with something stable. For Assemble we switched to having a script calculate the stepped time, looping every second via Time.deltaTime, and passing that into our shaders with Shader.SetGlobalFloat.
Applying a Brush Stroke
Another approach we took to emphasise the painterly feel was applying a brush stroke texture to the models at the same stepped frame rate as the vertex wobble. This was reasonably simple to implement and gave some pleasing results.
With this metallic brass pan, you can see the brushed texturing coming through on the surface. It gives the impression of the pan being formed of painted strokes in motion, which also helps it stylistically match the shifting background below it.
We achieve this by lerping between our base diffuse texture and a shifting brush stroke texture to emphasise the animated boiling.
We use different textures crafted in Photoshop to get slightly different looks and feels for each object. This, combined with modifying the texture UV scale on the materials, gives a range of varied results that lends personality and differentiation to objects within each scene.
To achieve the same sense of movement and boiling as the vertex wobble, in the fragment shader we shift the UVs by the stepped time value we calculated before. This means the UV position is slightly different every 1/5th of a second, and the texturing shifts in sync with the vertex displacement so that they boil together.
In the shader this is really simple, we add steppedTime calculated the same way as previously shown to the UVs passed into the brush texture sampler. We then lerp between this and the base diffuse colour to obtain our final result.
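As a sketch of that fragment-shader blend (the `_BrushTex`, `_BrushScale` and `_BrushStrength` names are illustrative, not our actual property names):

```hlsl
// Sketch of the brush-stroke blend in the fragment shader.
// steppedTime is the 5fps-locked time value from earlier.
fixed4 diffuse = tex2D(_MainTex, i.uv);
// Shift the brush UVs by stepped time so the strokes jump every 0.2s,
// in sync with the vertex wobble.
float2 brushUV = i.uv * _BrushScale + steppedTime;
fixed4 brush   = tex2D(_BrushTex, brushUV);
// Blend the base diffuse towards the stroke texture.
fixed4 col = lerp(diffuse, brush, _BrushStrength);
```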
Player Focus
It was important that the player could make a visual and mechanical distinction between the part or object “In Focus” (i.e. being manipulated) and the pieces that remained on the ground.
We played around with a couple of ways to achieve this, for example using depth of field to blur things as they got closer to the floor. But as well as introducing a post-processing performance cost, that reduced the visual clarity of the parts you are not currently working with, and we wanted to retain this detail to reduce friction for the player. So we took another approach: a combination of desaturating, adjusting black levels and tone mapping the parts on the ground.
The approach is subtle but effective. Here's a comparison showing these three techniques in combination.
This knocks back the visual priority of the parts left on the floor in the top half and gives a better sense of distance, space and separation, all while blending towards a tone that fits the mood, light and ground texture present in the scene. It has the side effect of making the record player case at the bottom of the screen the most saturated object, signalling that it is the part you are currently manipulating and reinforcing that you have lifted it off the ground. Like our other effects, I think this works on a slightly unconscious level for the player.
So, to break out the various components of this technique:
Desaturation — Anything that’s on the floor we push back towards grayscale.
Minimum Black Level — Rather than desaturating towards actual black, we desaturate towards grays (restricting how close to #000000 colours can get). This allows us to visually match the parts on the floor with the background tone, knocking back their visual priority.
Tone — We blend this all towards a colour, ensuring that the objects fit more naturally not only with one another but also with the ground texture, and in doing so reinforce the feeling of distance or space from the part you have picked up.
As for the shader code, we run this in the fragment shader after the previous steps have given us a brushed, textured diffuse colour:
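A sketch of the three steps combined might look like this, where `_FocusAmount` (0 for the held part, 1 for parts on the ground), `_ToneColour` and the other property names are illustrative:

```hlsl
// Sketch of the "on the floor" treatment, applied to the brushed diffuse.
fixed4 ApplyGroundFocus(fixed4 col)
{
    // 1. Desaturate: push the colour towards its own luminance.
    float lum = dot(col.rgb, float3(0.299, 0.587, 0.114));
    float3 result = lerp(col.rgb, float3(lum, lum, lum), _FocusAmount);

    // 2. Minimum black level: linearly lift [0,1] to [_MinBlackLevel,1]
    //    so nothing can reach pure black.
    result = result * (1.0 - _MinBlackLevel) + _MinBlackLevel;

    // 3. Tone: blend towards the scene's authored tone colour.
    result = lerp(result, _ToneColour.rgb, _FocusAmount * _ToneStrength);
    return fixed4(result, col.a);
}
```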
Our “RemapLuminanceLevels” function remaps colour values from one range (white to black) to another (white to gray). We remap a floating point number like this with the following HLSL:
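The shipped function isn't shown here, but a standard linear range remap in HLSL looks like:

```hlsl
// Linearly remap a value from [inMin, inMax] to [outMin, outMax].
// e.g. RemapLuminanceLevels(lum, 0.0, 1.0, 0.15, 1.0) stops colour
// channels from getting any closer to black than 0.15.
float RemapLuminanceLevels(float value, float inMin, float inMax,
                           float outMin, float outMax)
{
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
```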
Shadows And Colour
The last technique to cover is another simple one that allows a fair bit of artistic license. To make realtime shadowing more interesting, when an object is covered by shadow we don't do the standard thing of just darkening the shadowed areas; we switch to a different (unlit) texture instead. Every object takes two sets of textures: the main texture and the shadow texture. When calculating the colour in the fragment shader, we choose which texture to sample from based on whether the fragment is in shadow. You can see how this is used in our camera scene:
On the floor, the shadows being cast onto the surface have a striking pink-purple gradient, while where the light strikes the surface the tones are lighter and peachy, with no gradient running down the frame. Here are the shadow texture and light texture we're using to achieve this.
And coming to the frag shader, we calculate whether or not a fragment is in shadow, and sample from either the light or shadow texture accordingly when calculating its colour. Unlike most games, we don't attenuate shadows over distance, so shadows are either on or off, and we apply shadow strength as an external variable (_ShadowStrength) depending on how we want it to look visually.
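In Unity terms, a sketch of that switch might look like the following; _ShadowStrength is the variable mentioned above, while `_ShadowTex` and the 0.5 threshold are illustrative assumptions:

```hlsl
// Sketch of the hard light/shadow texture switch in the fragment shader.
fixed shadowAtten = SHADOW_ATTENUATION(i);  // Unity's sampled shadow term, 0..1
float inShadow = step(shadowAtten, 0.5);    // hard on/off, no distance falloff
fixed4 lit      = tex2D(_MainTex,   i.uv);
fixed4 shadowed = tex2D(_ShadowTex, i.uv);
// _ShadowStrength lets artists dial how fully the shadow texture takes over.
fixed4 col = lerp(lit, shadowed, inShadow * _ShadowStrength);
```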
Again there’s nothing very complex here, but it allows the artists to carefully author the artistic look of a scene as they can control how colouring and texturing appears when shadows are cast. I’ve used the floor as a clear example but we also use it for the parts as they are often self-shadowed.
Pulling it all together
Here you can see the techniques we've gone through in this article being combined step by step, producing the final visual effect you see while playing Assemble With Care. While each of these steps is individually relatively simple, the combination and balancing of each one against the others ultimately led us to a painterly art style that we were really happy with.
Hopefully this breakdown was useful for anyone interested in similar techniques, or just curious about what went into ours. We're hoping to write a few more technical articles on different aspects of Assemble, so if there's anything you're particularly curious about, please reach out and let us know in the comments.
Thanks for reading!