Exploring the Naumachia of Parc Monceau with Minsar : Part 3

How to optimize 3D models for real time applications

Maëlys Jusseaux
9 min read · Mar 28, 2019

This is the third part of a four-part article presenting the technical and conceptual walkthrough of the experience.

What is optimization?

At this stage, I could have already imported my objects into Minsar. However, given my experience in 3D creation and real-time applications, I knew I needed to go through a very important step: optimization. So what do I mean by “optimization”?

Well, this is actually one of the basics of real-time 3D applications. A 3D object is composed of many triangles, or polygons. The more complex the object is, the denser its topology, that is to say, the triangles / polygons composing it. The denser the topology, the more time the device (computer, iPad, HoloLens…) needs to calculate it in order to render it in its environment. Indeed, real-time applications require an object to be recalculated every single frame in order to give that impression of real time (interactions with the light, movements, etc.).

This image shows the units that compose a 3D object. Here they are represented as quads, but in reality a quad is formed by two triangles. That is what creates a “face”. The difference between the first cube and the second is that the second cube has been subdivided: we took the first cube and split each of its faces into four smaller faces. Subdivision is another basic concept of 3D creation, and you typically subdivide an object to get more detail. But the more subdivisions there are, the more faces, and the more faces there are, the more triangles (or polygons) the render engine has to calculate in the end. © CG Tuts
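To get a feel for how fast subdivision inflates the triangle count, here is a small back-of-the-envelope sketch in Python. The only assumption is the cube’s 6 starting quads; the helper itself is just illustrative arithmetic:

```python
def triangles_after_subdivision(base_quads: int, levels: int) -> int:
    # Each subdivision level splits every quad into 4 smaller quads,
    # and each quad is rendered as 2 triangles.
    return base_quads * (4 ** levels) * 2

for level in range(4):
    print(f"level {level}: {triangles_after_subdivision(6, level)} triangles")
```

Three levels already take the cube from 12 to 768 triangles, which is why subdivision must be used sparingly on real-time assets.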

Now imagine asking your device to calculate 1 million triangles every frame (a frame lasting much less than a second). Your device will certainly either crash or run very slowly. That is why the word “optimization” is (or at least should be) on every game designer’s, developer’s, and 3D artist’s mind. Optimizing an asset consists of reducing its impact on the performance of the rendering device. It can be done in several ways, such as reducing the size of the textures, or decimating its topology, that is to say, reducing the density of the triangles which compose it.

As a matter of fact, we intend to develop a feature in Minsar which will allow users to automatically decimate their models if they are too heavy for the experience. Another feature will automatically reduce textures whose resolution is too high for the platform and the device used.

Triangle decimation

In my Naumachia experience, I wanted to import at least two ships carrying about twenty men each, all armed and ready to fight. After a quick calculation of the total number of triangles that implied, it became obvious that I had to rework the models I had downloaded in order to optimize them. Furthermore, thinking about the overall render of my experience in Parc Monceau, I reckoned the ships would look rather small from the visitor’s point of view: thus there was no need for particular detail.

So I started by drastically decimating the Roman shield from 8,000 to fewer than 500 triangles. Here again, from up close the shield looked very simple and much less detailed, but I kept in mind that it would be quite small in the final experience.
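For reference, Blender’s Decimate modifier (in collapse mode) takes a ratio of the original geometry to keep. A tiny sketch of the arithmetic behind the 8,000-to-500 reduction above (the numbers come from the article; the helper itself is just illustrative):

```python
def decimate_ratio(target_tris: int, source_tris: int) -> float:
    # The "Ratio" value to enter in Blender's Decimate modifier
    # to go from source_tris down to (roughly) target_tris.
    return target_tris / source_tris

print(decimate_ratio(500, 8000))  # 0.0625
```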

On the left, the “high-poly” model, and on the right, the result of the decimation tool in Blender. It must be noted that this result is not very good in terms of topology: some faces are distorted, and the general feeling is somewhat disordered. This generally comes from using an automatic process. It is always best to do things manually or with more advanced tools such as Topo Gun. There are three main cases in 3D production in which automatic decimation should NOT be used. First, if the object must be animated afterwards. Second, if you want to create the texture manually. Third, if someone else must intervene and might need to modify the object. None of these three cases applied to me, which is why I decided to use this solution.

The need for optimization is also why I had to retopologize Anthony Yaez’s Spartan helmet, which was over 5,000 triangles.

Made in Blender 2.79. For the Spartan helmet, the automatic decimation gave far too poor a result, completely breaking the general shape. In this case, I had little choice but to remake the topology manually. This picture shows you the process: in grey, you have the high-poly model. In orange, the topology I progressively built over the original. It is really like “wrapping” the original object in a rough, primary shape, and then progressively subdividing the shape to fit the original as closely as possible, but without too much detail (a special thanks here to Cedric Plessiet, professor at Paris 8 University, who took considerable time teaching me this ❤)

UV-mapping and unwrapping

Another means to optimize a model is to combine all its separate parts into one single object. On some objects, 3D creators may choose to apply a separate texture to particular parts, because it allows them to give those parts much more detail in the texture.

Indeed, to texture a 3D object, you need to UV-unwrap it, that is to say, you need to unfold it in your 3D software, exactly like you would unfold a dress or a pair of trousers to define their seams. UV-unwrapping an object enables the render engine to know where it should render each part of the texture, because you give it a map, called the UV map.

On this image, you can see that the UV map is really a 2D representation of the 3D cube. The red part of the cube has coordinates in 3D space (X, Y and Z), and the UV map translates these coordinates into a 2D space. The dimensions of that UV space are U and V (they are named that way to avoid any confusion with the X, Y and Z dimensions of 3D space). That translation of 3D coordinates onto a 2D space results in the creation of a map.
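The simplest possible unwrap is a planar projection: drop one 3D axis and normalize the remaining two into the [0, 1] UV square. This toy sketch is not how Blender actually unwraps a model, just the underlying idea; here it turns the corners of one cube face into UV coordinates:

```python
def planar_unwrap(vertices, drop_axis=2):
    # Project 3D vertices to 2D by discarding one axis (default: Z),
    # then normalize the remaining coordinates into the [0, 1] UV square.
    pts = [[c for i, c in enumerate(v) if i != drop_axis] for v in vertices]
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    u0, v0 = min(us), min(vs)
    su = (max(us) - u0) or 1.0  # avoid division by zero on degenerate faces
    sv = (max(vs) - v0) or 1.0
    return [((p[0] - u0) / su, (p[1] - v0) / sv) for p in pts]

# One face of a 2-unit cube, lying in the Z = 0 plane:
face = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
print(planar_unwrap(face))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```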

The problem is, a UV map has a limited amount of space. Thus, when you unwrap an entire human on one single UV map, each part of the object has to share a certain amount of pixels, which is determined by the texture resolution itself. As you perhaps know, an image is a bunch of pixels. A pixel is to an image what a triangle is to a 3D object. The “quality” of this image, its level of detail, is called the resolution, and it depends on the size of the pixels composing the image. The smaller the pixels, the more of them there are in the image, and the more detail you will have. The larger the pixels, the less space they leave each other in the image, and the less detailed the image will be.

On the left, the pixels are much larger than on the right. If you look at the circles, you can see that the one on the left looks rather blurred compared to the neat circle on the right. © The Ortho Cosmos.

Let’s say your texture has a resolution of 2048 by 2048 pixels: all the parts of your object will have to share those 2048 × 2048 pixels. Instead, if you decide to unwrap your human’s head separately, as an individual object, and the face texture is also 2048 × 2048, the face will be much more detailed, because the face parts are the only ones sharing the texture; they don’t have to share it with the rest of the body. That is also why in some 3D animation movies, the characters’ heads are unwrapped separately: it gives them much more detail.
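The trade-off can be put into numbers. Assuming, for simplicity, that every part gets an equal share of the texture, this sketch compares the pixel budget of a head sharing one 2048 × 2048 map with nine other body parts versus a head unwrapped on its own map (the part count of ten is a made-up figure for illustration):

```python
def texels_per_part(resolution: int, parts: int) -> float:
    # Rough estimate: total pixels of a square texture, split evenly
    # between all the parts sharing the same UV map.
    return (resolution * resolution) / parts

shared = texels_per_part(2048, 10)    # head shares the map with 9 other parts
dedicated = texels_per_part(2048, 1)  # head has its own 2048 x 2048 map
print(dedicated / shared)  # 10.0: ten times more pixels for the face
```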

About baking

The basics of baking

In our case, first, we are in a real-time application. Second, we are creating a schematic experience. Third, the 3D elements won’t be seen from up close. These three reasons explain why I thought I should combine as many objects as possible. I started with the ship. In Blender, I took all the parts of the ship, duplicated them and merged them into one single object. At this stage, I had to re-unwrap the model to give it a new UV map with all the parts well organized.

On the left-hand side, the ship with all the parts merged into one object. On the right-hand side, the UV map. Again, it is generally much better to unwrap a model manually, because it gives you much more control over the final result. However, as I said, I wanted to work as fast as possible, which is why I used Blender’s Smart UV Project tool (which generally does a pretty good job, by the way).

Then I performed an operation which is also essential in the world of real-time 3D: baking the textures. So what does baking a texture mean? Well, basically, it means asking the software (in my case Blender) to copy the texture information of one model and paste it onto a second model. Here again, we can see the purpose of a UV map: the map of the first model will be transposed onto the map of the second model, according to the new positions of the different parts. For instance, in model A, the information concerning the arm might be located at UV coordinates (0.5, 0.3), whereas in model B, it might be at (1, 0.5). Baking consists of taking the information of an element at a certain location on one map, and repeating it on another map where this element is located somewhere else. This is exactly what I did for my ship: I asked the software to take all the separate textures applied to the separate parts, and recombine them into one single texture. I did the same thing for the Roman and Greek warriors, which gave me, for each one, a single texture containing the body and the weapons.
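Conceptually, baking walks every texel of the new map, finds where that surface point lived on the old map, and copies the color over. Here is a minimal sketch of that coordinate translation, with made-up UV regions (each region is given as (u0, v0, u1, v1); real bakers work per-triangle, not per-rectangle):

```python
def bake_lookup(part, u, v, source_layout, target_layout):
    # Translate a texel coordinate (u, v) in the NEW atlas back to the
    # coordinate of the same surface point in the OLD atlas.
    du0, dv0, du1, dv1 = target_layout[part]
    su0, sv0, su1, sv1 = source_layout[part]
    fu = (u - du0) / (du1 - du0)  # fraction across the target region
    fv = (v - dv0) / (dv1 - dv0)
    return (su0 + fu * (su1 - su0), sv0 + fv * (sv1 - sv0))

# The hull used to have a whole texture to itself; in the combined
# atlas it only occupies the lower-left quarter.
source = {"hull": (0.0, 0.0, 1.0, 1.0)}
target = {"hull": (0.0, 0.0, 0.5, 0.5)}
print(bake_lookup("hull", 0.25, 0.25, source, target))  # (0.5, 0.5)
```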

Single texture for the Greek ship, baked from all the separate parts of the original model. Blender 2.80.

Normal maps

In the 3D creation world, baking textures is most often used for normals. What is a normal? A normal is basically the direction a triangle (or polygon) is facing. Normals are responsible for the relief of your object, because they affect the way light interacts with the triangle, and more precisely, the direction in which it bounces. Indeed, details on a model are actually produced by the orientation of each triangle constituting the detail itself. That is what creates shadows and, consequently, volumes.

When you decimate a model, logically, you lose a good deal of detail. However, you can make sure that these original details will still be visible on your decimated model. How? By creating what is called a “normal map”. It is exactly the same thing as the color texture we baked, which will be applied on a set of UVs, except that it won’t affect the color of the object, but its relief, and the way light interacts with it. The normal map is a texture which indicates to the renderer (Blender, or Minsar) how each triangle of the model is supposed to react to light.

Let’s take the planks on the deck of our ship. Between each plank, there is a physical cavity: the topology here is shaped such that there is a demarcation between the planks. That demarcation, that cavity, is made of triangles, as we have seen earlier. It is the orientation of the triangles (or polygons) that creates the relief, the shape of that cavity.

Let’s assume that on the decimated model, the cavities have disappeared because the topology has been simplified, and the triangles at this location are now oriented flat. By asking the renderer to take into account the normal map applied to the simplified object (a normal map which we will have baked from the more complex object), the renderer will understand that at this precise spot between two planks, the polygons are supposed to be oriented in such a way that they create a cavity, and thus have a particular impact on the light. That is how a cavity will appear on our decimated model, though it will be a pure illusion.

This is the normal map of the wooden planks of the ship. This particular color comes from the fact that the image is encoded in RGB, where each channel corresponds to an orientation along one axis. © Alint.
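The bluish-purple tint of a typical normal map comes from this encoding: each axis of the normal, which ranges from -1 to 1, is remapped into an 8-bit channel. A flat, straight-up normal (0, 0, 1) therefore encodes as roughly (128, 128, 255), hence the dominant blue. A minimal sketch:

```python
def encode_normal(nx, ny, nz):
    # Remap each component of a unit normal from [-1, 1] to [0, 255]:
    # R stores X, G stores Y, B stores Z.
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

print(encode_normal(0.0, 0.0, 1.0))  # flat surface: the classic purple-blue
print(encode_normal(1.0, 0.0, 0.0))  # surface tilted fully along +X
```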

Just as it had different color textures, the ship had different normal maps for its different parts. Indeed, that ship was most certainly designed to be seen from up close, which is why the creator paid particular attention to the normal maps. In my case, I felt that I didn’t need the normal maps, because the details would be invisible from afar. That is why I decided not to use any normal map at all. However, I baked the roughness information in order to have the right amount of specularity on the ship, which is supposed to be wooden.

The particular case of glTF pipeline

I wanted my final object to be exported as .glTF, and that implied another specific manipulation. glTF is a 3D format which aims at being universal, open source and free. Its intention is to provide a full-fledged alternative to proprietary formats such as FBX or MAX.

At Minsar, we believe in the democratization of creation, which is why we have decided to support this format, which aims at being usable by anyone without necessarily having to pay huge sums for proprietary software.

Yet creating a glTF object and exporting it in that format requires some specific manipulations, such as combining the Ambient Occlusion, Metallic and Roughness information into one single texture, called ORM (more information on Khronos Group’s website and in the tutorial I wrote on the subject in the Minsar User Guide). In the case of my ship, I baked the roughness map, and in Photoshop I created a white texture for Occlusion and a black one for Metallic. Then I combined the three into an ORM.
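The packing itself is simple channel assignment: occlusion goes in the red channel, roughness in green, metallic in blue (hence “ORM”). This sketch does it per-pixel on tiny hand-made grayscale maps, mirroring the white-occlusion / black-metallic setup described above; real workflows do the same thing in Photoshop or an image library:

```python
def pack_orm(occlusion, roughness, metallic):
    # Pack three grayscale maps (lists of pixel rows) into one RGB image:
    # R = ambient occlusion, G = roughness, B = metallic.
    return [
        [(o, r, m) for o, r, m in zip(o_row, r_row, m_row)]
        for o_row, r_row, m_row in zip(occlusion, roughness, metallic)
    ]

# White occlusion, a baked roughness map, black metallic (as in the article):
ao = [[255, 255], [255, 255]]
rough = [[10, 200], [90, 30]]
metal = [[0, 0], [0, 0]]
orm = pack_orm(ao, rough, metal)
print(orm[0][0])  # (255, 10, 0)
```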

In this third part, I explained the very basics of optimizing 3D models for real-time applications. Now it is time to go and see the final result created in Minsar!


Maëlys Jusseaux

A cultural and artistic projects researcher on Minsar, I’m also a digital artist working on a PhD about immersive technologies applied to Cultural Heritage.