Developers’ Diaries. Issue #2

Today we will talk about our current work in mesh processing: texture mapping.

One of our goals is to learn how to convert any mesh into a model compatible with AR, VR, and 3D printing. The user can load any model, whether created by a 3D artist or acquired with 3D scanning software. Many such models have a very fragmented texture, which produces artifacts when rendering the model, reduces the quality of mesh processing, and drastically increases the file size. To avoid these problems, we are developing a universal algorithm for creating a high-quality projection of a mesh onto a plane.
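To give a concrete sense of what a "high-quality projection" means, here is a minimal, hypothetical Python sketch (not our actual Easy 3D Scan code; the function name and mesh representation are illustrative assumptions) that measures how much a UV mapping stretches a single triangle, using the singular values of the linear map from UV space to the triangle's surface. For an isometric, distortion-free patch both singular values equal 1.

```python
import numpy as np

def triangle_distortion(p0, p1, p2, uv0, uv1, uv2):
    """Singular values of the linear map from UV space to the
    triangle's 3D plane; (1, 1) means a perfectly isometric patch."""
    # Edge vectors of the triangle in 3D and in UV space.
    e1, e2 = np.asarray(p1) - p0, np.asarray(p2) - p0
    u1, u2 = np.asarray(uv1) - uv0, np.asarray(uv2) - uv0
    # Build an orthonormal frame of the triangle's plane.
    x = e1 / np.linalg.norm(e1)
    n = np.cross(e1, e2)
    y = np.cross(n / np.linalg.norm(n), x)
    E = np.array([[x @ e1, x @ e2],
                  [y @ e1, y @ e2]])      # 2x2: 3D edges in plane coordinates
    U = np.column_stack([u1, u2])         # 2x2: the same edges in UV space
    J = E @ np.linalg.inv(U)              # maps UV vectors to surface vectors
    return np.linalg.svd(J, compute_uv=False)  # (sigma_max, sigma_min)

# A right triangle mapped with a uniform 2x scale: both stretches equal 2.
sigmas = triangle_distortion([0, 0, 0], [2, 0, 0], [0, 2, 0],
                             [0, 0], [1, 0], [0, 1])
# sigmas -> [2.0, 2.0]
```

Summing a function of these singular values over all triangles gives a single distortion energy for a patch, which is the kind of continuous quantity the unfolding has to trade off against the number of seams.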

We face a number of difficulties in solving this problem.

1) When unfolding a mesh, it is necessary to balance the distortion of the resulting patches against the number of texture seams, since both cause visual artifacts. Because these two objectives differ in nature (the former is continuous, the latter discrete), they are hard to combine into a single optimization problem.

2) Users can import very large meshes; this is especially true for 3D-scanned models. The result is a tremendous number of parameters and, therefore, a huge cost in computation time and memory. Optimizing the algorithms becomes a pressing need.

3) The algorithm should cope with any mesh topology, including non-manifold geometry.

4) It is impossible to avoid texture seams entirely (except in a few special cases). We therefore want them to be as unobtrusive as possible to the human eye, i.e. placed in hidden areas and along the feature lines of smooth surfaces.
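To illustrate point 3 above, here is a minimal sketch of one common non-manifold check. It assumes a triangle mesh given as vertex-index triples (an illustrative format, not our internal one): an edge is non-manifold when more than two faces share it.

```python
from collections import Counter

def non_manifold_edges(faces):
    """Return edges shared by more than two triangles.
    `faces` is a list of (i, j, k) vertex-index triples; each edge is
    stored with sorted endpoints so winding order does not matter."""
    count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n > 2]

# Three triangles hanging off the same edge (0, 1): a "fin" configuration.
fin = [(0, 1, 2), (0, 1, 3), (0, 1, 4)]
# non_manifold_edges(fin) -> [(0, 1)]
```

A mesh with such edges has no consistent two-sided neighborhood there, which is exactly why many unfolding algorithms break on it and why ours must handle the case explicitly.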

The universality of our software tools is the main challenge in this ambitious endeavor. The surfaces of a scanned model and a manually created mesh are considerably different. Scanned meshes are heavier, smoother, and more complex, and they often contain noise that complicates processing. Models created by artists are simpler and lighter, but they are often much more fragile: almost every detail is significant to the overall appearance of the model. Additionally, a mesh is often created in two stages: a real object is scanned first and then undergoes software or artistic post-processing. Such “mixed” models can inherit the features of both types of meshes.

These features will be available in Easy 3D Scan, and you can already see them at work in the internal version of the software.

This will be a significant leap in content production and will help sell the right content on the marketplace of the Cappasity platform.

We would be glad to receive your feedback on this material in our Telegram channel: https://t.me/artoken. Please voice your opinion about shortcomings and tell us what you would like to know.

We have a suggestion: how about asking us questions on topics that interest you in the Telegram channel? We will collect them and reply in a future issue of Developers’ Diaries.

Like what you read? Give Cappasity a round of applause.
