How to perform a 3D Segmentation in Blender 2.82

Lisa Schneider
5 min read · Sep 8, 2020


The availability of 3D shape data has increased significantly over the past years due to advances in 3D capturing sensors and 3D modeling tools such as Blender. This naturally leads to a growing interest in the high-level understanding of 3D shapes. Specifically, the part segmentation of 3D shapes gives valuable insights into the characteristics of an object by dividing it into its meaningful parts.

Several deep learning frameworks exist to perform a part segmentation of 3D objects. One popular example is MeshCNN [1], which works directly on the edges of 3D mesh representations. To train MeshCNN, it is necessary to provide a manual ground truth segmentation that contains all edge labels ordered by their edge ID.

Within this article, I will show how I performed a manual part segmentation within Blender 2.82 and used it as a ground truth for training MeshCNN. Before diving into the step-by-step description, let’s take a quick look at the input that MeshCNN requires.

Naturally, it is necessary to provide all input objects as .obj files for training and testing the model. For each object, there should be a corresponding .eseg file, which collects the class labels per edge, ordered by their edge IDs. Each row thus contains the class of the edge whose edge ID equals the row number. The .eseg file is used to train the network and is extracted from Blender after performing the part segmentation.
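As a purely illustrative example (the class indices are made up), an .eseg file for a tiny mesh with five edges could look like this, with line i holding the label of edge i:

```
2
2
1
4
4
```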

Furthermore, MeshCNN requires one .seseg file per object to compute the test accuracy. It allows each edge to have more than one correct segmentation class, which enables smooth transitions between the segmentation classes. Each .seseg file provides a value for every segmentation class for each edge; each column with a value greater than zero indicates a correct segmentation class. The values derive from the segmentation labels of the neighbors of each edge. The .seseg files are created after the manual part segmentation within Blender.
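Again purely as an illustration, a matching .seseg file with four segmentation classes would have one row per edge and one column per class; an edge inside a part keeps a single non-zero column, while an edge on a boundary spreads its mass over the adjacent classes:

```
0 1 0 0
0 0.66667 0.33333 0
1 0 0 0
0 0 0.5 0.5
0 0 0 1
```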

Prepare Blender

To perform a manual part segmentation of an object in Blender, it is first necessary to import the object into Blender. I suggest keeping the vertex order of your object by enabling “Keep Vert Order” under the Geometry section when importing the object file.
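If you prefer to script the import instead of clicking through the dialog, a minimal sketch for the Blender 2.82 Python API could look like the following; the file path is a placeholder, and split_mode='OFF' corresponds, as far as I know, to the “Keep Vert Order” option:

```python
import bpy

# Import an .obj file while keeping the vertex order from the file.
# split_mode='OFF' maps to "Keep Vert Order" in the import dialog.
bpy.ops.import_scene.obj(
    filepath="/path/to/my_object.obj",  # placeholder path
    split_mode='OFF',
)
```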

Afterwards, create one vertex group per class under Object Data Properties by clicking the + sign in the upper right corner. You can rename a vertex group by double-clicking it. Since this gets tedious for hundreds of part segmentations, I provide a PartSegmentationToolbox on GitHub that includes a preparation script for Blender. It automatically creates a number of vertex groups and assigns a color to each one. The provided script prepares Blender for four segmentation classes; feel free to adapt it to your needs.
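The preparation script itself lives in the toolbox; just to give a rough idea of what such a preparation looks like (the class names below are placeholders, not necessarily the ones the toolbox uses), the groups could be created like this:

```python
import bpy

obj = bpy.context.active_object  # the imported mesh must be the active object

# Create one vertex group per segmentation class (placeholder class names).
for name in ["Head", "Arms", "Legs", "Feet"]:
    if name not in obj.vertex_groups:
        obj.vertex_groups.new(name=name)
```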

Assigning selected vertices to vertex group “Head”

Part Segmentation

Now we are ready to perform the actual part segmentation. For this we will utilize Blender vertex groups. Each vertex group corresponds to one segmentation label. To assign vertices to a vertex group, choose a selection tool and highlight all vertices that belong to one class. I personally prefer the Circle Select tool with the shortcut “C”. Once all vertices of one label are collected, assign them to their dedicated vertex group by selecting the correct group and pressing Assign.
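If you prefer scripting over clicking, the same assignment can be sketched in Python; note that vertex selection flags are only reliable in Object Mode, and the group name “Head” is just an example:

```python
import bpy

obj = bpy.context.active_object

# Selection flags are only up to date in Object Mode, so toggle modes first.
bpy.ops.object.mode_set(mode='OBJECT')
selected = [v.index for v in obj.data.vertices if v.select]

# Assign the selected vertices to the "Head" group with full weight.
obj.vertex_groups["Head"].add(selected, 1.0, 'ADD')
bpy.ops.object.mode_set(mode='EDIT')
```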

There are multiple hacks to accelerate the manual segmentation. First of all, it is possible to switch to an X-ray view of the object. This view lets you select vertices through the whole object and not only on the front side.

X-ray view button within Blender 2.82

Furthermore, it is possible to remove the vertices of one group from another group. For the “chore” segmentation class, for example, I first assigned all vertices of the object to the vertex group “chore”. Afterwards, I deselected this group, selected the other groups (Legs, Feet, Arms, Head), and finally removed them from the “chore” group to end up with only the chore vertices.
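Scripted, this subtraction could look roughly like the sketch below; it is only an illustration, and the group names are taken from the example above:

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='OBJECT')

chore = obj.vertex_groups["chore"]
others = ["Legs", "Feet", "Arms", "Head"]

# Remove every vertex that belongs to one of the other groups from "chore".
for name in others:
    group_index = obj.vertex_groups[name].index
    indices = [v.index for v in obj.data.vertices
               if any(g.group == group_index for g in v.groups)]
    chore.remove(indices)
```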

It is worth noting that the part segmentation is preserved when you upsample or downsample the mesh within Blender. Any alteration is directly transferred to the vertex groups and can be extracted directly afterwards.

Coloring

To color your part segmentation, Blender offers Material Properties that can be assigned to the different vertex groups. For this, add a new material, press Use Nodes, and choose a base color. Then, select the vertex group that should receive this color and press Assign to add the material to it. If you switch back to Object Mode (Tab), the selected part is highlighted in the color you chose. The preparation script provided in my PartSegmentationToolbox on GitHub also creates and assigns colors automatically, if you want to save some time.

Resulting part segmentation performed within Blender
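If you want to script the coloring as well, a minimal sketch (colors and class names are arbitrary examples, not the toolbox defaults) could create one material per class; assigning a material to a selected part then corresponds to setting the active material slot and calling the material-slot assign operator in Edit Mode:

```python
import bpy

obj = bpy.context.active_object

# One colored material per class (placeholder names and RGBA colors).
colors = {
    "Head": (1.0, 0.2, 0.2, 1.0),
    "Arms": (0.2, 0.8, 0.2, 1.0),
    "Legs": (0.2, 0.4, 1.0, 1.0),
    "Feet": (1.0, 0.8, 0.1, 1.0),
}
for name, rgba in colors.items():
    mat = bpy.data.materials.new(name=name)
    mat.use_nodes = True
    mat.node_tree.nodes["Principled BSDF"].inputs["Base Color"].default_value = rgba
    obj.data.materials.append(mat)

# In Edit Mode, with the vertices of one class selected, a material slot can
# then be assigned via:
#   obj.active_material_index = <slot index>
#   bpy.ops.object.material_slot_assign()
```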

Extract labels

Once you are satisfied with your manual segmentation, check out the extraction script in my PartSegmentationToolbox on GitHub. Just adjust the paths to your own locations, copy the content of the script, and paste it into the Python console within Blender to extract the edge labels and create the .eseg file.
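The actual extraction script is in the toolbox; just to convey the idea, a stripped-down sketch could iterate over the edges, look up the vertex group shared by both end points, and write one label per line. It assumes every vertex belongs to exactly one group and that this edge ordering matches what your pipeline expects, which is not guaranteed for MeshCNN’s internal edge ordering:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

def groups_of(vertex):
    """Return the set of vertex-group indices a vertex belongs to."""
    return {g.group for g in vertex.groups}

with open("/path/to/my_object.eseg", "w") as f:  # placeholder path
    for edge in mesh.edges:
        v1, v2 = (mesh.vertices[i] for i in edge.vertices)
        shared = groups_of(v1) & groups_of(v2)
        # Edges on a class boundary fall back to one of the end-point labels.
        label = min(shared) if shared else min(groups_of(v1))
        f.write(f"{label}\n")
```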

After you have created all .eseg files, you can run the create_sseg.py script from my PartSegmentationToolbox on GitHub with the path to your project location to create the corresponding .seseg files.
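The create_sseg.py script does this for you; conceptually, the soft labels can be thought of as distributing each edge’s probability mass over the classes occurring in its neighborhood, roughly like this sketch (edge_labels and edge_neighbors are assumed inputs here, not the script’s actual interface):

```python
import numpy as np

def soft_labels(edge_labels, edge_neighbors, num_classes):
    """Spread each edge's mass over the classes of the edge and its neighbors."""
    seseg = np.zeros((len(edge_labels), num_classes))
    for e, neighbors in enumerate(edge_neighbors):
        classes = [edge_labels[e]] + [edge_labels[n] for n in neighbors]
        for c in classes:
            seseg[e, c] += 1.0 / len(classes)
    return seseg

# Toy usage: three edges, two classes, edge 1 sits on the class boundary.
print(soft_labels([0, 0, 1], [[1], [0, 2], [1]], num_classes=2))
```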

I hope this description helps you to create a manual part segmentation for MeshCNN or other part segmentation frameworks.

References

[1] Hanocka, R., Hertz, A., Fish, N., Giryes, R., Fleishman, S., Cohen-Or, D., 2019. MeshCNN: A Network with an Edge. ACM Transactions on Graphics (TOG) 38, 1–12.
