Machine learning and photogrammetry combined in one software

The Pointscene Diaries
6 min read · Nov 4, 2017

Let’s talk about photogrammetry today. Although it’s nearly as old as modern photography itself, it’s currently undergoing rapid change driven by technical progress. Best example? Using drones/UAVs for photogrammetric purposes. But this time I’d like to focus on something else. Pix4D has recently introduced a new version of their Pix4Dmapper software which classifies 3D point clouds using machine learning techniques. I thought it was a good opportunity to test this new approach. Will we start to rely entirely on machines in photogrammetry workflows from now on? Let’s find out!

First, a short introduction to the topic. Point cloud classification is an important step in generating high-quality 3D models and DTMs. Recent advances in machine learning enable faster and more reliable classification algorithms which, more importantly, can easily be applied to new data. All this makes machine learning a very promising approach to try out in photogrammetric workflows. The solution suggested by Pix4D uses supervised machine learning to train the classification algorithm. What’s so innovative about this approach? It makes use of both geometric and colour information to improve classification accuracy. The classifier assigns each point to one of five classes (ground, building, high vegetation, road, human-made object) based on computed geometric and colour features.
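To make the idea more concrete, here is a minimal sketch of supervised point classification in this spirit: geometric features (such as height above ground or planarity) are combined with RGB colour, and a classifier trained on labelled points predicts one of the five classes. This is purely illustrative; the feature set, the random forest model, and the random training data are my assumptions, not Pix4D’s actual implementation.

```python
# Illustrative sketch only -- not Pix4D's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["ground", "building", "high_vegetation", "road", "human_made"]

# Hypothetical training set: one row per point,
# columns = [height_above_ground, planarity, verticality, R, G, B].
rng = np.random.default_rng(0)
X_train = rng.random((1000, 6))
y_train = rng.integers(0, len(CLASSES), size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Predict class labels for new, unlabelled points.
X_new = rng.random((5, 6))
print([CLASSES[i] for i in clf.predict(X_new)])
```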

Image acquisition grid

Dataset

To test the classification tool in Pix4D, I used a set of images captured with a DJI Phantom 3 drone for processing and point cloud generation. The data were gathered over a residential area outside the Finnish town of Järvenpää. The photos were acquired in a dense grid; in total, 961 images were captured over the area. I chose this dataset because the region contained objects from all the classes used in Pix4Dmapper.

Processing data

Before processing, I created a new Pix4D project and loaded the images. After that, I modified their properties and set the coordinate system to WGS84; image geolocation and orientation were read from the EXIF data. Next, I selected the coordinate system for the output data and the Ground Control Points. I also chose 3D Maps as the processing template (it’s intended for aerial images acquired using a grid flight plan).
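As a side note on what “geolocation from EXIF” means in practice: the drone writes its WGS84 position into each JPEG, and Pix4D reads it automatically. For the curious, here is a minimal sketch of extracting it yourself; Pillow and the file name are my assumptions and play no part in the Pix4D workflow itself.

```python
# Illustrative sketch: read WGS84 latitude/longitude from a drone JPEG.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def dms_to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def read_gps(path):
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(0x8825)  # 0x8825 is the GPS IFD tag
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(read_gps("DJI_0001.JPG"))  # hypothetical file name
```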

First, I ran the initial processing in rapid mode to get a quick preview of the results and check that everything worked. It turned out that 6 images were not calibrated. The areas captured in these images were densely covered with trees, so a sufficient number of matches could not be found. Despite this, I decided to proceed without manually calibrating the cameras: the dataset contained so many correctly calibrated images that the lack of just 6 cameras wouldn’t noticeably affect the results or cause problems during point cloud generation.

Uncalibrated cameras location

I imported a text file with the GCP coordinates into the software and marked them on the images. Next, I ran the full 3-step processing; point classification was performed on the densified point cloud during the second step. At the end of processing, I got a classified dense point cloud, a 3D mesh, a DSM, and an orthomosaic. A quick visual inspection of the densified point cloud confirmed that the results were not noticeably harmed by the uncalibrated cameras. Points were automatically assigned to one of five default classes: Ground (yellow), Road Surface (grey), High Vegetation (green), Building (purple), and Human-made Object (blue).
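If you export the classified cloud, for example as a LAS file, the class codes travel with the points and can be inspected outside Pix4D. A minimal sketch, assuming laspy and the standard ASPRS class codes (2 = Ground, 5 = High Vegetation, 6 = Building, 11 = Road Surface); the exact codes in a given export are worth verifying.

```python
# Count how many points ended up in each class of an exported LAS file.
import numpy as np
import laspy

las = laspy.read("densified_point_cloud.las")  # hypothetical file name
codes, counts = np.unique(np.asarray(las.classification), return_counts=True)
for code, count in zip(codes, counts):
    print(f"class {code}: {count} points")
```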

Point cloud visualisation

The result of point cloud classification in Pix4Dmapper

To check the classification results, I visualised the point cloud groups in the rayCloud. In Pix4Dmapper, classes are displayed in predefined colours. Just by looking at the point cloud, I could tell that the classification results were pretty good; only on closer inspection did I notice some mistakes. Perhaps the most common one was classifying parts of buildings as Human-made Object (1). A few buildings were classified as High Vegetation (2); the reason for this might be the colour of the roof, which was dark green. Another quite common mistake was misclassification of the road surface. This happened mostly in areas where the colour of the surface changed, for example because of a building casting a shadow. In those cases, the road was most often classified as Ground (4) or Human-made Object (5).

Misclassification of buildings and road surface

The real vegetation was detected well and there were not many misclassified points. However, in some cases vegetation points formed a vertical surface and the algorithm classified them as Building (3).
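This confusion is easy to understand geometrically: a “wall” of foliage and a building wall both have a near-horizontal surface normal, so a verticality feature alone cannot separate them. Below is a minimal sketch of such a feature, computed from a local neighbourhood’s normal; this is my illustration, not Pix4D’s actual feature set.

```python
# Verticality of a local point neighbourhood: ~0 for flat ground, ~1 for walls.
import numpy as np

def verticality(neighbourhood):
    """neighbourhood: (n, 3) array of nearby points."""
    centred = neighbourhood - neighbourhood.mean(axis=0)
    # The right-singular vector with the smallest singular value
    # approximates the local surface normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return 1.0 - abs(normal[2])

rng = np.random.default_rng(0)
flat = np.c_[rng.random(50), rng.random(50), 0.01 * rng.random(50)]  # ground patch
wall = np.c_[0.01 * rng.random(50), rng.random(50), rng.random(50)]  # vertical patch
print(verticality(flat), verticality(wall))  # ~0 vs ~1
```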

High vegetation classified as building

On the other hand, cars and fences were well detected and assigned to the Human-made Object group (6). To sum up, the classification algorithm worked really well for this dataset. Perhaps my dataset was similar to the training data used to teach the algorithm.

Correct classification of cars and fences

Point cloud editing

Pix4D software also allows the user to edit the point cloud manually. This can be used to remove unwanted noise from the densified point cloud or to reclassify points into a different point group. In my case, I decided to correct some of the classification mistakes. To begin with, I chose to reclassify part of a street lamp from the Building class to the Human-made Object class. All point class modifications are done in the Edit densified point cloud mode. Once in editing mode, I could select points by drawing a polygon around them. Point selection is rather tricky; you have to be careful not to select unwanted points. What I did was apply a clipping box around the lamp first, hide the points outside the box, and then select only the points I intended to. Next, I assigned the selected points to the Human-made Object class by choosing it from a drop-down list. Drawing a clipping box for each object to be corrected can be a bit tiresome and time-consuming, so I suppose this tool could be problematic for making many detailed corrections. It should, however, be enough for removing some noise points; in that case, the user can assign the noise points to the Deleted group or create a new group. (A programmatic sketch of the same box-based idea follows below.)
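For comparison, here is a minimal sketch of the same box-based reclassification done programmatically on an exported LAS file: select the points inside an axis-aligned box around the lamp and reassign their class code. laspy, the file names, the box coordinates, and the class codes are all my assumptions; this happens outside Pix4Dmapper itself.

```python
import numpy as np
import laspy

las = laspy.read("densified_point_cloud.las")  # hypothetical file name
xyz = np.column_stack((las.x, las.y, las.z))

# Hypothetical box corners around the lamp, in the cloud's coordinate system.
box_min = np.array([384210.0, 6704560.0, 40.0])
box_max = np.array([384212.0, 6704562.0, 48.0])
inside = np.all((xyz >= box_min) & (xyz <= box_max), axis=1)

BUILDING = 6      # standard ASPRS class code for buildings
HUMAN_MADE = 19   # hypothetical code chosen from the reserved range

cls = np.asarray(las.classification)
cls[inside & (cls == BUILDING)] = HUMAN_MADE
las.classification = cls
las.write("densified_point_cloud_edited.las")
```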

Steps of manual point cloud editing

Generating DSM and DTM

Generated DSM and DTM

Last but not least, I also wanted to check the DTM created by the software. As I read on the Pix4D support website, DTM generation requires the merged raster DSM and the computed classification mask as input. Visual inspection of the DTM showed only a few mistakes, caused by outlying noise points. Also, the areas covered with trees were correspondingly less accurately represented in the DTM.
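The underlying idea is easy to reproduce in miniature: keep only the ground-classified points and interpolate them onto a regular grid. The sketch below is not Pix4D’s algorithm (which works on the raster DSM plus the classification mask); the file name and the 0.5 m cell size are assumptions.

```python
import numpy as np
import laspy
from scipy.interpolate import griddata

las = laspy.read("densified_point_cloud.las")  # hypothetical file name
ground = np.asarray(las.classification) == 2   # ASPRS code 2 = Ground
x = np.asarray(las.x)[ground]
y = np.asarray(las.y)[ground]
z = np.asarray(las.z)[ground]

cell = 0.5  # assumed grid resolution in metres
gx, gy = np.meshgrid(np.arange(x.min(), x.max(), cell),
                     np.arange(y.min(), y.max(), cell))
dtm = griddata((x, y), z, (gx, gy), method="linear")  # NaN outside the data hull
```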

Mistakes in DTM caused by vegetation

Conclusions

I must admit that I was positively surprised by the quality of the results. Using colour information in the new classification algorithm indeed improved the reliability of the classification. As the amount of training data grows, we can expect the classifier’s quality to improve as well. So after all, it seems that in the future we can leave point cloud classification to algorithms. Another noteworthy addition is DTM generation using the classified point cloud; it makes the end result noticeably more accurate and less error-prone.

If you want to examine how the classification algorithm performed, you can check the point cloud in Pointscene here.

If you’re interested in other photogrammetric software, be sure to check out this text as well.

Check the full scene here

Test Pointscene today

Want to know how to include point clouds in your projects? Visit www.pointscene.com to explore the many examples in the gallery, or start a free trial and upload your own data within minutes.
