Deep learning for semantic segmentation of drains from LIDAR data: initial assessment

Peterfitchcsiro
4 min read · Mar 26, 2020

In my last article I wrote about using OpenCV to identify a drainage network from LIDAR data. The results weren't bad, but I was interested to see whether I could do better with deep learning. OpenCV does include deep learning models that can be trained, but a better approach seemed to be to look at the best-performing segmentation models and see whether one of those could be used. A good background article on deep learning for segmentation is this one by George Seif.

uNet looked interesting: it won a biomedical segmentation challenge in 2015, and if you are interested you can read the uNet paper here.

So can uNet be used to find drains from LIDAR? Let's find out!

Getting Going

There are a couple of implementations available on GitHub, and to get going quickly I cloned this repo, which also includes additional background information.

In the repository you will find not only the model but also sample data and a Jupyter notebook, which you can use interactively to test your environment setup and become familiar with the model.
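
Before running the notebook, it is worth a quick sanity check that the deep learning stack is visible to Python. The repo is written in Keras with a TensorFlow backend, so something like the following (versions and device listing only, nothing repo-specific) should run cleanly if the environment is right.

```python
# Quick environment sanity check before opening the notebook.
import tensorflow as tf
import keras
from tensorflow.python.client import device_lib

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
# Lists the CPU and any GPU devices TensorFlow can see
print([d.name for d in device_lib.list_local_devices()])
```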

Once downloaded, a quick look at the test image data is interesting. Note that the masks are simply the target features in a black-and-white image. The images are 512×512 pixels, whereas the LIDAR tiles are 1001×1001 pixels. The repo includes 30 sample images and corresponding masks that can be used to train the model; a sample image and its mask are shown below. If your environment is set up correctly, you should be able to train the uNet model on the sample data and end up with a trained set of weights. It takes a little while.

Left: original image. Right: training mask.
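
To make that training step concrete, here is a rough sketch based on the repo's training notebook. The unet() and trainGenerator() helpers, folder paths, and hyperparameters below are the repo's defaults as I understand them, so treat this as illustrative and check the notebook for the exact calls.

```python
# Sketch of training uNet on the repo's sample membrane data.
# Helper names (unet, trainGenerator) come from the zhixuhao/unet repo;
# see its main.py / training notebook for the canonical version.
from model import unet
from data import trainGenerator
from keras.callbacks import ModelCheckpoint

# Light augmentation, passed through to Keras' ImageDataGenerator
data_gen_args = dict(rotation_range=0.2,
                     width_shift_range=0.05,
                     height_shift_range=0.05,
                     shear_range=0.05,
                     zoom_range=0.05,
                     horizontal_flip=True,
                     fill_mode='nearest')

# Yields (image, mask) batches from the sample training folders
train_gen = trainGenerator(2, 'data/membrane/train', 'image', 'label',
                           data_gen_args)

model = unet()  # build and compile the U-Net
checkpoint = ModelCheckpoint('unet_membrane.hdf5', monitor='loss',
                             verbose=1, save_best_only=True)
model.fit_generator(train_gen, steps_per_epoch=300, epochs=1,
                    callbacks=[checkpoint])
```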

Once the model is trained, you can test it on some sample images. Below is a test image and the prediction made by the model. It's done a pretty good job.

Test image on the left, and prediction on the right.

So how does the biomedical model perform on LIDAR data, straight up, without any changes? Let's test it. I took a few of the LIDAR tiles containing drains, converted them to 512×512 grayscale PNGs, and ran them through the trained model. Remember, the model has been trained on cell membrane data, not LIDAR. The output images are below.

Left: LIDAR tile with drains. Right: segmented image.
Second set of images: original on the left, prediction of drains on the right.
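
For reference, this is roughly how each tile was prepared and pushed through the trained model. The file names are placeholders, and I am assuming the repo's unet() helper (which takes an input_size argument) and simple scaling of the grayscale input to [0, 1]; adjust to match your own setup.

```python
# Illustrative only: resize a 1001x1001 LIDAR tile to a 512x512 grayscale
# image, run it through the trained weights, and save the prediction.
import cv2
import numpy as np
from model import unet

# Build the network for 512x512 single-channel input and load the weights
# saved during training (the repo's U-Net has no dense layers, so the same
# convolutional weights apply regardless of input size).
model = unet(input_size=(512, 512, 1))
model.load_weights('unet_membrane.hdf5')

tile = cv2.imread('lidar_tile_with_drains.png', cv2.IMREAD_GRAYSCALE)
tile = cv2.resize(tile, (512, 512), interpolation=cv2.INTER_AREA)

x = tile.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
x = x.reshape(1, 512, 512, 1)         # batch of one, single channel

pred = model.predict(x)[0, :, :, 0]   # predicted drain probability map
cv2.imwrite('lidar_tile_prediction.png', (pred * 255).astype(np.uint8))
```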

From the above, you can see that the model has done a pretty good job. Now comes the difficult part: generating training data so the model can be trained on LIDAR data directly. I will talk about that work, and the improvement in results, in my next post.

Conclusion

In this article, I have presented an initial investigation into applying uNet, a state-of-the-art image segmentation model from the biomedical domain, to a geospatial analysis problem in the environmental domain.

Setting up and running this model was straightforward, thanks to zhixuhao putting his code on GitHub. The most difficult part was getting the environment on my machine right.

The results are looking very encouraging, and I’m keen to see how much better it will work with proper training data.


Peterfitchcsiro

Peter is a data scientist with interests in machine learning and environmental measurement.