LiDAR Data for Feature Extraction

Prem Vijayakumar
Published in NeST Digital
10 min read · Oct 21, 2022

Introduction

Maps are an underlying necessity in many of our daily activities. They are the backbone of numerous applications that use LBS (location-based services), such as Google Maps, Waze, Uber, Swiggy and Pokemon GO. The intent varies from navigation to make-ready engineering, planning, design, real-time weather information, national security, defense, town planning, disaster management and even gaming. The highlight is the information embedded in these map layers, the ease of use and the visual appeal of a customized interface with a map in the backdrop, compared with wading through tons of data in any other form. Have you ever wanted to create a custom map of features across a large area like a city? If so, the primary step is to collect the required information by surveying the area. No, we're not talking about the traditional survey that demands huge manpower and hours upon hours of manual inspection of the intended features in a particular location; carrying out such a survey would be a tremendous task involving a huge budget. With the introduction of LiDAR technology in surveying, it's altogether a different story.

LiDAR

LiDAR stands for Light Detection And Ranging, and is also known as laser radar or laser altimetry. Similar to sonar and radar, which use sound and radio waves respectively, LiDAR uses the infrared portion of the electromagnetic spectrum to measure distances: it sends laser pulses at an object and measures the reflected pulses. The precise times of emission and reflection are recorded, and, using the constant speed of light, the time difference between them is converted into a distance. With the very accurate position and orientation of the sensor provided by GPS and IMU data, the XYZ coordinate of the reflective surface is calculated. The emitted laser pulses are used to generate point cloud data with millions of elevation points across the AoI (Area of Interest), a highly accurate representation of the real world with a vertical accuracy typically in the range of 3 to 5 cm, achieved with the help of GPS technology.

Image: Point cloud data viewed in a 3D viewing application
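The time-of-flight calculation described above can be sketched in a few lines; the timestamps here are idealized illustrative values, not the actual sensor interface:

```python
# Sketch of time-of-flight ranging: the pulse travels out and back,
# so the one-way range is half the round-trip distance.
C = 299_792_458.0  # speed of light in m/s

def pulse_range_m(t_emit: float, t_return: float) -> float:
    """Distance (m) to the reflecting surface from emission/return times (s)."""
    return C * (t_return - t_emit) / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m.
print(round(pulse_range_m(0.0, 1e-6), 1))  # → 149.9
```

Repeating this for millions of pulses per second, each tagged with the sensor's position and orientation, is what yields the point cloud.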

Global Positioning System (GPS)

The Global Positioning System (GPS) is a satellite-based radio navigation system owned by the United States government and operated by the United States Space Force. It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It provides critical positioning capabilities to military, civil and commercial users around the world. Although the United States government created, controls and maintains GPS, it is freely accessible to anyone with a GPS receiver.

· The Inertial Measurement Unit (IMU), which records the roll, pitch and yaw of the platform (the vehicle or aircraft on which the LiDAR sensors are mounted), is instrumental in making exact positioning of the laser platform possible. IMUs are, however, not perfect and lose precision after a short time.

· A high-grade GPS receiver, which records several types of signals from the GPS satellites, is used to update or reset the INS/IMU every half a second or so.

· GPS positions are recorded by the vehicle or aircraft and also at a ground station with a known position. The ground station provides a correction factor for the GPS positions recorded by the platform.
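The interplay between the drifting IMU and the periodic, ground-station-corrected GPS fix can be illustrated with a deliberately simplified one-dimensional sketch; the drift rate, velocities and correction values below are all made-up numbers, not real sensor characteristics:

```python
# Hypothetical sketch of the GPS/IMU interplay: the IMU dead-reckons
# between fixes, and a GPS fix (already differentially corrected using
# the ground station) resets the accumulated drift.

def integrate_imu(position, velocity, dt):
    # Dead reckoning: a real IMU integrates accelerations and rotations;
    # here we just propagate a velocity plus a small per-step drift error.
    drift = 0.01  # metres of error accumulated per step (illustrative)
    return position + velocity * dt + drift

def apply_gps_fix(gps_position, ground_station_correction):
    # Differential correction derived from a base station at a known location.
    return gps_position + ground_station_correction

position = 0.0
for step in range(50):  # 50 IMU steps at 100 Hz = half a second
    position = integrate_imu(position, velocity=10.0, dt=0.01)

# After ~0.5 s the drifted estimate is replaced by the corrected GPS fix.
position = apply_gps_fix(gps_position=5.0, ground_station_correction=-0.02)
print(position)  # drift from dead reckoning is gone
```

The real fusion is done with filtering (typically a Kalman filter) rather than a hard reset, but the division of labour is the same: the IMU provides high-rate attitude and position, and GPS bounds its drift.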

Mobile and Aerial LiDAR survey

By mounting the LiDAR sensors and equipment on a mobile platform such as an SUV, an accurate 3D model of the intended area can be generated in a relatively short time by driving the vehicle at an optimal speed. They can also be mounted on a helicopter or drone for an aerial survey, producing a 3D point cloud model of the landscape. Whether to use a mobile or an aerial survey depends on the requirement. Mobile surveys are mainly used to collect details of features such as sign posts, utility poles and road features like curbs and pavements, which are accessible from drivable roads, whereas aerial surveys are mostly focused on landscape modelling, vegetation patterns, flood mapping and similar cases where the geographic features are not generally accessible by road.

This cutting-edge remote sensing technology provides an alternative to traditional survey methods, a herculean undertaking involving a great deal of manual labour and effort. The key benefits are the savings in cost and time otherwise spent on manual labour, and the fact that the accuracy of the end product can be verified at any point in time. Traditional surveys are being replaced with LiDAR surveys in most sectors, and the technique is growing in popularity as a fast and effective way to capture 3D data. An aerial LiDAR survey was used by Kerala Railway Development Corporation Limited (KRDCL) for finalizing the alignment of the proposed 531.45 km semi-high-speed railway (SHSR) corridor from Kochuveli to Kasaragod. The entire section was surveyed with LiDAR, which picked up buildings, inhabited areas, forests, bridges, stone benches and bushes along the proposed north-south corridor. The output was used to prepare the complete digital elevation model, digital terrain model, digital surface model, L-sections, C-sections, contours, topographical mapping and vegetation mapping, and the processed geographical data was submitted for the engineering design of the entire length and lateral construction.

Image: Mobile LiDAR survey

Image: Aerial LiDAR survey

Point cloud data

The point cloud dataset generated using LiDAR technology is a faithful representation of the real world, and each individual point carries geographical details: latitude, longitude and elevation (the x, y and z coordinates respectively). Once the anomalies in the three directions that may have crept into the LiDAR data during collection, caused by the pitch, roll and yaw of the collection platform (the vehicle), are removed by calibrating the data, any object in the point cloud can be located and represented with a high level of accuracy. The common LiDAR file formats widely used for feature extraction are .LAS, .LAZ and .POD. These files have a significant size owing to the number of points and the information embedded in their headers. The collected data is therefore segmented into smaller files for ease of loading and handling, using applications like Geo Queue, TerraScan or TopoDoT. The calibrated LiDAR data is then loaded into an application like NCloud or Bentley's MicroStation and used as a backdrop/reference for feature extraction. What exactly feature extraction is will be detailed in the next section.

Image: 2 views of the Point cloud data viewed in NCloud
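The roll/pitch/yaw correction mentioned above boils down to rotating each point by the platform's recorded attitude. A minimal sketch, assuming a Z-Y-X (yaw, then pitch, then roll) rotation order for illustration; production calibration also involves lever arms, boresight angles and timing offsets:

```python
# Minimal attitude-correction sketch: rotate a point by the recorded
# roll, pitch and yaw of the platform. The rotation order here is an
# illustrative assumption, not the exact pipeline of any vendor.
import math

def rotate(point, roll, pitch, yaw):
    x, y, z = point
    # yaw about the Z axis
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    # pitch about the Y axis
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # roll about the X axis
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    return (x, y, z)

# A 90° yaw turns the platform's X axis into the world Y axis.
print(rotate((1.0, 0.0, 0.0), roll=0.0, pitch=0.0, yaw=math.pi / 2))
```

Applying the inverse of the per-pulse attitude to every return is what removes the wobble of the moving platform from the final cloud.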

Feature Extraction with reference to Point cloud data

Feature extraction in GIS refers to the representation of real-world features in a digital map environment using basic geometries: points, lines and polygons. For instance, a school can be represented as a point feature, and related information like primary/secondary, public/private and address can be attached to it as attributes, along with basic location details like latitude and longitude. For two-dimensional data capture, ESRI's ArcGIS is the commonly used application, with satellite imagery and scanned maps as the reference. For 3D data capture, however, Bentley's MicroStation or NCloud is used, as it requires a powerful spatial engine that can effectively manage the rendering of such huge data, with millions of points, during operations like panning and rotating. After loading the point cloud data into the application, feature geometries are captured at the intended locations by snapping onto the point cloud using specific geometry-creation tools. The high-resolution photos collected alongside the LiDAR data are stitched into a 360° view, similar to Google Maps Street View, which serves as an additional reference for arriving at better decisions.

The extent of extraction, in terms of the feature geometries captured or the information embedded in them, obviously depends on the requirements of the end customer. It could be an inventory of existing assets for engineering design/construction, or a more detailed analysis such as loading analysis of utility poles. Whatever the case, manual intervention in the form of feature extraction is needed to make the point cloud data more useful. The end result is a custom digital map with the required information that can be converted to different data formats. The common formats for the end deliverables are .GDB and .DWG, which are compatible with ArcGIS and AutoCAD respectively.
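An extracted feature is essentially a geometry plus attributes, which maps directly onto interchange formats. As a hedged sketch, here is what one captured point feature could look like as GeoJSON (one common exchange format alongside .GDB and .DWG); the coordinate and attribute values are invented for illustration:

```python
# Illustrative sketch: an extracted point feature (geometry + attributes)
# serialized as GeoJSON. Attribute names and values are assumptions,
# not any project's actual schema.
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        # longitude, latitude, elevation of the snapped point-cloud location
        "coordinates": [76.9366, 8.5241, 12.4],
    },
    "properties": {
        "feature_type": "school",
        "category": "public",
        "address": "Example Street",  # illustrative attribute values
    },
}

print(json.dumps(feature, indent=2))
```

The same geometry-plus-attributes structure is what a geodatabase feature class or a DWG entity with extended data carries, just in different containers.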

Image: Geodatabase, the end deliverable of a utility extraction project

NCloud: A platform for 3D feature extraction

NCloud is a solution customized by NeST Digital for 3D feature extraction using point cloud data. It is built on CloudCompare, an open-source platform developed primarily in C++ for viewing LiDAR data. By developing a host of custom tools in C#/.NET for feature extraction, and using an SQL/PostgreSQL database in the back end to store the spatial data of the captured geometries and their attributes, the GIS application team built a complete solution for 3D extraction. Combined with the numerous quality-check tools developed over a period of time, NCloud provides a solid platform for 3D extraction in a large-scale production environment. It serves as an alternative to proprietary software like Bentley's MicroStation, eliminating the operational licensing cost and improving the probability of winning new project bids.

Some of the major functionalities built in specifically for the utility pole extraction module are listed below.

· A tree-structure hierarchy that maintains a parent-child relation, with a pole as the parent feature and all its attachments (crossarms, streetlights, insulators etc.) as child features, using pole_id as the primary key.

· Capture of the HoA (Height of Attachment) of all feature geometries, based on exactly where each geometry is snapped onto the point cloud.

· Workflows for data capture complying with complex data model requirements, data validation and translation in a single environment.

· Support for a centralized database with seamless multi-user editing capabilities.

· Faster rendering and display of large, colorized point cloud files such as LAS, compressed LAZ and other industry-standard formats.

· Implementation of admin and job manager tools for profile management, splitting, job allocation and status tracking.

· Point cloud editing tools such as clipping along the AoI, changing display settings and changing view perspectives.

· Multi-user editing by engineers/QA analysts/delivery team.

· Customized tools for feature extraction, editing and attribute update based on each project-specific schema.

· Auto extraction and update of 3D feature coordinates; calculation of feature dimensions such as lengths, heights, angles and bearings, and all relative measurements.

· QA validation checks to flag issues w.r.t feature geometries, relationships, attributes and relative offsets.

· Data export facility to geodatabase/JSON or any other client specific format.

· RANSAC algorithms for auto extraction of feature taper and lean, shapes and other measurements.

· ANSI specifications for auto update and validation of pole class, material type and other characteristics.

· Scalable architecture to suit varied data collection requirements from one company to another.
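The pole-as-parent data model and the HoA-based QA checks from the list above can be sketched roughly as follows; the class and field names are assumptions for illustration, not the actual NCloud schema:

```python
# Illustrative parent-child model: a pole is the parent record (pole_id
# as the primary key) and each attachment (crossarm, streetlight,
# insulator, ...) is a child record storing its HoA.
from dataclasses import dataclass, field

@dataclass
class Attachment:
    kind: str      # e.g. "crossarm", "streetlight", "insulator"
    hoa_m: float   # Height of Attachment, taken from the snapped point

@dataclass
class Pole:
    pole_id: str   # primary key linking parent and child features
    height_m: float
    attachments: list = field(default_factory=list)

    def qa_issues(self):
        # Simple QA validation of relative offsets: flag any attachment
        # whose HoA places it above the pole top.
        return [a.kind for a in self.attachments if a.hoa_m > self.height_m]

pole = Pole(pole_id="P-1042", height_m=12.0)
pole.attachments.append(Attachment("crossarm", 10.5))
pole.attachments.append(Attachment("streetlight", 12.8))  # suspicious HoA
print(pole.qa_issues())  # → ['streetlight']
```

In a production system the same relation lives in the back-end database, which is what enables the multi-user editing and export to geodatabase/JSON mentioned above.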

Image: NCloud interface with utility pole features
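To make the RANSAC bullet above concrete, here is a hedged sketch of a RANSAC-style lean estimate for a pole: repeatedly sample two points, fit the line through them, count how many points lie near that line, and keep the best-supported direction. The thresholds, iteration count and synthetic data are illustrative assumptions:

```python
# RANSAC-style sketch of estimating a pole's lean from its points.
import math
import random

def ransac_lean_deg(points, iterations=200, threshold=0.03, seed=0):
    rng = random.Random(seed)
    best_dir, best_inliers = None, -1
    for _ in range(iterations):
        (x1, y1, z1), (x2, y2, z2) = rng.sample(points, 2)
        d = (x2 - x1, y2 - y1, z2 - z1)
        norm = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
        if norm < 1e-9:
            continue
        d = (d[0] / norm, d[1] / norm, d[2] / norm)
        inliers = 0
        for (x, y, z) in points:
            v = (x - x1, y - y1, z - z1)
            t = v[0] * d[0] + v[1] * d[1] + v[2] * d[2]
            # perpendicular distance from the point to the candidate line
            dist = math.sqrt(sum((v[i] - t * d[i]) ** 2 for i in range(3)))
            if dist < threshold:
                inliers += 1
        if inliers > best_inliers:
            best_inliers, best_dir = inliers, d
    # Lean = angle between the fitted axis and the vertical (Z) axis.
    return math.degrees(math.acos(min(1.0, abs(best_dir[2]))))

# Synthetic pole: points along an axis leaning ~2 degrees from vertical,
# with a little positional noise.
lean = math.radians(2.0)
noise = random.Random(1)
pts = [(math.sin(lean) * h + noise.uniform(-0.005, 0.005),
        noise.uniform(-0.005, 0.005),
        math.cos(lean) * h)
       for h in (i * 0.1 for i in range(100))]
print(f"estimated lean: {ransac_lean_deg(pts):.1f} degrees")
```

The sampling makes the fit robust to outliers such as vegetation or wire returns mixed into the pole's points, which is the reason RANSAC is favoured over a plain least-squares fit here.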

Case Study

Back in 2016, a major telecom company and ISP giant was planning to roll out a 1 Gbps high-speed fiber network across major cities in the US. The scope of the project involved developing a seamless 3D GIS database of the utility pole network from high-accuracy digital LiDAR data, to support the pole loading and clearance analysis required for FTTH (Fiber To The Home) licensing. Loading and clearance analysis of utility poles requires hundreds of measurements and attributes per pole, including locational details and other quantitative and qualitative information on the poles and their attachments. Conventional field-level investigations and digital photogrammetric workflows do not provide feasible solutions for such time-bound projects. NeST, along with its partner in the US, proposed the LiDAR-based solution and was responsible for the 3D LiDAR collection along the entire fiber network routes.

The project consisted of multipronged operations: collection of remote sensing data through mobile LiDAR systems; data capture of pole and attachment details (spans, crossarms, insulators, power equipment, streetlights, guys/anchors, midspan/service drops etc.) from the point cloud; auto updates of different clearance parameters; pole loading analysis and MR design; and the final submission for approval.

The solution developed by NeST involved schema design and the development of custom data models for each client based on their specific requirements; application development for workflow management and data capture from the LiDAR point cloud; tools for auto update and validation of quantitative parameters; tools for pole framing based on electric transmission standards; QA/QC tools; and the final database migration to GIS/Oracle/SQL Server databases in the client repository. Multi-tier QC mechanisms, supported by validation tools, were adopted at each stage of production to meet the rigorous accuracy specs necessitated by the pole clearance and loading standards. The final outputs were delivered on a fiber-hut basis (a fiber hut is a distinct geographic area identified for providing the FTTH services) for each city/district. In total, around 50K utility poles and their associated attachments were captured as part of this project.

Image: Utility pole features extracted using NCloud

Credits

Haridas D from the GIS team contributed the storyline and structure of this blog, which I used as a base.

References

1. https://www.propelleraero.com/blog/drone-surveying-misconceptions-lidar-vs-photogrammetry/

2. https://www.thehindu.com/news/national/kerala/processing-of-lidar-survey-data-begins/article30795623.ece
