Sensor Fusion

Rafael Castro
Published in Applaudo Tech Blog
Mar 10, 2022

An introductory reading on sensor fusion and some of its common applications, with a focus on Lidar and on how point clouds are sensed and generated.

Simulated highway (Use case: Self-Driving Cars)

Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually.

PCL documentation: https://pointclouds.org/

In a simulated highway environment, we can easily render point clouds, mostly with C++ (although several Python libraries for working with PCDs have also been developed). In the image above, note how the car senses its surroundings through lidar, measuring the distances between the car and the obstacles with each ray that is cast.

Lidar rays cast in a simulated highway
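To build intuition for what those rays produce, here is a minimal Python sketch of ray casting against a hypothetical 2D scene; this is not the PCL highway simulator itself, and every obstacle position and parameter below is made up for the example:

```python
import numpy as np

# Hypothetical 2D scene: obstacles modeled as circles (x, y, radius).
obstacles = [(8.0, 2.0, 1.0), (12.0, -3.0, 1.5)]
MAX_RANGE = 30.0  # lidar maximum range in meters

def cast_ray(angle):
    """March along one ray from the origin; return the first hit point, or None."""
    direction = np.array([np.cos(angle), np.sin(angle)])
    for r in np.arange(0.1, MAX_RANGE, 0.05):  # coarse ray marching
        x, y = r * direction
        for cx, cy, radius in obstacles:
            if np.hypot(x - cx, y - cy) <= radius:
                return (x, y)  # the ray hit an obstacle surface
    return None  # no return within range

# Sweep 360 rays around the sensor to build a 2D point cloud.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
cloud = np.array([p for a in angles if (p := cast_ray(a)) is not None])
print(cloud.shape)  # (number_of_hits, 2)
```

A real lidar does the same thing in 3D with many more beams, which is where the dense point clouds shown above come from.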

For this article, no code is required (although you can find interesting repositories below with useful examples); this is a review whose sole purpose is to unlock and envision the endless possibilities that Lidar and these libraries can bring to fields such as self-driving cars, robotics, flying cars, remote exploration, environmental monitoring, and more. Still, we strongly recommend exploring the open-source documentation mentioned, including the Python bindings; here is one interesting repository and article detailing this and referencing other similar articles.

Please refer to the lidar_processing notebook

Graphical representation of data is key: representing real-world shapes makes it much easier to identify patterns, outliers, trends, and so on.

Python PCDs Representation
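For instance, a PCD file can be loaded and inspected in a few lines of Python using the open-source Open3D library (one of the Python libraries mentioned above; the file name here is a placeholder, substitute any PCD file you have):

```python
import numpy as np
import open3d as o3d

# "highway.pcd" is a placeholder path for any point cloud file on disk.
pcd = o3d.io.read_point_cloud("highway.pcd")
points = np.asarray(pcd.points)           # (N, 3) array of XYZ coordinates
print(points.shape, points.mean(axis=0))  # quick sanity check of the cloud

# Open an interactive viewer to inspect shapes, outliers, and trends.
o3d.visualization.draw_geometries([pcd])
```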

The data sources for a fusion process are not required to originate from identical sensors (indirect sensor fusion); for example, in an autonomous system we very often find a combination of cameras (including stereo cameras), GPS, radar, Lidar, sonar, infrared, magnetic sensors, and others.

Self-Driving Car Multiple Sensors (Lidar practical Use-case)

Sensor fusion methods and libraries for both Python and C++ are very sophisticated nowadays. Some basic methods and algorithms to conceptualize include the Kalman filter:

Kalman Filter representation
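As a rough sketch of the intuition only (a one-dimensional filter estimating a static state, not the full multivariate predict/update equations used in real trackers), the filter repeatedly blends its prediction with each new measurement, weighted by their uncertainties:

```python
import numpy as np

def kalman_1d(measurements, meas_var, process_var):
    """Minimal 1D Kalman filter: fuse noisy measurements of a static state."""
    x, p = 0.0, 1e6  # initial estimate and (deliberately huge) uncertainty
    estimates = []
    for z in measurements:
        p += process_var          # predict: uncertainty grows over time
        k = p / (p + meas_var)    # Kalman gain: how much to trust the data
        x += k * (z - x)          # update: pull estimate toward measurement
        p *= (1 - k)              # update: shrink the uncertainty
        estimates.append(x)
    return np.array(estimates)

# Example: noisy range readings of an obstacle that is really 10 m away.
np.random.seed(0)
readings = 10.0 + np.random.normal(0, 0.5, size=50)
print(kalman_1d(readings, meas_var=0.25, process_var=1e-4)[-1])  # ~10.0
```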

Summarizing the fusion process:

Data level — data-level (or early) fusion aims to fuse raw data from multiple sources, representing fusion at the lowest level of abstraction. It is the most common sensor fusion technique in many fields of application. Data-level fusion algorithms usually combine multiple homogeneous sources of sensory data to achieve more accurate and synthetic readings. When portable devices are employed, data compression becomes an important factor, since collecting raw information from multiple sources generates huge information spaces that can become an issue in terms of memory or communication bandwidth for portable systems. Data-level fusion tends to generate big input spaces, which slow down the decision-making procedure. Also, data-level fusion often cannot handle incomplete measurements: if one sensor modality becomes useless due to malfunction, breakdown, or other reasons, the whole system may produce ambiguous outcomes.

Feature level — features represent information computed on board by each sensing node. These features are then sent to a fusion node to feed the fusion algorithm. This procedure generates smaller information spaces than data-level fusion, which is better in terms of computational load. Obviously, it is important to properly select the features on which classification procedures are defined: choosing the most efficient feature set should be the main concern in method design. Feature-selection algorithms that properly detect correlated features and feature subsets improve recognition accuracy, but large training sets are usually required to find the most significant feature subset.

Decision level — decision-level (or late) fusion is the procedure of selecting a hypothesis from a set of hypotheses generated by the individual (usually weaker) decisions of multiple nodes. It is the highest level of abstraction and uses information that has already been elaborated through preliminary data-level or feature-level processing. The main goal of decision fusion is to use a meta-level classifier, while data from the nodes are preprocessed by extracting features from them. Typically, decision-level sensor fusion is used in classification and recognition activities, and the two most common approaches are majority voting and Naive Bayes. Advantages of decision-level fusion include communication bandwidth savings and improved decision accuracy. It also allows the combination of heterogeneous sensors. (A toy sketch contrasting the three levels follows the reference below.)

Reference: Chen, Chen; Jafari, Roozbeh; Kehtarnavaz, Nasser (2015). “A survey of depth and inertial sensor fusion for human action recognition”. Multimedia Tools and Applications.
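To make the three levels concrete, here is a minimal, hypothetical sketch contrasting them on toy temperature readings; the sensor values, threshold, and fusion rules are all illustrative inventions, not from the paper above:

```python
import numpy as np

# Hypothetical readings from three homogeneous temperature sensors.
s1, s2, s3 = np.array([20.1, 20.3]), np.array([19.8, 20.0]), np.array([25.0, 20.2])

# Data level: fuse the raw readings directly (here, a plain average).
raw_fused = np.mean([s1, s2, s3], axis=0)

# Feature level: each node computes a compact feature first; only the
# features travel to the fusion node, shrinking the input space.
features = [x.mean() for x in (s1, s2, s3)]
feature_fused = np.median(features)

# Decision level: each node makes its own weak decision ("too hot?"),
# then decisions are fused by majority voting.
decisions = [x.mean() > 21.0 for x in (s1, s2, s3)]
majority = sum(decisions) > len(decisions) / 2

print(raw_fused, feature_fused, majority)
```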

Therefore, as sensors “sense” by collecting data from the real world, the system interprets this data with the help of algorithms, plans based on those outcomes, acts following the resulting roadmap, and repeats, framing the process as Sense, Perceive, Plan, Act, and Repeat.
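A minimal, hypothetical skeleton of that loop might look like the following; every function here is a stub standing in for real sensor drivers, perception stacks, and planners:

```python
def sense():
    """Collect raw data from all sensors (stub)."""
    return {"lidar_points": [], "camera_frame": None}

def perceive(raw):
    """Fuse and interpret raw data into a world model (stub)."""
    return {"obstacles": [], "lanes": []}

def plan(world):
    """Decide the next maneuver from the world model (stub)."""
    return ["keep_lane"]

def act(maneuvers):
    """Send commands to the actuators (stub)."""
    print("executing:", maneuvers)

for _ in range(3):  # real systems repeat this loop at a fixed rate
    act(plan(perceive(sense())))
```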

Conclusion

Sensor fusion has remarkable benefits, such as reduced system latency, improved system reliability, augmented data quality, real-time data sharing, decreased state uncertainty, and more.

As the cost of sensors has decreased in recent years, and the availability of other types of sensors and other data/information has increased, we foresee a bright future for practical applications, depending solely on the requirements of each particular use case and on the sensor shortcomings to be addressed in the implementation. Moreover, the cheaper Lidar sensors become, the broader the range of applications that can use these incredible edge-tech capabilities at full capacity.

Self-Driving Cars Sensor Fusion Real World Applications

Other interesting readings for Lidar use cases:
