Synthetic data to develop a trustworthy autonomous driving system | Chapter 13


CHAPTER 13

Author
Hamid Serry, WMG Graduate Trainee, University of Warwick

In our last post, we discussed the methods we would use to shuffle our generated dataset with KITTI, and how we planned to use this to gain further insight into how training performance varies as the ratio of real to generated images changes.

This is the last post describing our weekly work, so we will go through some of the results from the various tests we have run on these datasets.

Overall Results

Over the last couple of weeks, both the sensor-simulated and the rendered image datasets were individually hyperparameter-optimized, aiming to get the most accurate results out of their respective networks.
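The post does not detail the search itself, but the sketch below shows the kind of independent grid search each dataset could receive. The search space and the `train_and_evaluate()` helper are hypothetical stand-ins, not the project's actual tuning setup.

```python
# Minimal sketch of an independent grid search per dataset.
# train_and_evaluate() is a hypothetical stand-in for the real training loop.
import itertools
import random

def train_and_evaluate(dataset_path, learning_rate, batch_size):
    """Stand-in: train a detector on dataset_path and return its validation mAP."""
    return random.random()  # placeholder so the sketch runs end to end

def tune(dataset_path, learning_rates=(1e-4, 5e-4, 1e-3), batch_sizes=(8, 16)):
    """Return the (learning_rate, batch_size) pair with the highest validation mAP."""
    return max(
        itertools.product(learning_rates, batch_sizes),
        key=lambda cfg: train_and_evaluate(dataset_path, *cfg),
    )

# Each dataset is tuned on its own, as described above.
best_render_cfg = tune("datasets/render")
best_sensor_cfg = tune("datasets/sensor_simulated")
```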

Alongside this tuning, the filtering applied during these tests consisted of the minimum bounding-box size in pixels and the maximum occlusion level (as discussed in the 11th post of the series), together with the shuffling of 25%, 50%, 75%, and 100% Anyverse data into KITTI (as discussed last week in the 12th post). Together these made up the testing conditions for the results in Figure 1; a rough sketch of the filtering and mixing steps follows.
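As an illustration only, the sketch below filters annotations by a minimum bounding-box height and a maximum occlusion level, then mixes a synthetic dataset with KITTI at a given ratio. The thresholds, box format, and field names are assumptions made for the example; they are not the project's actual values.

```python
# Illustrative sketch: each dataset is assumed to be a list of
# (image_path, annotations) pairs, where each annotation holds a pixel
# bounding box [x1, y1, x2, y2] and an occlusion level.
import random

MIN_BOX_HEIGHT_PX = 25      # placeholder minimum bounding-box height
MAX_OCCLUSION_LEVEL = 1     # placeholder maximum allowed occlusion level

def filter_annotations(samples):
    """Drop boxes that are too small or too occluded, and images left empty."""
    filtered = []
    for image_path, annotations in samples:
        kept = [
            a for a in annotations
            if (a["box"][3] - a["box"][1]) >= MIN_BOX_HEIGHT_PX
            and a["occlusion"] <= MAX_OCCLUSION_LEVEL
        ]
        if kept:
            filtered.append((image_path, kept))
    return filtered

def mix_datasets(synthetic, kitti, synthetic_ratio, seed=0):
    """Build a shuffled training set with the given fraction of synthetic images,
    keeping the total size equal to the KITTI split (an assumption here)."""
    rng = random.Random(seed)
    n_synth = int(len(kitti) * synthetic_ratio)
    mixed = rng.sample(synthetic, n_synth) + rng.sample(kitti, len(kitti) - n_synth)
    rng.shuffle(mixed)
    return mixed

# e.g. the 25% split discussed in the previous post:
# train_25 = mix_datasets(filter_annotations(sensor_samples), kitti_samples, 0.25)
```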

Figure 1 — Results of Render and Sensor Simulated datasets against the KITTI dataset

Starting from the left, Figure 1 shows the baseline of the KITTI-trained network tested on KITTI in red, followed by the Render and Sensor networks trained and tested on their own datasets respectively.

There are then three batches based on the 25%, 50%, and 75% training shuffles, each in the order: Render shuffle tested on itself, Render shuffle tested on KITTI, Sensor shuffle tested on itself, and Sensor shuffle tested on KITTI. The final two bars represent the Render and Sensor networks trained entirely on generated images and tested on KITTI.

Figure 2 shows a zoomed-in version of Figure 1, which highlights the smaller variations between the tests. The first point to note is that the two end results, from the networks trained fully on generated images, have been cut off because their values were much lower than the other results.

In general, the more generated images are added, the lower the overall mAP. This is because the low mAP of the networks trained purely on generated images heavily drags down the outcome of the mixed data.

Looking into the shuffled batches, however, the sensor-simulated images consistently outperform the rendered images when tested on KITTI, across all three mixing ratios.

In two of the batches, they also outperform the rendered networks tested on the shuffled data itself. The difference is quite substantial: in those two batches, the sensor-simulated shuffle scores around 0.04 mAP higher when tested on KITTI than when tested on its own shuffled data.

Figure 2 — Zoomed-in view of the Figure 1 results, highlighting the smaller variations between tests

This indicates that the sensor-simulated data helps the model generalize, to the point where it performs better on KITTI than on its own data.

Conclusion

Although the results from the networks trained on a fully generated dataset were not ideal, a closer look at the datasets shuffled with KITTI gives some insight into the effect of Anyverse’s Sensor Simulation and into how it could be used, with a more refined dataset, to improve generalized performance when tested against real-life images.

A closer look into why the performance was lower than expected is still required. Possible factors include the minimum pixel sizes, the distances from the camera to the objects of interest, occlusion levels, false positives that are in fact correct detections of features KITTI left unlabelled, and specific classes being poorly represented and detected; one way to begin such a breakdown is sketched below.
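The sketch below groups the ground-truth objects the detector missed by class, pixel size, and occlusion level. The field names, size threshold, and the assumption that detections have already been matched to ground truth are all illustrative, not taken from the project.

```python
# Illustrative sketch: count missed ground-truth objects by class, size bucket,
# and occlusion level. matched_ids is assumed to hold the IDs of ground-truth
# boxes that were matched to a detection in an earlier evaluation step.
from collections import Counter

def summarise_misses(ground_truths, matched_ids, small_height_px=25):
    """Tally unmatched ground-truth objects by (class, size bucket, occlusion)."""
    reasons = Counter()
    for gt in ground_truths:
        if gt["id"] in matched_ids:
            continue
        height_px = gt["box"][3] - gt["box"][1]
        size_bucket = "small" if height_px < small_height_px else "large"
        reasons[(gt["class"], size_bucket, gt["occlusion"])] += 1
    return reasons

# The most common keys in the returned Counter point to the class / size /
# occlusion combinations that account for most of the missed detections.
```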

The work with object detection models is far from over and I hope we can take this analysis and push for more refined results to help break the barrier between real and generated datasets.

Stay tuned for our next post, in which we will do a final wrap-up of the current project, draw our final conclusions, and put together some ideas to extend our research by pushing the limits of synthetic data.

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models and reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, it can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.
