Automotive Simulation Meets AI

aiMotive Team
aiMotive
Published in
3 min read · Sep 24, 2024

aiSim General Gaussian Splatting renderer: a versatile approach to high-quality rendering

Welcome to the second edition of our newsletter: Automotive Simulation Meets AI. Read Zoltan Hortsin’s blog about aiSim’s General Gaussian Splatting renderer. For the first edition about neural reconstruction in AD simulation, click here. If you like this blog and enjoy reading technical texts on AI, neural rendering, and simulation, consider subscribing to this newsletter!

Simulating automotive cameras

In the world of automated driving simulation, achieving high-quality virtual sensor output is a key challenge. Every architecture and implementation involves compromises between quality, performance, and feature set. aiSim is no different, but we focused on making the right compromises: by integrating neural rendering early in our sensor simulation pipeline, we extended aiSim's capabilities while preserving as many of its original features as possible.

Looking at the current academic state of neural rendering, every existing solution would impose too many limitations to keep the virtual world useful, so we had to build something different and more tightly integrated. aiSim's General Gaussian Splatting Renderer addresses these challenges and provides a powerful solution that combines speed, flexibility, and exceptional visual fidelity.

We concluded that the original Gaussian splatting solution could not handle wide-angle cameras, one of the most common camera types in automotive simulation, so we had to find a way to overcome this limitation.

Overcoming limitations

The original algorithm's approach to projecting Gaussians introduces limitations that prevent faithful sensor simulation. They stem from an approximation error: the rasterizer linearizes the camera projection around each Gaussian's center, and this error grows significantly large when simulating wide-angle cameras.
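A minimal sketch of that error, assuming the first-order (affine) projection approximation used by the original splatting rasterizer and an ideal pinhole camera (the scene points and numbers below are illustrative, not aiSim data):

```python
import numpy as np

# Sketch: project a point near a Gaussian's center both exactly and with a
# first-order Taylor (affine) approximation linearized at the center. The
# linearization error grows with the angle from the optical axis, which is
# why the approximation degrades for wide-angle cameras.

def project(p):
    """Ideal pinhole projection of a camera-space point to the image plane."""
    return p[:2] / p[2]

def affine_project(p, center):
    """First-order Taylor expansion of `project` around `center`."""
    x, y, z = center
    J = np.array([[1 / z, 0.0, -x / z**2],
                  [0.0, 1 / z, -y / z**2]])  # Jacobian of the pinhole projection
    return project(center) + J @ (p - center)

offset = np.array([0.05, 0.0, -0.05])  # small displacement, like a Gaussian's extent
errors = []
for angle_deg in (5, 40, 80):          # angle between the point and the optical axis
    a = np.radians(angle_deg)
    center = np.array([np.sin(a), 0.0, np.cos(a)])  # unit-distance point
    err = np.linalg.norm(project(center + offset) - affine_project(center + offset, center))
    errors.append(err)
    print(f"{angle_deg:2d} deg off-axis: linearization error = {err:.4f}")
```

Near the optical axis the two projections agree closely; toward the periphery of a wide field of view the gap becomes visible on screen.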

On the left is the original solution, which cannot consistently generate images from six cameras — on the right, aiMotive’s solution, which eliminates this problem and can project a consistent picture

This projection approximation affects not only camera sensors but also other ray-trace-based simulations (e.g., LiDAR, radar). The inability to support other sensor modalities is one of the biggest issues with most neural rendering solutions when applied to ADAS/AD simulation.
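To see why a ray-based formulation helps other modalities, consider querying a 3D Gaussian directly with a LiDAR ray instead of projecting it to an image. This is a hedged sketch of the standard closed form for the density peak along a ray, not aiSim's implementation:

```python
import numpy as np

# For a ray o + t*d and a Gaussian with mean mu and covariance Sigma,
# the density along the ray is maximal at
#   t* = d^T Sigma^-1 (mu - o) / (d^T Sigma^-1 d),
# which yields a candidate range return without any camera projection.

def ray_gaussian_peak(o, d, mu, sigma):
    """Return (t*, unnormalized density at t*) for ray o + t*d."""
    sigma_inv = np.linalg.inv(sigma)
    t = d @ sigma_inv @ (mu - o) / (d @ sigma_inv @ d)
    x = o + t * d
    density = np.exp(-0.5 * (x - mu) @ sigma_inv @ (x - mu))
    return t, density

o = np.zeros(3)                   # sensor origin
d = np.array([0.0, 0.0, 1.0])     # a ray straight ahead
mu = np.array([0.1, 0.0, 5.0])    # a Gaussian slightly off the ray, 5 m away
sigma = np.diag([0.2, 0.2, 0.2])  # isotropic covariance

t_star, response = ray_gaussian_peak(o, d, mu, sigma)
print(f"peak range ~ {t_star:.2f} m, response {response:.3f}")
```

A renderer that evaluates Gaussians this way can serve cameras, LiDARs, and radars from the same scene representation.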

To address this, we have rethought the splatting rendering solution and rebuilt our algorithm from scratch. Our solution removes the limitation described above and flawlessly assembles distorted images from a wide range of virtual cameras.
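For intuition on what "distorted images from various virtual cameras" requires, here is a minimal sketch assuming an equidistant fisheye lens model (an illustrative model and made-up intrinsics, not aiSim's actual camera stack):

```python
import numpy as np

# An equidistant fisheye maps the off-axis angle theta directly to image
# radius r = f * theta, so a point nearly 90 degrees off-axis still lands
# at a finite, well-defined pixel, unlike a linearized pinhole projection.

def fisheye_equidistant(p, f=500.0, cx=640.0, cy=480.0):
    """Project camera-space point p with an equidistant fisheye model."""
    x, y, z = p
    r_xy = np.hypot(x, y)
    if r_xy < 1e-12:
        return np.array([cx, cy])     # point on the optical axis
    theta = np.arctan2(r_xy, z)       # angle from the optical axis
    scale = f * theta / r_xy          # enforce r = f * theta radially
    return np.array([cx + scale * x, cy + scale * y])

# A point 85 degrees off-axis: far outside a pinhole's usable field of
# view, but it projects cleanly under the fisheye model.
a = np.radians(85.0)
uv = fisheye_equidistant(np.array([np.sin(a), 0.0, np.cos(a)]))
print(uv)
```

A general splatting renderer has to evaluate Gaussians consistently under nonlinear mappings like this one, rather than a single global linearization.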

Performance and consistency

aiMotive's General Gaussian Splatting Renderer maintains performance levels comparable to existing rasterization solutions, making it possible to simulate high-end sensor setups with multiple cameras, even in Hardware-in-the-Loop setups. Thanks to the generality of the algorithm, you consistently get matching results from ray-trace-based sensor modalities, such as LiDARs and radars.

This effectively means you don't have to sacrifice runtime performance: the renderer remains fast enough to run at real-time frame rates.

There’s more…

Additionally, our renderer lets you move the camera freely and use different positions or sensor setups in your simulated scenario without unpredictable artifacts or glitches. It lets you get up close and personal with intricate details on all kinds of objects and surfaces. The range of applications extends even further: the algorithm can also be used in physics simulation or even for surface reconstruction.


Our well-structured data recording and flexible rendering pipeline provide a solid basis for simulating physics-based camera, LiDAR, and radar sensors without compromise. We'll dive deeper in our next blog post: if you're interested in how this solution scales to large scenes and how it reproduces fine detail with better performance and quality, don't miss it. In the meantime, sign up for our newsletter to receive our latest articles. For more details, visit our website or email us.



aiMotive’s 220-strong team develops a suite of technologies to enable AI-based automated driving solutions built to increase road safety around the world.