Automating simulation for safer self-driving

Written by Ferenc Pintér and Alexandre Engelstein

aiMotive Team
aiMotive
16 min read · Aug 30, 2019

--

Originally published on AImotive Insights, May 10, 2019

Abstract

The paper details how automated testing with simulation tools can accelerate the development of automated driving solutions while also making them safer. The importance of simulation is showcased by examining points of interest in AImotive’s development pipeline. First, while detailing the demands of simulation for autonomous driving, the need for a comprehensive content library in simulation is stressed. Second, the uses and limitations of simulated training data for neural networks are touched upon, followed by how simulated scenarios can and should be defined based on real-world situations, alongside functional safety engineering. Finally, two case studies provide insight into areas where aiSim has already proven its ability to solve problems and accelerate the internal development of AImotive’s aiDrive self-driving stack.

Keywords: simulation, scenario testing, self-driving, training data, automated development pipeline, test data collection, regulating autonomous technologies

Introduction

Bringing automotive solutions to customers around the globe is a complex task. One that involves years of research and development, various engineering departments, and verification and validation efforts. As the complexity of automotive systems has increased, so has the complexity of their development processes. The multiplication of tools has led to different tools being used for development and testing than for verification and validation. In many cases, the selected toolsets were not created to be used in parallel, and vital resources are diverted from development to ensure their interoperability. This means that the tools used by different teams get in the way, limiting R&D efforts rather than accelerating them.

For AImotive, the solution has become simulation, a vital technology proving itself in the new use case of automated driving. While the automotive industry has relied on simulation for a wide range of tests, its use for verification and validation processes remains challenging. To be truly efficient, simulation should be seen not as a simple tool but as a defining element of the development process. One that can be used for verification and validation.

aiSim has been developed in-house by AImotive to serve this purpose. The simulator engine was built from the ground up for autonomous vehicle development, and a robust backend enables automated scenario testing and data collection. In this whitepaper, we will examine how simulation is incorporated into the AImotive development process and how it can solve the new challenges automated driving development teams face.

Automated testing powers autonomous development

Over the last eighteen months, the automated driving industry has recognized that simulation is not only a tool to accelerate development, but the only approach to achieve safe self-driving within viable financial limits and timeframes. However, several stakeholders remain at odds over the proper use of simulation, weighing its benefits and limitations. Such players do not want to run the risk of either being too optimistic about simulation technologies and compromising on safety, or being too conservative and lagging behind their competitors in increasing the safety of their vehicles through automation. A balance has to be found, and companies with an understanding of the complete self-driving challenge are best positioned to find it.

Commercial aviation is a leading example in the widespread use of simulation. The industry continues to boast significantly fewer injuries or fatalities per mile travelled than road travel. Due to the advanced simulation techniques that have been employed in aviation for decades, there are situations in which simulation is accepted as a complete proxy for real-world verification and validation. If a bug is fixed and tested in a standardized simulator, it can be deployed commercially. It is our belief that the automotive industry should learn from aviation and strive to maximize the use of simulation. However, continuous real-world verification and validation of self-driving solutions is needed due to the nature of public road travel, and because there are always multiple pilots in an airplane, while future autonomous vehicles will have no drivers.

Closed loop training process

Simulation is at the core of the AImotive development pipeline and safely pushes our automated driving technologies towards productization.

By creating aiSim and incorporating it into the development of aiDrive, our self-driving software, we have created a development pipeline aided by automated verification through simulation, data gathering and easy content creation. At different sections of this pipeline aiSim serves as a platform for verification and validation. As a result, aiSim has proven itself as a viable tool to serve the complete development process in pre-development projects. In the following, we will detail the key characteristics of simulation for the development of automated driving, the problems it can solve, and examine how we correlate simulation with real world tests.

Filling the virtual world

Development teams would have little trouble solving self-driving if autonomous cars were the only actors on the road. We know all too well that this isn’t the case. Cars share streets with pedestrians, cyclists, unpredictable human drivers, and at times roadworks, wild animals, fallen trees or any number of obstacles. For self-driving development, a simulator must be able to reproduce the diversity of our world, focusing on the demands of automated driving solutions.

As always, the devil lies in the details: what elements of reality are pertinent to the verification and validation of automated driving solutions? To answer this, two major challenges have to be tackled: realistic modeling of real-world phenomena, and comprehensive variability of world representations.

Rendering and all sensor implementations should be physically-based, so engineering teams are able to measure the performance of perception algorithms and simulate sensor characteristics.

As a result, the simulator should provide the self-driving software inputs that represent those of the real world as closely as possible — in areas which matter to detection algorithms.

Pedestrians cross the road in changing conditions in aiSim’s virtual Las Vegas. Simulating the diversity of the real world is one of the great challenges of simulation for the development of autonomous driving.

Second, the simulator must ensure deterministic operation. In other words, the same software version, tested in the same scenario, should always produce the same result. Determinism ensures repeatability, a fundamental requirement of regression-free development. Finally, the perception and recognition capabilities of automated driving systems are fundamentally connected to the ability of their sensors to perceive the environment reliably. Thus, when testing, it is vital to simulate, for example, the effects of blinding sun glare on a camera when driving due west on a highway at sunset.
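To make the repeatability requirement concrete, here is a minimal Python sketch of the kind of check it enables. The run_scenario function and the recorded state fields are hypothetical placeholders rather than aiSim's actual interface; the point is simply that two runs of the same build on the same scenario must produce identical traces, so any difference between runs can be attributed to a code change rather than to simulator noise.

```python
import hashlib
import json

def run_scenario(scenario_id: str, software_build: str, seed: int) -> list[dict]:
    """Hypothetical stand-in for launching a simulator run.

    Returns the sequence of self-driving software states recorded during
    the scenario; this placeholder returns a fixed trace so the sketch runs.
    """
    return [{"t": t, "lane_offset_m": 0.12, "speed_mps": 27.8} for t in range(3)]

def trace_digest(states: list[dict]) -> str:
    """Hash a state trace so two runs can be compared byte for byte."""
    return hashlib.sha256(json.dumps(states, sort_keys=True).encode()).hexdigest()

# The same build, scenario and seed must always produce an identical trace;
# if it does not, the simulator (not the software change) is the suspect.
run_a = run_scenario("highway_curve_01", software_build="rev-1234", seed=42)
run_b = run_scenario("highway_curve_01", software_build="rev-1234", seed=42)
assert trace_digest(run_a) == trace_digest(run_b), "non-deterministic run detected"
```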

The variability of real-world features encompasses not only all kinds of weather but also different static and dynamic local rules and assets. Around the world, different jurisdictions use varying road markings, traffic signs and even traffic lights. Drivers and pedestrians behave differently in different cultures. Not to mention that a system deployed in Australia must be able to recognize a kangaroo, while it will never encounter one in Europe. Human drivers can intuitively adapt to different locales, whereas self-driving software is sensitive to these differences and has to be programmed to recognize them.

Because the occurrence of extreme scenarios is too rare to test efficiently in the real world, the only viable alternative is simulation. On the one hand, simulators should contain a wide range of modeled real-world environments to increase correlation between simulation and real-world tests. These should include urban areas and highways from different geographies. On the other hand, modeled locations can also increase testing safety by allowing teams to test new functionalities in virtual environments that mimic the areas in which road tests will be conducted.

Simulators can be used for verification and validation, so they should offer procedural environment generation for fast scenario creation, and high-precision map imports for safety validation.

Building on our experience from developing the aiDrive software stack, the points above were key considerations in the development of aiSim. To ensure our simulation team was not swamped by the sheer number of factors to be considered, we elected to follow an incremental development path. As a result, aiSim’s customers, including aiDrive, influence the development roadmap, bringing vital areas into focus. Whenever an error or disengagement that cannot be recreated in aiSim is encountered, new features are added to the simulator to enable simulated testing of that case. However, all developments adhere to the basic requirements of determinism and physically-based rendering. Through a very similar process, our pipeline verifies the simulator itself: our team constantly monitors the correlation between real-world self-driving software states and software states from simulated scenarios run in different versions of the simulator, to ensure the required degree of correlation is achieved.

Simulated training data supports fast-paced prototyping

Once the virtual environment is available and can be populated with the desired actors, simulation-aided development can begin. Artificial intelligence, in the form of neural networks, is a core component of many automated driving systems. Due to the ability of these networks to abstract patterns from known data, they are more robust than traditional computer vision or formal algorithms for several use cases.

This flexibility comes at a cost. Networks are trained on enormous amounts of pre-prepared data, data that has to be gathered and annotated before being fed to the neural network. Real-world data collection and processing is time-consuming; for development purposes, however, data created by the simulator can be used instead.

Overnight, a simulator can generate tens of thousands of images that are automatically annotated during the rendering process, meaning countless work hours can be saved.
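As a rough illustration of why such a batch is cheap to produce, the Python sketch below loops over frames and writes each image next to its automatically generated labels. The render_frame function and the label fields are hypothetical stand-ins for whatever a simulator's rendering pass actually exposes.

```python
import json
import random
from pathlib import Path

def render_frame(frame_id: int) -> tuple[bytes, list[dict]]:
    """Hypothetical renderer returning an image and its ground-truth labels.

    In a simulator, bounding boxes and class labels fall out of the rendering
    pass for free; this placeholder fakes them so the sketch is runnable.
    """
    image = bytes(64)  # placeholder pixel data
    labels = [{
        "frame": frame_id,
        "class": random.choice(["car", "pedestrian", "cyclist"]),
        "bbox": [10, 20, 50, 80],  # x, y, width, height in pixels
    }]
    return image, labels

out_dir = Path("simulated_dataset")
out_dir.mkdir(exist_ok=True)

# An overnight batch job is just a bigger version of this loop.
for frame_id in range(1000):
    image, labels = render_frame(frame_id)
    (out_dir / f"{frame_id:06d}.png").write_bytes(image)
    (out_dir / f"{frame_id:06d}.json").write_text(json.dumps(labels))
```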

The performance of neural networks that have been trained on simulated data can be tested in virtual scenarios. This allows for rapid iteration on various solutions, algorithms and sensor setups, drastically cutting the time it takes to get a new functionality out onto public roads.

Simulated data can be used to train neural networks to a certain maturity level, but regardless of how realistic a simulator is, it remains a different environment from the real world. For automated driving, even slight discrepancies can pose a safety risk. Hence, before public road testing commences with our test vehicles, all networks are retrained on data collected in the real world.

However, there are use cases where simulated data can be used for training before real-world deployment. aiSim supports the creation of mixed reality data. Thus, the simulator can be used to modify images taken from the real world, similarly to augmented reality systems. For example, if a development team needs images of deer in different situations, these can easily be generated as long as the light sources and shadows of the original image are known. Doing this is as simple as placing a silver sphere, monitored by a camera, next to the sensor performing the data collection, and building a 3D proxy model of the environment that contains only the main shadow-casting features. Relying on mixed reality data allows researchers to train networks for rare situations with limited data collection possibilities, thus increasing the safety of self-driving solutions.
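The sketch below illustrates the compositing step of such mixed reality augmentation in Python with NumPy, under the assumption that the simulator has already rendered the inserted object with matching lighting and produced an alpha mask and a shadow mask from the proxy model; all array and function names here are illustrative, not part of aiSim.

```python
import numpy as np

def composite_mixed_reality(real_frame: np.ndarray,
                            rendered_rgb: np.ndarray,
                            rendered_alpha: np.ndarray,
                            shadow_mask: np.ndarray,
                            shadow_strength: float = 0.4) -> np.ndarray:
    """Blend a rendered object (e.g. a deer) into a real camera frame.

    real_frame:     HxWx3 real image, float values in [0, 1]
    rendered_rgb:   HxWx3 object layer rendered with matching light sources
    rendered_alpha: HxW   object coverage mask in [0, 1]
    shadow_mask:    HxW   shadow cast by the proxy 3D model of the scene
    """
    # Darken the real image where the inserted object casts a shadow.
    shaded = real_frame * (1.0 - shadow_strength * shadow_mask[..., None])
    # Alpha-blend the rendered object on top of the shaded background.
    alpha = rendered_alpha[..., None]
    return rendered_rgb * alpha + shaded * (1.0 - alpha)

# Tiny synthetic example so the sketch runs on its own.
h, w = 4, 4
frame = np.full((h, w, 3), 0.8)
obj = np.zeros((h, w, 3))
obj[1:3, 1:3] = [0.4, 0.3, 0.2]
alpha = np.zeros((h, w))
alpha[1:3, 1:3] = 1.0
shadow = np.zeros((h, w))
shadow[3, 1:3] = 1.0
augmented = composite_mixed_reality(frame, obj, alpha, shadow)
```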

Scenario testing — Making virtual miles meaningful

One of the major advantages of simulation compared to testing in the real world is the ability to focus on interesting miles, thus saving a considerable amount of time. Rather than having cars circulating and waiting to encounter unique situations in the real world, interesting test cases — so-called scenarios — can be created in the virtual world.

A scenario is a predefined traffic situation with designated pass criteria and the goal of measuring the self-driving system’s ability to handle certain tasks. For example, when developing a lane keeping assist, engineers will measure the software’s ability to recognize lane markings in the distance and to actively control the car within lane limits, as well as the smoothness of the ride. To do this, a number of scenarios will be created to stress test this function. A simple scenario in this case would be selecting a challenging curvy section of highway. The system passes the scenario if it successfully stays in the designated lane for 1 km. This scenario can then be permuted with different traffic densities, weather conditions, time of day, road banking and curvature, or even sensor error characteristics. The ability to repeat tests countless times and measure the positive or negative effects code changes have on the system drastically accelerates development times.
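A minimal sketch of how such a scenario and its permutations might be expressed is given below in Python; the ScenarioVariant structure, the parameter values and the pass criterion are illustrative assumptions, not aiSim's actual scenario format.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ScenarioVariant:
    """One permutation of the lane keeping scenario described above."""
    traffic_density: str
    weather: str
    time_of_day: str
    pass_distance_m: int = 1000  # the system must stay in lane for 1 km

def passes(variant: ScenarioVariant, in_lane_distance_m: float) -> bool:
    """Pass criterion: the car stayed in its lane for the required distance."""
    return in_lane_distance_m >= variant.pass_distance_m

# Permute the base scenario across the dimensions mentioned in the text.
variants = [
    ScenarioVariant(density, weather, time_of_day)
    for density, weather, time_of_day in product(
        ["low", "medium", "high"],     # traffic density
        ["clear", "rain", "fog"],      # weather
        ["noon", "sunset", "night"],   # time of day
    )
]
print(f"{len(variants)} permutations of the lane keeping scenario")
print("first variant passes:", passes(variants[0], in_lane_distance_m=1042.0))
```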

aiSim includes a standalone Scenario Editor to build new scenarios and templates, which can also be written in a procedural language. We follow three approaches to create a varied database of scenarios for testing and development purposes. The first is the one described above: scenarios defined to verify a selected feature or functionality, based on functional safety engineering and a well-defined set of requirements. The second is to create scenarios based on dangerous situations and corner cases deduced from the Euro NCAP and NHTSA road accident databases. These prepare the software stack for situations common on public roads but not directly assessed by the tests of a certain functionality. Finally, scenarios are taken directly from the real world, based on the situations AImotive’s fleet of test vehicles encounters and cannot handle.

The aiSim Scenario Editor aids the quick creation of new scenarios with a visual interface and several customizable settings. Scenarios are built based on one of the three methods detailed above.

The first two methods of scenario creation are fundamental parts of the R&D process.

Still, it is the ability to take real-world disengagements and create virtual proxies to help in solving them that makes aiSim the center of our development pipeline, and more than a simple tool.

It is enough for one of our test vehicles to encounter a certain situation only once in the real world; based on that single encounter, our teams can begin implementing fixes and testing them in the simulator. Only a deterministic simulator can serve this purpose efficiently, as it guarantees that the differences between runs are down to the changes made to the self-driving software only. Taking situations from the real world and fixing them in simulation can also be more abstract, with simulated scenarios serving as proxies for real-world situations. If a pattern can be identified in the disengagements of the autonomous system, then scenarios that mimic that pattern can easily be created. As a result, simulation can be used to solve wider challenges, not just unique situations, as detailed in one of the case studies below.

Automated testing

Once all ingredients, both technical and content-related, are available, simulator runs can begin. Simulator testing happens on multiple levels and at several points of the development cycle. All changes to the code base of our self-driving system (and new developments for the simulator) are tested against a selected group of scenarios to ensure regression-free development. These pre-commit tests happen after code reviews and are completely automated, leaving no room for manual intervention.

Pre-commit tests are followed by module tests, in which only the functionality being tested runs within the self-driving stack. All other functionalities are replaced by ground truth data provided by the simulator. This allows our teams to verify that their module meets key performance and stability criteria without having to examine other modules as possible causes of a failure. Finally, the whole software stack is tested through a series of different scenarios created in line with the processes detailed above.
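The sketch below illustrates the module test idea in Python: only the module under test processes sensor data, while everything else is substituted with ground truth from the simulator. The GroundTruth structure, the lane_detection_under_test function and the tolerance are hypothetical placeholders, not the real aiDrive or aiSim interfaces.

```python
from dataclasses import dataclass

@dataclass
class GroundTruth:
    """Ground truth the simulator exposes alongside each rendered frame."""
    lane_center_offset_m: float
    ego_speed_mps: float

def lane_detection_under_test(camera_frame: bytes) -> float:
    """The module being benchmarked: estimates lateral offset from the lane center.

    Placeholder so the sketch runs; a real test would load the candidate build.
    """
    return 0.05

def module_test(camera_frame: bytes, truth: GroundTruth,
                tolerance_m: float = 0.2) -> bool:
    # Only the module under test runs on sensor data; every other input
    # (here, the reference offset) comes straight from ground truth.
    estimate = lane_detection_under_test(camera_frame)
    return abs(estimate - truth.lane_center_offset_m) <= tolerance_m

truth = GroundTruth(lane_center_offset_m=0.0, ego_speed_mps=27.0)
assert module_test(camera_frame=bytes(64), truth=truth)
```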

Tests concentrating on benchmarking a certain functionality are augmented by nightly and weekly test cycles that measure the overall stability and performance of the aiDrive stack against a large library of different scenarios.

Test processes are completely automated and handled by the aiSim backend developer toolset.

However, developers can also access the simulator on-demand to run test sets that fit their specific needs. To learn more about how aiSim supports the safe development of self-driving technologies read our white paper entitled Ensuring Safe Self-Driving.

Through these automated processes, thousands of scenarios are run within an hour. The aiSim backend collects enormous amounts of data. This includes not only the results of all scenario runs, but any information vital for benchmarking a self-driving solution, including sensor inputs, software states, actuator commands, etc. This cloud-based database is accessible to our development teams and partners. Beyond visualizing important statistics such as pass rates, various filters help engineers sift through the data to get meaningful insight into the causes of failed scenarios.
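As a minimal sketch of this kind of filtering, the Python snippet below computes overall and filtered pass rates and ranks scenarios by failure count from a flat list of result records; the record fields are an assumed, illustrative schema rather than the aiSim backend's actual data model.

```python
from collections import Counter

# Hypothetical flat result records collected by the backend after a test cycle.
results = [
    {"scenario": "highway_curve_01", "weather": "rain", "passed": True},
    {"scenario": "highway_curve_01", "weather": "fog", "passed": False},
    {"scenario": "cut_in_02", "weather": "rain", "passed": False},
    {"scenario": "cut_in_02", "weather": "clear", "passed": True},
]

def pass_rate(records: list[dict]) -> float:
    return sum(r["passed"] for r in records) / len(records) if records else 0.0

# Overall pass rate, and the same statistic filtered down to rainy runs.
print(f"overall pass rate: {pass_rate(results):.0%}")
rainy = [r for r in results if r["weather"] == "rain"]
print(f"rain-only pass rate: {pass_rate(rainy):.0%}")

# Which scenarios fail most often? A simple starting point for root-cause work.
failures = Counter(r["scenario"] for r in results if not r["passed"])
print(failures.most_common())
```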

Case studies

Through creating not only self-driving solutions but the complete development toolchain, AImotive has secured unique know-how. Our technologies mutually support each other, with aiSim not only aiding the testing and development of aiDrive, but the self-driving software stack also pushing the simulator’s functionality forward. The technical requirements listed above were all taken into consideration during aiSim development. In the following, we provide two case studies of how aiSim was integrated into our development processes. In the first, we examine how a completely new module was created and adapted to the demands of a certain location, utilizing aiSim for data generation, research and development, and finally verification. The second details how simulation can serve as a proxy in solving problems encountered in the real world. Both use cases are based on the ability to compare the self-driving software’s reaction to simulated data versus real-world data. If the state of the self-driving software is the same in both, then the simulation can be considered a reliable proxy for the real world. Thus, aiSim drastically accelerated our development efforts.

Lane detection module

Accelerating the development of new functionalities is one of aiSim’s core use cases. One of the first instances of AImotive utilizing the technology in this way was the creation of the new aiDrive lane detection module. All research and development was supported by aiSim. Iterations of the neural networks used for the module were trained on batches of simulator-generated training data, while their performance was continuously benchmarked on a purpose-built scenario set. Through this approach our team completed several iterations on the software stack within a matter of weeks. Only the most mature solution was then run on public roads, following extensive scenario testing.

Following the aiSim-aided workflow, a mature new lane detection module was deployed in our test vehicles only six weeks after R&D connected to the project began.

Compared to the months it could have taken to deploy new code following our previous approach, aiSim can be considered to have more than halved the time to public road testing.

However, when deploying the module in real-world testing, our team encountered another problem. Following a series of spikes and anomalies in sensor inputs, the road model would begin to collapse. Disengagements of this kind were only recorded on Highway 101 in California, but following analysis of the available data we were able to discern that they were due to sensor oscillation caused by the quality of the roadway. A collection of both location-specific and generic scenarios was created to solve the problem. Results showed that a positive change in the generic scenarios translated to the location-specific scenarios and then to real-world tests, leading to the camera oscillation correction solution incorporated into aiDrive. The solution to what began as a location-specific problem is now deployed in all aiDrive builds to support safe operation on bumpy surfaces.

Mirroring the real world: aiDrive driving in the real world on the left, and in the simulated recreation of the situation on the right.

Disengagement tracking

As detailed above, several scenarios in aiSim are created based on real situations AImotive test cars encounter on the road. AImotive’s fleet is constantly being tested in France, the US and Hungary. All tests are fully recorded and synced with our servers. Similarly to how our engineers can comb through data from simulated tests, an array of visualization options and filters supports the processing of this data. Based on these records, and on direct input from automotive test engineers in our test vehicles, disengagement patterns and spikes can be identified.

A key challenge our team faced was a spike in disengagements of the “catching up” type. On reviewing the available data and sensor feeds, our team discerned that these were caused by closing in on white trucks seen against the background of a white, cloudy sky. Based on the data, it was also obvious that the phenomenon was not location-specific, as disengagements of this type happened at all testing locations.

aiDrive R&D was able to take this information and begin working on a solution to the problem. Simultaneously, the simulation team created a set of scenarios that could serve as proxies for similar situations. The solutions under development were continually benchmarked against these scenarios until an acceptable level of maturity was achieved and the number of “catching up”-type disengagements dropped significantly in simulated tests.

Once R&D and Safety teams were satisfied with its performance, the new version was deployed at our test locations.

The results of simulated tests were validated on public roads. The ratio of “catching up”-type disengagements dropped to 15% of all errors, compared to the 58% seen previously.

The same approach can be employed for all non-location-specific errors encountered. In these cases, it is enough to create proxy scenarios that mimic the situation encountered in the real world. Naturally, these proxies must first be tested against the original version of the software to ensure that they fail, and that the cause of these failed runs is the same as that of the real-world disengagements. This can be done by comparing the state of the self-driving software leading up to and after the disengagement in the real world and in the simulated scenario, respectively.
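A minimal sketch of such a comparison is given below in Python. It assumes a single per-timestep signal is available from both the real-world log and the simulated run (here, the estimated distance to the lead vehicle, a purely illustrative choice) and checks that the two traces agree within a tolerance over the window leading up to the disengagement.

```python
def traces_match(real_trace: list[float], sim_trace: list[float],
                 window: int = 10, tolerance: float = 0.15) -> bool:
    """Compare the last `window` samples of a state signal before disengagement.

    Returns True if the proxy scenario reproduces the real-world signal
    within a relative tolerance, sample by sample.
    """
    real_tail, sim_tail = real_trace[-window:], sim_trace[-window:]
    if len(real_tail) != len(sim_tail):
        return False
    return all(
        abs(r - s) <= tolerance * max(abs(r), 1e-6)
        for r, s in zip(real_tail, sim_tail)
    )

# Illustrative data: estimated distance to the lead vehicle (metres) in the
# seconds before a "catching up" disengagement, real-world log vs. proxy run.
real = [42.0, 39.5, 37.1, 34.8, 32.4, 30.1, 27.9, 25.6, 23.2, 21.0]
sim = [41.5, 39.0, 36.8, 34.5, 32.0, 29.8, 27.5, 25.3, 23.0, 20.8]
print("proxy reproduces the failure:", traces_match(real, sim))
```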

Conclusion

aiSim is a driving force behind AImotive’s self-driving development efforts. Due to our understanding of the many levels of the autonomous challenge, aiSim has grown beyond being just a simulator and has become a powerful automation tool in our development pipeline, incorporating data gathering and processing and supporting real-world testing. It is not only our belief but proven in practice that closely integrated test-driven development and simulation enables quick root-cause analysis and function updates. Meanwhile, simulation also serves as a safety barrier, allowing only sufficiently mature self-driving software versions onto public roads. Ultimately, relying on deterministic, physically-based, automotive-grade simulation makes autonomous systems more robust, while accelerating the time to market of automated driving solutions.

Our aiSim-aided pipeline has been successfully proven in R&D and pre-development, while our team continues to explore how the workflow and toolchain will have to adapt to production solutions. We believe the automotive industry should learn from aviation and move towards accepting simulation as a viable proxy for real-world testing. Naturally, this is a standardization process that requires the cooperation not only of industry stakeholders but also of regulatory bodies.

It will be such standards that can address questions pertaining to the link between reality and proxy scenarios; the required quality of simulation; and to what degree simulation testing can be correlated to the real world. Standards and regulations will also have to define the number of meaningful miles a solution has to cover before being allowed into production, what constitutes a meaningful mile, and when a system is mature enough for public use. AImotive welcomes such standardization efforts and we will continue to cooperate with our automotive and regulatory partners to support the creation of efficient simulation tools for the advancement of a safer automotive future.

--


aiMotive’s 220-strong team develops a suite of technologies to enable AI-based automated driving solutions built to increase road safety around the world.