How Legacy Automotive Simulation Tools Differ From Metamoto’s Scalable Cloud-Based Simulation Solution

Until now, companies and organizations working on automated vehicle (AV) development have been limited to using hardware-in-the-loop (HiL) style legacy simulation tools. These products, originally developed primarily for vehicle dynamics research, have had basic sensing capabilities added to them. While useful for early prototyping, concept exploration, and HiL bench simulations, they are largely constrained by their underlying architecture and limited operating system support.

These limitations restrict their ability to scale to the level needed for comprehensive coverage-driven verification. By verification, we mean finding and fixing bugs, or undesired behaviors, in the AV system stack before they cause the vehicle to fail or crash. Verification is a never-ending engineering process that, when well executed, makes efficient use of limited resources to find and fix the greatest number of critical problems.

While there is some variation between these legacy tools, all share the following common characteristics:

  1. The need for an expensive GPU-enabled workstation
  2. A reliance on Simulink as the mechanism to integrate third-party code
  3. A sequential execution of test cases
  4. The need for custom scripting to enable test automation
  5. A lack of OS support beyond Windows

Figure 1 shows the general architectural block diagram that these legacy systems use.

Figure 1

To see how these systems differ from our cloud-based simulation service, we’ll use the following example. Let’s assume we want to test an adaptive cruise control (ACC) and automatic lane keeping (LKA) system, similar to Tesla’s Autopilot or GM’s Super Cruise, against a test series consisting of 1,000 discrete cases.

First, we need a reasonably fast GPU-enabled workstation running Windows. We also need software licenses for the simulation tool and Simulink. Together, the hardware and software cost around $45,000.

Using the legacy tool, we load a generic test scenario consisting of a road network, our ego vehicle, other vehicles and pedestrians, signaling devices, and buildings. If the scenario does not exist, we will need to create it. Once the scenario is complete, we then generate a corresponding Simulink file into which we integrate our ACC/LKA algorithm.

Next, we define and create a test matrix capturing all of the variables we want to manipulate from run to run: things like ego and other vehicle speeds, weather, lighting, road markings, etc. Using Excel, we generate a spreadsheet that looks something like this (see Figure 2), with variables in columns and each row representing a discrete test case to be run.

Figure 2

Each row of the spreadsheet must be filled out for every variable we define. Once the test matrix is complete, we return to our test scenario and assign a tag to each variable defined in the matrix, linking the variables in the test matrix to the corresponding variables in the test scenario.
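To make the shape of such a spreadsheet concrete, here is a minimal Python sketch that writes a test matrix of this form as a CSV file. The variable names and values are purely illustrative and not taken from any particular tool; in practice the columns would match the tags defined in the scenario and the matrix would typically be built in Excel as described above.

```python
import csv
import itertools

# Illustrative variables and values only; real columns would match scenario tags.
ego_speeds = [20, 30, 40]            # ego vehicle speed, m/s
lead_speeds = [10, 15, 20]           # other (lead) vehicle speed, m/s
weather = ["clear", "rain", "fog"]
lighting = ["day", "dusk", "night"]

with open("test_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Variables in columns; every subsequent row is one discrete test case.
    writer.writerow(["case_id", "ego_speed", "lead_speed", "weather", "lighting"])
    for case_id, combo in enumerate(
        itertools.product(ego_speeds, lead_speeds, weather, lighting), start=1
    ):
        writer.writerow([case_id, *combo])
```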

We then need to code a custom test automation script that does several things: it reads the test matrix into memory, determines the number of individual test cases to be run, and sets up a looping structure so the test cases can be executed serially. Next, the script sets the variables as defined by the first row of our test matrix and starts the first simulation. It launches Simulink and the legacy simulation tool in co-simulation mode, with the legacy tool generating the virtual world model and virtual sensors and the Simulink model running the algorithm or code under test. On completion, the script writes the test results out to disk, loops to the second row of the matrix, and launches the next simulation. This process continues until all 1,000 individual cases have been run.
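A heavily simplified sketch of such a serial automation script is shown below. The `run_simulation` function is a hypothetical stand-in for the tool-specific call that launches Simulink and the legacy simulator in co-simulation mode; everything else (reading the matrix, looping, writing results) mirrors the steps just described.

```python
import csv
import json
import time

def run_simulation(case: dict) -> dict:
    """Hypothetical stand-in for the tool-specific call that launches Simulink
    and the legacy simulator in co-simulation mode and blocks until the run ends."""
    time.sleep(0.1)  # a real run would take on the order of 322 seconds
    return {"case_id": case["case_id"], "passed": True}

# Read the test matrix into memory and determine how many cases need to run.
with open("test_matrix.csv", newline="") as f:
    cases = list(csv.DictReader(f))
print(f"{len(cases)} test cases to run")

# Serial execution: one simulation at a time, row by row.
results = []
for case in cases:
    results.append(run_simulation(case))
    # Persist results on completion of each run before moving to the next row.
    with open("results.json", "w") as f:
        json.dump(results, f, indent=2)
```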

In a presentation given at the MathWorks Automotive Conference in May 2018 titled “Design and Test Traffic Jam Assist: A Case Study”, the author described using a Simulink-based simulation to fine-tune the parameters for an ACC and LKA application in a manner very similar to the example just described. Eleven tests were run with a total simulation time of 3,541 seconds. The shortest test took 245 seconds and the longest took 398 seconds, yielding an average test time of 322 seconds.

Using this average run time, we would need a little less than 4 days to complete our entire 1,000-case test series, assuming the simulations run continuously for 24 hours a day. Of course, this doesn’t include the time required to create the scenario and generate the Simulink model. It also doesn’t include the time required to integrate the algorithm we want to test, define and populate the test matrix with values, assign tag names, and code the test automation script. Let’s be generous and assume we can do all of this in one day. So every week, we can run 1,000 simulations, and while they’re running we’ll be analyzing the data from the prior run, using some TBD analysis tool.

Assuming 48 working weeks a year, we could theoretically run 48,000 simulations at a cost of about $0.94 per simulation, not including labor.
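The arithmetic behind these figures is easy to verify:

```python
avg_run_s = 322                           # average simulation time from the case study
cases = 1_000

serial_hours = cases * avg_run_s / 3600   # ≈ 89.4 hours of back-to-back simulation
serial_days = serial_hours / 24           # ≈ 3.7 days, i.e. "a little less than 4 days"

runs_per_year = 48 * cases                # 48 working weeks, 1,000 runs per week
cost_per_run = 45_000 / runs_per_year     # ≈ $0.94 per simulation, excluding labor

print(f"{serial_hours:.1f} h, {serial_days:.1f} days, ${cost_per_run:.2f} per run")
```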

How does Metamoto’s Simulation as a Service stack up?

The only hardware needed is a computer with a browser, running the OS of your choice, because everything is accessible through a web-based user interface. You can use your current computer, which means an effective hardware cost of $0. You also need a subscription, at a cost of $1,500 a month or $15,000 a year. A monthly subscription includes 100 simulation machine hours, enough to run our 1,000-case test series, so our hardware and software cost comes to $1,500.

With Designer, we load a generic test scene consisting of a road network, an ego vehicle, and buildings. We add other vehicles, pedestrians, and signaling devices to create a base test scenario. As we did with the legacy tool, we define and create a test matrix capturing all of the variables, but we do it in a very different and much more efficient way.

In Designer, we simply check the box next to each variable we want to use in building our test matrix: ego and other vehicle speeds, weather, lighting, road markings, etc. In Figure 3 below, the variables for time of day, road markings deterioration, clouds and rain have all been checked.

Figure 3

The connection between the algorithm under test and the simulation scenario is made with Docker containers and protobuf APIs. After saving our base scenario, we jump to Director, where we create our test vectors and suites and run our simulations.
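To make that integration step more concrete, here is a purely illustrative Python sketch of the kind of algorithm entry point that could be packaged into such a container. The message classes below are hypothetical stand-ins; the real interface is defined by Metamoto’s protobuf APIs, which are not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for protobuf-generated message classes.
@dataclass
class SensorFrame:
    timestamp_s: float
    lead_vehicle_range_m: float
    lane_offset_m: float

@dataclass
class ControlCommand:
    throttle: float   # 0..1
    brake: float      # 0..1
    steering: float   # radians, positive = left

def step(frame: SensorFrame) -> ControlCommand:
    """The ACC/LKA algorithm under test, called once per simulation tick."""
    throttle = 0.3 if frame.lead_vehicle_range_m > 40.0 else 0.0
    brake = 0.5 if frame.lead_vehicle_range_m < 15.0 else 0.0
    steering = -0.1 * frame.lane_offset_m  # simple proportional lane centering
    return ControlCommand(throttle, brake, steering)

if __name__ == "__main__":
    # In the real service the container exchanges messages with the simulator
    # over the protobuf API; here we just exercise a single tick locally.
    print(step(SensorFrame(timestamp_s=0.0, lead_vehicle_range_m=35.0, lane_offset_m=0.4)))
```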

We parameterize our base scenario using the variables we checked in Designer, setting a minimum and maximum value for each variable along with a step size. We select a testing strategy: exhaustive, random over parameters, edges only, or single case. All of this is done with a single click of the mouse, and then Director automatically creates our test matrix. No test automation scripting or coding is required (see Figure 4).

Figure 4
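As an illustration of what these testing strategies amount to, the sketch below shows how test vectors could be generated from min/max/step ranges. The parameter names and ranges are made up for the example, and Director performs this generation for you.

```python
import itertools
import random

# Illustrative parameters: (minimum, maximum, step size)
params = {
    "ego_speed_mps": (10, 40, 10),
    "rain_intensity": (0.0, 1.0, 0.5),
    "time_of_day_h": (6, 18, 6),
}

def values(lo, hi, step):
    out, v = [], lo
    while v <= hi:
        out.append(round(v, 6))
        v += step
    return out

grid = {name: values(*spec) for name, spec in params.items()}

# Exhaustive: every combination of every stepped value.
exhaustive = [dict(zip(grid, combo)) for combo in itertools.product(*grid.values())]

# Edges only: every combination of each parameter's minimum and maximum.
edges = [dict(zip(params, combo))
         for combo in itertools.product(*[(lo, hi) for lo, hi, _ in params.values()])]

# Random over parameters: a fixed number of uniformly sampled cases.
rng = random.Random(0)
random_cases = [{name: rng.uniform(lo, hi) for name, (lo, hi, _) in params.items()}
                for _ in range(20)]

print(len(exhaustive), "exhaustive,", len(edges), "edge, and", len(random_cases), "random cases")
```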

In only a couple of minutes we’re ready to start our simulations. Director automatically takes care of all parallelization and scheduling of test runs; the simulations are launched and run in parallel. Using the same average simulation time of 322 seconds from the “Design and Test Traffic Jam Assist: A Case Study” presentation referenced earlier, we would need a total of 89.4 simulation machine hours to run the 1,000 individual test cases. Since everything executes in parallel in the cloud, we’ll have all our results in about 5 minutes instead of 4 days. Our simulation capacity scales massively, and we only pay for what we use.
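Again, the numbers are easy to check:

```python
avg_run_s = 322
cases = 1_000

machine_hours = cases * avg_run_s / 3600   # ≈ 89.4 machine hours, within the 100 hours
                                           # included in a monthly subscription
wall_clock_min = avg_run_s / 60            # ≈ 5.4 minutes if all cases run in parallel

print(f"{machine_hours:.1f} machine hours, ~{wall_clock_min:.1f} minutes wall clock")
```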

Test results are stored in the cloud and color-coded so we can quickly focus on tests of interest. Results are viewed using Analyzer, Metamoto’s replay and visual debugger (see Figure 5 and Figure 6).

Figure 5
Figure 6

Using Analyzer, we can synchronize all of the sensor data streams, vehicle dynamics time and/or signal traces, and log file messages in an interactive playback window, giving us the ability to debug our AV stack or code performance with a high degree of granularity (see Figure 7 and Figure 8).

Figure 7
Figure 8

Metamoto can show you what revolutionary, not evolutionary, AV simulation looks like and help you get started. A clear overview of the differences between legacy tools and Metamoto’s scalable cloud-based simulation solution is below (see Figure 9).

To request a product overview, please contact: metamoto@metamoto.com

Figure 9