Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

Michael DeKort
Sep 30 · 14 min read

It is gratifying to see the autonomous vehicle simulation industry moving toward what I have been passionately saying for several years. It is now becoming common parlance that proper simulation is the only way to mitigate the debilitating issues with public shadow/safety driving so we can get to a legitimate L4 in our lifetime without going bankrupt and harming people for no reason. This simulation needs to be a “digital twin” of the real world, especially as it pertains to the physics. Some companies have excellent visuals or vehicle models, and others have some associated physics for tire/road interactions and other finite details. The problem is that not one company offers anything close to a complete and proper system, and no combination of current commercial products gets you close. Worst of all, the automotive industry assumes the technology in place is the best that can currently be delivered. That, in turn, propagates the belief that creating a true real-world digital twin isn’t possible. This is simply not true IF you use the right technology.

What is the Best Approach?

Proper simulation (Software-in-the-Loop, or SIL) should and can replace 99.9% of public shadow and safety driving. There are millions of scenarios that must be learned, which means that shadow or safety driving can be significantly reduced but not eliminated. Shadow driving does provide critical data and intent testing, and it supplies key data to inform and validate the simulation. The problem with shadow driving is not entirely one of safety but of time and cost. You simply cannot use it for most real-world development because you cannot drive and redrive, stumble and re-stumble on enough scenarios, miles, etc., to get close to finishing in a reasonable time frame. Nor can you afford it in time and money. In my opinion, safety driving should be completely eliminated where it involves the public or non-protected, non-professional drivers. This means that when it is used, in the real world or on test tracks, it is because simulation has been proven unable to do what needs to be done. And when shadow driving is used, it is made safe, not unlike a movie set. This, by default, will eliminate the need for safety drivers (along with their passengers and the public) to be human guinea pigs and kamikaze drivers in learning accident scenarios involving damage or death. The practice of experiencing thousands of accident scenarios thousands of times over, expecting the safety driver not to disengage, and having the accident, thus causing thousands of injuries and deaths, is a needless, unethical, and immoral practice. (For more on why public shadow driving is untenable, see my articles below.)

Why does using Proper Simulation Matter?

This is an issue of matching levels of fidelity to use cases or scenarios. When the right level of fidelity is not used, there will be development flaws. When complex scenarios are run, especially where the performance curves or attributes of any model are exceeded, the Planning system will have a flawed understanding of some aspect or aspects of the real world. The result will be the creation of a flawed plan. In far too many scenarios, the outcome will be causing real-world accidents, not avoiding them, or making them worse than they need be. This will usually be caused by some combination of braking, acceleration, or maneuvering being improperly timed or of the wrong magnitude.

The simulation systems being utilized today are adequate if you are working on general or non-complex real-world development or testing. However, in order to run complex scenarios, the depth, breadth, and fidelity of the simulation are critical. The Autonomous Vehicle (AV) makers will need to keep track of every model’s capabilities for every scenario to make sure none is exceeded. If AV makers do not do this, especially in complex scenarios, the end result will be false confidence in the AV system. Keep in mind that machine learning does not infer well, not nearly as well as a human; we have a lifetime of learning, especially for object detection. Additionally, perception systems right now are far too prone to error. The famous stop-sign-tape test is an example. This means that development and testing must be extremely location-, environment-, and object-specific. You could test thousands of scenarios for a common road pattern in one place, encounter just a couple of object differences elsewhere, like clothing patterns, and wind up with errors if you assume you do not have to repeat most of that testing in most other locations. This raises the scenario variations into the millions.

What is Proper Simulation?

Real Time — This is the ability of the system to process data and math models fast enough, and in the right order, so that no task of any significance is missed or late compared to the real world. This needs to extend to the most complex and math-intensive scenarios. And by math-intensive, I mean every model must be mathematically precise as well as able to function properly. That could be thousands of models running at a time. This is where gaming architectures, which many simulation companies use, have significant flaws even on the best of computers. The best way to architect these systems is to build a deterministic, time-based, and structured architecture where every task or model can be run at any rate and in any order. Most systems out there are non-deterministic and just let things run. Their makers will say modern computers are so fast that the structure I described is not needed. I believe this is wrong. (At an event I attended, Jose De Oliveira, Unity Technology’s engineering manager for autonomy, spoke after me and confirmed my point of view.) The deterministic systems that do exist run everything at a single specified rate. You can see accommodations made for these issues in gaming. The play box is less than 60 square miles, and physics or math are avoided either by eliminating them, as when you can walk through trees, or by dumbing them down as much as possible.
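To make the contrast concrete, here is a minimal sketch (all names and rates hypothetical, not any vendor's actual API) of the deterministic, time-stepped scheduling described above: every model registers with a fixed rate and runs in a fixed order each frame, so a simulation run is reproducible frame for frame, unlike a free-running game loop.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Model:
    name: str
    rate_hz: int                    # how often this model must run
    step: Callable[[float], None]   # advances the model by dt seconds

@dataclass
class DeterministicScheduler:
    base_rate_hz: int                       # fastest frame rate in the system
    models: List[Model] = field(default_factory=list)
    frame: int = 0

    def add(self, model: Model) -> None:
        # Every model rate must divide the base rate evenly so its
        # execution slots land on exact frame boundaries.
        assert self.base_rate_hz % model.rate_hz == 0
        self.models.append(model)

    def tick(self) -> None:
        # Run the models that are due, in registration order -- the
        # same order every frame, which is what makes runs repeatable.
        for m in self.models:
            divisor = self.base_rate_hz // m.rate_hz
            if self.frame % divisor == 0:
                m.step(1.0 / m.rate_hz)
        self.frame += 1

# Example: a 100 Hz vehicle-dynamics model alongside a 25 Hz radar model
log = []
sched = DeterministicScheduler(base_rate_hz=100)
sched.add(Model("dynamics", 100, lambda dt: log.append("dyn")))
sched.add(Model("radar", 25, lambda dt: log.append("radar")))
for _ in range(4):
    sched.tick()
# Over four 100 Hz frames, dynamics runs every frame and radar only
# on the frame where its 25 Hz slot falls.
```

Because each model's execution slots are fixed in advance, the same inputs always produce the same frame-by-frame behavior regardless of how loaded the host machine is, which is the property a non-deterministic "just let it run" loop cannot guarantee.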

Model Types — The critical model types are the ego vehicle, tires, roads, sensors, fixed and other moving objects, and the environment. Each of these needs to be virtually exactly like the target it is simulating. (Geo-specific vs geo-typical, for example. The best technology can get this down to under five cm of positional accuracy.) This includes not just visual aspects, where applicable, but physical capabilities. You need to simulate or model the exact vehicle, sensor, object, etc., not something like it or a reasonable facsimile. As I said before, this is important because machine learning does not infer well. As I stated above, this means the same road patterns will have to be developed and tested in a wide array of locations, at different times of day, in different weather, with different signage, etc. All of this must be modeled in meticulous detail, both visually and physically.

Take radar, for example. You must simulate not only the ego radar but how the world and other systems interact with it. Every other radar or system emitting RF that would cause clutter or interference must be properly modeled, as must the way every radar’s signal is affected by its environment. The reason for this is that the ego model’s received signal must be an accurate model of the culmination of all of these factors at any physical location in the environment or scenario. This is where I would like to address vehicle models. The Original Equipment Manufacturers (OEMs) and the simulation and simulator companies have been creating detailed vehicle models for some time. However, I would caution against assuming they are precise enough in all scenarios, especially complex ones, as the simulation companies have likely not instrumented these vehicles in all the relevant scenarios required here to ensure the performance curves are accurate. And keep in mind this is not just a function of the vehicle design data or specs. The model structure itself, or the overall system being used, could be flawed.
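A toy calculation (all gains, powers, and ranges are illustrative values I chose, not measured data) shows why other emitters cannot be left out of the ego radar's received signal: the wanted target return falls off with the fourth power of range, while direct interference from another radar falls off only with the square of range, so a distant interferer can still swamp the return.

```python
import math

def radar_return_w(pt_w, gain, wavelength_m, rcs_m2, range_m):
    """Two-way target return power from the classic radar range equation."""
    return (pt_w * gain**2 * wavelength_m**2 * rcs_m2) / \
           ((4 * math.pi)**3 * range_m**4)

def interference_w(pt_w, gain_tx, gain_rx, wavelength_m, range_m):
    """One-way direct power from another emitter (Friis transmission equation)."""
    return (pt_w * gain_tx * gain_rx * wavelength_m**2) / \
           ((4 * math.pi * range_m)**2)

# Ego radar: 10 W at 77 GHz (automotive band), a 10 m^2 target car at 50 m
wl = 3e8 / 77e9
signal = radar_return_w(10.0, 100.0, wl, 10.0, 50.0)

# One oncoming radar of the same class, 100 m away, pointed roughly at us
noise = interference_w(10.0, 100.0, 100.0, wl, 100.0)

sir_db = 10 * math.log10(signal / noise)
# The one-way interferer dominates the two-way return by tens of dB,
# so a simulation that omits other emitters hands the perception stack
# a received signal that is badly wrong.
```

The exact numbers are beside the point; the point is the scaling. Any receiver model that only sees its own transmitted energy is structurally unable to reproduce the signal environment the real sensor will face.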

Another example is friction coefficients as they apply to the road surface and tires. Each tire experiences a different part of the road, and those parts can contain segmented friction values based on the surface composition. The surface could be dry or wet, be painted, have oil or gravel on it, or any combination thereof. And that combination can occur in varying segments under the tread pattern. The models need to account for this properly. Good models can divide these areas into segments of less than a centimeter. (Puddles are an extension of this. As in the real world, a puddle can cause you to hydroplane or increase drag so much that it pulls your vehicle toward the side where the puddle is.)
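The segmented-friction idea above can be sketched as follows. This is a minimal illustration with hypothetical surface types and representative (not measured) friction coefficients: the contact patch is divided into sub-centimeter cells, each cell samples the surface beneath it, and the effective friction is the load-weighted combination of all cells.

```python
from typing import List

# Representative friction coefficients per surface type (illustrative
# values only -- a real model would use measured, condition-dependent data)
MU = {"dry_asphalt": 0.9, "wet_paint": 0.4, "oil": 0.2, "gravel": 0.6}

def effective_mu(cells: List[str], loads_n: List[float]) -> float:
    """Load-weighted friction coefficient over the contact-patch cells.

    cells   -- surface type under each sub-centimeter cell
    loads_n -- normal load carried by each cell, in newtons
    """
    total = sum(loads_n)
    return sum(MU[s] * n for s, n in zip(cells, loads_n)) / total

# A patch straddling a wet painted lane marking: half the cells on dry
# asphalt, half on wet paint, with a uniform load distribution.
cells = ["dry_asphalt"] * 50 + ["wet_paint"] * 50
loads = [40.0] * 100                 # ~4 kN total on this tire
mu = effective_mu(cells, loads)      # lands midway between 0.9 and 0.4
max_brake_force_n = mu * sum(loads)  # available braking force at this tire
```

A single whole-patch coefficient would have to pick one of the two surfaces and be wrong either way; the per-cell version also captures the asymmetry that yaws the vehicle when the split is left/right rather than front/back.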

Finally, there is the issue of moving-object or traffic modeling. Most of these systems are based on traffic tools that were not meant to provide the capabilities required here, most notably, moving objects naturally. Vehicles that slide around as if on ice skates, rather than turning their wheels, present a flawed view to sensors. All objects need to move like the objects they are modeling.

Full-motion Driver-in-the-Loop (DIL) Simulator — When the real-world vehicle is replaced by simulation, you must use one of these devices to properly develop and test the system. This is true whether reinforcement or imitation learning is used. The reason is that in several classes of scenarios, humans cannot drive properly, nor evaluate proper driving, without motion cues or pressure on their bodies or inner ears. The easiest to understand is loss of traction. It is simply not possible to drive or evaluate a system driving in the snow without feeling what is going on. Other examples are complex maneuvering, steep grades, running over something, or even bumping or being bumped by something else. Since it is desirable to run simulations faster than real time, the use of this device will be minimal. However, the scenarios for which it should be used are critical.

(An example of proper real time where ground vehicles are concerned is where there is never more than 16 msec of latency from a driver action to the visual, control, or motion system of a full-motion simulator.)
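One simple way to reason about that 16 msec figure is as an end-to-end budget that every pipeline stage must fit inside. The sketch below uses hypothetical stage names and illustrative worst-case timings (not figures from any real simulator) to show the bookkeeping: sum the worst-case latency of each stage from driver input to motion/visual output and compare it to the budget.

```python
# End-to-end motion-cueing latency budget, in milliseconds, matching
# the ~16 ms driver-action-to-output figure cited above.
BUDGET_MS = 16.0

# Worst-case per-stage latencies (illustrative values for this sketch)
stages = {
    "input_sampling": 1.0,     # read driver control inputs
    "vehicle_dynamics": 4.0,   # physics / vehicle-model update
    "image_generation": 8.0,   # render the visual frame
    "motion_cueing": 2.5,      # washout filter + actuator command
}

total_ms = sum(stages.values())
within_budget = total_ms <= BUDGET_MS
slack_ms = BUDGET_MS - total_ms   # headroom left for jitter or extra load
```

The useful property of budgeting against worst-case stage times, rather than averages, is that a frame cannot silently blow the deadline in a complex, heavily loaded scenario, which is exactly when the motion cues matter most.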

Ask for Proof

Unfortunately, here is where the industry is at regarding simulation:

· The simulation companies do not know what capabilities are necessary or possible.

· The simulation companies are not utilizing the right or best technology by choice.

Whether unintentional or not, this results in misleading product information being conveyed. That will result in false confidence, flawed Planning, and real-world errors and tragedies. Given this, it is imperative that proof of model fidelity and real-time performance in a wide array of scenarios be provided, reviewed, and confirmed. I know of no simulation company that currently provides this data. (Which raises the question: why not, if the data is accurate and complete?) This information is critical both where you want or need to use a true digital twin and where you do not believe you have to, but want to ensure that decision has no negative impacts.

Some of the ways to validate the models include using the source data like instrumented performance data for vehicle performance, technical data from vendors, High Definition (HD) mapping data, Hardware-in-the-Loop (HIL) testing, satellite data, and, most importantly, data gathered from shadow driving.

Cloud-Based Systems

Cloud-based systems can be treated like local instances with regard to the points I have made here, except for where the DIL simulator is involved. Getting that latency down to 16 msec is probably not going to be possible, especially in complex and loaded scenarios.

DoD/Aerospace Technology is the Solution

First, let me address what is usually the immediate reaction upon hearing that DoD technology should be used: the belief that DoD does not have to deal with the same complexity as the commercial AV world. That belief is incorrect. The DoD autonomous ground vehicle folks not only have to deal with the same public domain and scenarios as the commercial side, but they must also deal with vehicles driving off the roads on purpose, aircraft, people shooting at each other, and electronic warfare. (That is where the enemy tries to jam, spoof, or overload sensors.) Trust me, the military has it much tougher.

This brings me to the resolution. The fact is that DoD has had the technology to resolve all these issues for well over a decade. And in most cases, like sensors, the target systems are far more complex than anything available in the AV domain today or probably ever will be. Proper and effective, not perfect, digital twins can be created for every model type needed here. And their real-time and federated model architectures can handle any scenario required, independent of complexity, model detail and math, or loading. Now, having said this, clearly the effort here is not easy and will take a lot of work. This technology and its data need to be tailored to meet the specific needs and targets of this industry. Keep in mind that what we are talking about here is the impossible vs the possible, the doable vs the undoable. The current development and testing approaches are not remotely doable in many lifetimes. This makes the value proposition of making the switch brutally obvious from a time, cost, and liability point of view.

(With regard to the computing power needed, the architecture being used is so efficient and performs so well that this does not require any special computing assets. In most cases, it will run on the gaming-type systems being used now. This includes the ability to run much faster than real time compared to systems that do not use the proper architecture.)

My Solution and Conflict of Interest

I have created a company, Dactle LLC, to utilize DoD/aerospace simulation technology and provide a complete solution. Dactle will provide all of the data, scenarios, and associated models and simulation needed to get to a legitimate L4/5: a full, across-the-board, all-model-type digital twin. (With proof, of course.) We won’t throw inadequate simulation or scenario tools over the wall and expect you to redirect huge quantities of your personnel and focus to set up and use these support tools. We will take care of the entire turn-key simulation and scenario solution so you can concentrate on the already insanely difficult task of building or validating an autonomous vehicle. If anyone would like to see a demo, please let me know.

Now, regarding the clear conflict of interest: it would seem that I am the only person in this industry who passionately pronounces what all the problems are and, by some miracle, has the only complete and accurate solution. Given that, pushback would seem warranted. However, allow me to give you a little history before you push back.

When I started this journey a couple of years ago, I was well aware that this conflict could exist. I tried to avoid it by reaching out to the simulation companies in this industry to make them aware of the issues and how to remedy them. Unfortunately, I ran into two different responses. The IT/gaming/Silicon Valley folks ignored me. The vehicle manufacturing simulation folks paid attention but deferred, saying that when their customers figure out the flaws exist and pay for the fixes, they will redo their systems or make a new version. Keep in mind that customers will likely only know there is an issue when real-world tragedies occur. And even that will probably require several tragedies before the pattern is recognized. As these responses were unacceptable, I decided to create a company, reach out to DoD/aerospace, find the right partner, and take this on myself.

(Why wouldn’t the simulation companies in this space upgrade their systems to use the best technology if they know their systems have significant capability gaps? They realize they would have to tell their customers, stakeholders, and financial backers that their technology is significantly flawed and that a major rewrite of their entire system is required. Something they had no idea should be done, could be done, or how to get done. Then they would have to replace all the systems out there, or sell a second version and maintain both of them perpetually.)

The other information I would like you to consider regarding my motives is my pedigree. I have always been demonstrably mission-focused, particularly where safety and security are involved. That started with my being an anti-submarine warfare electronics technician in the U.S. Navy, then a communications officer and the leading communications engineer for the counter-terrorism group at the U.S. State Department. Then on to almost 15 years at Lockheed Martin, where I worked on aircraft simulation, C4ISR, and the Aegis Weapon System as a systems engineer or project manager. And finally, as senior program manager and software engineering manager for all of NORAD after 9/11. Most specifically, though, I was a post-9/11 whistleblower. As the C4ISR systems engineer for a major portion of the U.S. Coast Guard upgrade program called Deepwater, I raised several critical safety and security issues. This culminated in my being a lead witness at a congressional hearing, receiving the IEEE Barus Ethics Award, being featured on 60 Minutes, appearing in a documentary called “War on Whistleblowers,” and being covered in several books on ethics. If you Google me along with terms like IEEE Barus Ethics Award, Deepwater, or even James Comey, you can see the story for yourself. (James Comey was Lockheed’s lead counsel in 2006.)

The final thing I want to mention is that I am very aware that the interventionist approach, as well as my pointing out flaws in competitors’ systems, is not a conventional approach. Unfortunately, I believe both are a necessary reaction to the history of how mankind often handles groupthink or echo chambers where they intersect with safety, ego, arrogance, ignorance, and/or profit. If you look at history, you will see these environments are not broken because someone says please. Usually it takes an increasing, repeatable progression of tragedies, press coverage, public outrage, and laws or regulations to change the paradigm. At least seven people have died needlessly so far as human guinea pigs in the safety driving process. I have no interest in there being any more, especially the first child or family. And let me double down on the word “more.” In order to develop these systems, machine learning must experience scenarios over and over to learn them. This means thousands of accident scenarios must be experienced hundreds if not thousands of times each to learn them, especially those that cannot be avoided. That means there will be thousands of injuries and deaths. This industry believes, and has convinced the public, that this is necessary to save more lives later. That is absolutely not true. In fact, the industry is doing the exact opposite of what it says its mission is. The process will never get close to resulting in a legitimate autonomous vehicle in most scenarios. That means the lives the technology would save will not be saved. And worst of all, we will be taking thousands of lives needlessly in an eternally failed effort, which is why I am so passionate about this subject.

Please find more information on my POV in my articles below, including why the use of public shadow and safety driving is untenable.

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

· https://medium.com/@imispgh/using-the-real-world-is-better-than-proper-simulation-for-autonomous-vehicle-development-nonsense-90cde4ccc0ce

Autonomous Vehicles Need to Have Accidents to Develop this Technology

· https://medium.com/@imispgh/autonomous-vehicles-need-to-have-accidents-to-develop-this-technology-2cc034abac9b

The Hype of Geofencing for Autonomous Vehicles

· https://medium.com/@imispgh/the-hype-of-geofencing-for-autonomous-vehicles-bd964cb14d16

SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving

· https://www.nxtbook.com/nxtbooks/sae/ave_201901/index.php

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving SAE Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

  • Presented the IEEE Barus Ethics Award for Post 9/11 Efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and performance differences between what the Plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
