In Simulations We Trust…

Vadim Pinskiy
3 min read · Feb 10, 2020

By Vadim Pinskiy, VP of Research and Development at Nanotronics

Source: Paul Demery

Over the last decade, Process Control and System Design, once the pinnacles of innovation, have drifted into the category of mature fields with fewer disruptive improvements. This phenomenon is just one of the many factors that make the Boeing 737 Max tragedy so compelling.

I won’t investigate the Boeing 737 Max problems here. My point is to highlight how established digital simulators can miss complex design and development problems. Though most evident in the Boeing crash, parallels exist in many other scenarios. Digital simulators are invaluable to modern process control, but they don’t solve all problems. Simulators can be fooled and, most dangerously, can lead designers to overlook established truths for small gains in efficiency, risking major error. This mindset underlies many technical failures.

With some certainty, it can be said that the engineers’ overall goal was not to do harm; it was to build an efficient airplane in a competitive market. Boeing engineers clearly knew the dangers of miscalculation and had systems in place to verify control and design decisions. But had they lost the bottom-up design intuition that once fueled engineering companies like Boeing?

This intuition was the design balance that minimized operational risk: the designer knew that certain things simply would not work or would be too difficult to try. Digital approaches can completely model a physical system, but intuition remains one of the last barriers to design digitization. As the role of Artificial Intelligence and Deep Learning grows in the design and validation of complex parts, this weakness must be taken into account. Newer AI systems for design need to be integrated with bottom-up sensors and human feedback, and their training must build in hard-coded rules that cannot be violated for any optimization.
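To make the idea concrete, here is a minimal sketch, in Python, of what an inviolable rule looks like inside an optimizer. The design parameters, limits, and objective are all illustrative assumptions, not anything from a real aircraft or assembly line; the point is that the hard rules act as a gate, not as a penalty term the optimizer can trade away.

```python
import random

# Hypothetical hard rules: any candidate design that breaks one of these
# is discarded outright, no matter how good its efficiency score is.
HARD_RULES = [
    lambda d: d["max_load_kg"] <= 1000,          # assumed structural limit
    lambda d: d["operator_clearance_m"] >= 0.5,  # assumed minimum human clearance
]

def efficiency(design):
    # Toy objective: heavier loads and tighter clearances score "better",
    # which is exactly the pressure that tempts an optimizer to cut safety.
    return design["max_load_kg"] - 100 * design["operator_clearance_m"]

def optimize(start, steps=1000, seed=0):
    rng = random.Random(seed)
    best = dict(start)
    for _ in range(steps):
        candidate = dict(best)
        candidate["max_load_kg"] += rng.uniform(-20, 20)
        candidate["operator_clearance_m"] += rng.uniform(-0.05, 0.05)
        # The gate comes first: rule-violating candidates never even reach
        # the objective comparison.
        if not all(rule(candidate) for rule in HARD_RULES):
            continue
        if efficiency(candidate) > efficiency(best):
            best = candidate
    return best

best = optimize({"max_load_kg": 800.0, "operator_clearance_m": 1.0})
```

However many iterations run, the returned design satisfies every hard rule by construction, which is the property a soft penalty term cannot guarantee.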

The construction of ground-truth barriers for AI-based optimization is essential for improving process control at minimal risk. Any designed system is created for human use or operation, so the system must treat human behavior as a driving component. Human behavior, a mystery to engineers, is an even bigger mystery to AI and digital simulators. Models excel at predicting known environmental variables derived from existing classical systems. However, for optimal design, we need to understand human decisions and reactions.

The classic assembly-line example is the emergency stop (E-Stop). These red buttons are required by law on all assembly lines and are governed by OSHA. Their exact placement is often decided by the process and design engineers. Would an AI-optimized system place them in the same locations? The general answer is a firm no.

E-Stop distribution, as a function of average human arm reach, can be programmed and calculated by AI when simulating assembly lines. But AI has no way of predicting the actions humans are likely to take. The central source of industrial accidents is cases where the operator or the process deviated from the norm, which is why human-response data and decision-making are essential for safe product design. There is simply no substitute for process-specific video data of human interactions and assembly procedures.
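The geometric half of that problem, checking that every operator station sits within arm reach of some E-Stop, is exactly what a simulator can do well. A minimal sketch, with illustrative positions and an assumed reach envelope (these are not OSHA figures), shows the check; what no such model captures is where a startled operator will actually be standing.

```python
# Assumed average reach envelope in metres (illustrative, not a standard).
ARM_REACH_M = 0.7

def uncovered_stations(stations, estops, reach=ARM_REACH_M):
    """Return operator stations NOT within reach of any E-Stop."""
    return [s for s in stations
            if all(abs(s - e) > reach for e in estops)]

stations = [0.5, 2.0, 3.5, 5.0, 6.5]  # operator positions along the line (m)
estops = [0.0, 3.0, 6.0]              # proposed E-Stop positions (m)

gaps = uncovered_stations(stations, estops)
```

Here the stations at 2.0 m and 5.0 m fall outside every E-Stop's reach, so the proposed layout fails the hard constraint before any efficiency comparison is made.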

The outcome of the Boeing 737 Max investigation is far from certain. Hopefully, it will reinforce the need for pilot-behavior data to be directly coupled into any future airplane designs and releases.


Vadim Pinskiy is Vice President of R&D at Nanotronics. He is interested in bringing innovative solutions to complex robotic and data problems.