Jake Brodsky
Nov 6 · 3 min read

tl;dr: This is why automation is not going to steal your job.

I am a registered professional engineer of control systems. I design the automation that you speak of. At the water utility where I worked for 30 years, I designed systems that made it possible to reduce the number of operators from roughly 300 to roughly 100 over those thirty years. However, the technical staff backing up those operators actually grew slightly.

Nevertheless, there is a limit to how few operators we can tolerate. The problem is that while automation is better than humans in many ways, it is far from perfect. An instrument can fail for a variety of reasons: foam on the water can cause an ultrasonic level meter to lose its echo; equipment can be struck by lightning; the software can encounter conditions it was never designed to handle. Many things can go wrong. We need those operators to keep things running when the automation fails.
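The failure modes above can be sketched in code. This is a hypothetical illustration of instrument validation, not from any real SCADA product; the tank limits, rate threshold, and function names are all invented for the example. The idea is that automation should recognize when it cannot trust a sensor and hand the problem to an operator rather than act on garbage.

```python
# Hypothetical sketch of sensor validation for an ultrasonic level meter.
# All thresholds and names are illustrative assumptions.

TANK_MIN_FT = 0.0         # physical bottom of the tank
TANK_MAX_FT = 30.0        # physical overflow level
MAX_RATE_FT_PER_S = 0.05  # fastest plausible real level change

def validate_level(reading_ft, last_good_ft, dt_s):
    """Return (is_valid, reason). A lost echo often shows up as a
    missing, frozen, or out-of-range value rather than a clean error."""
    if reading_ft is None:
        return False, "no echo / no signal"
    if not (TANK_MIN_FT <= reading_ft <= TANK_MAX_FT):
        return False, "reading outside physical tank limits"
    if abs(reading_ft - last_good_ft) / dt_s > MAX_RATE_FT_PER_S:
        return False, "rate of change implausible"
    return True, "ok"

ok, reason = validate_level(reading_ft=None, last_good_ft=12.4, dt_s=1.0)
if not ok:
    # The automation can't trust the sensor: alarm the operator and
    # hold the last good value instead of acting on a bad reading.
    print(f"OPERATOR ALARM: level sensor invalid ({reason})")
```

Each check is deliberately simple; the point is not sophistication but that a human gets the alarm the moment the automation is out of its depth.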

THAT is why we still have human beings piloting aircraft. Autopilots don’t understand birds flying into the engines. They don’t understand what happens when the air speed sensor, a pitot tube, gets iced over. It isn’t worth trying to automate unusual conditions like that because, well, they’re so unusual that it takes a sentient being to figure out what’s going on and react properly.

Good automation should augment what human beings perceive and understand. Good automation should have prioritized alerts that do not confuse people. Good automation should have clearly understood subsystems that can be disabled in case of malfunction. (Yes, I’m thinking of Boeing.)
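Two of those principles can be sketched concretely. The following is a minimal illustration, with invented names and priority levels, of prioritized alerts (a flood of minor alarms must not bury a critical one) and of subsystems an operator can simply switch off when they misbehave.

```python
# Illustrative sketch only; names and priorities are assumptions.
import heapq

class AlarmQueue:
    """Present the highest-priority alarm first (0 = most urgent)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves arrival order

    def raise_alarm(self, priority, message):
        heapq.heappush(self._heap, (priority, self._seq, message))
        self._seq += 1

    def next_alarm(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

class Subsystem:
    """A control function the operator can take offline if it malfunctions."""
    def __init__(self, name):
        self.name = name
        self.enabled = True

    def command(self, output):
        if not self.enabled:
            return None  # operator has disabled this piece of automation
        return output

alarms = AlarmQueue()
alarms.raise_alarm(3, "filter backwash overdue")
alarms.raise_alarm(0, "chlorine residual low")
print(alarms.next_alarm())  # the critical alarm surfaces first
```

The `enabled` flag is the whole point of the `Subsystem` class: a clearly bounded piece of automation with an off switch the humans control.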

Artificial intelligence can provide some guidance as to what the machine understands, but it cannot be a substitute for human understanding of the systems. And if you don’t understand the things you’re manipulating to at least some basic level, you should not be at the controls.

As an aside, back in the early 1990s, there were devices known as fuzzy logic controllers. They were self-tuning controllers for things such as a valve actuator. These controllers could take a non-linear response and adapt to it well enough to run things quite efficiently.

The problem was that the people using them couldn’t tell whether the controller or the actuators were working properly. A commonplace Proportional-Integral-Derivative (PID) controller needs to be tuned, and process problems are often revealed when the behavior observed during tuning doesn’t match what the controls engineers designed for and expected. Sure, fuzzy logic controllers can reduce the work of an engineer during startup. But then NOBODY is sure whether the process is behaving the way it was designed to, which is why fuzzy logic controllers aren’t commonly seen on the market any more. Good control systems should be predictable and do exactly what is expected of them.

The bottom line is that automation isn’t likely to replace humanity any time soon. The automation market is growing fast. We’re producing more with fewer people, but that frees those people to do other things: to create better, newer, more optimized products, to market what they have created, and to consider the ethics and efficacy of what they have made.

Automation is freedom to create new things better than ever before. Why is that a bad thing?
