Automation of Essential Services — Part 1: Introduction
No one could have predicted the magnitude of the disruption caused by COVID-19 and the resulting shutdown of the global economy. While we are still scrambling to understand the biological aspects of the pandemic, the more lasting impact may be the untallied economic damage of the Great Pause. Some estimates predict that economic recovery could stretch on for years. This disruption is the stuff of science fiction: the global workforce sent home and production halted in all but a few key industries. Except for tasks that can be done remotely or those deemed essential, most of the world’s labor force cannot work.
Those considered Essential Workers or Essential Businesses have the right to continue working during the lockdown, providing the services necessary to hold the fabric of society together. But while some are permitted to keep working, others, such as medical staff and grocery workers, are in truth being forced to. I have a number of friends working in essential retail, grocery stores, or Amazon warehouses who do not have the choice to stay home, even to protect themselves. They must go in daily to make sure the flow of goods continues.
For this reason, I see a future where we MUST automate more of our essential services, and we MUST bring more critical manufacturing into local supply chains. These two moves will give us the capability to respond more robustly to such emergencies and to keep our economy on its feet even in the face of massive labor shortages and supply disruptions. I’m talking about making our supply chains more robust: not just stronger, but also more resilient to disruptions of this type and to others we have not yet encountered or anticipated. Natural disasters need not be as threatening to our societal wellbeing if we have a web of robust automated manufacturing shoring up the foundations of our most essential services; namely, biological sampling, material analysis, food production, material delivery, sanitation, and production.
In this series of articles I will lay out a few areas that I see as critical to helping us weather societal disruptions, and that are also ripe for increased automation. We will look at the burgeoning field of life-science laboratory automation, covering everything from sample collection, diagnostic testing, and drug/vaccine discovery and screening, all the way to robotic surgery. We will then look at the history, present, and future of automated agriculture, and how our food production networks can be hardened against labor shortages and biological contamination.
And then we will explore the ways that manufacturing has always been moving toward greater automation, and why it is still not far enough along to truly secure our critical supply chains. Today, dozens of essential functions in almost every area of manufacturing and production still require local, in-person human interaction, and I think we will see a large push to change this in the coming decade. Decentralizing manufacturing is likely an important first step: allowing single-source (or single-country) supply of essential goods is a major vulnerability in any company’s supply chain. And if labor in your country is too expensive to compete with offshore labor, then one solution is to build more machines and robots that can work cost-competitively with humans, often with higher-quality output (even if some tradeoffs must be made in the process). We can also restore the capability of makerspaces and small shops to work autonomously or to rapidly change over to making products of importance, because automation does not have to mean robots; it can also mean autonomy of human action. At its heart, this series is about giving people the tools, resources, and space they need to get important things done even in the face of global disruption.
So let’s talk about some of the crucial factors that must be considered when automating a system, or when investing in the companies that will bring this automation to market.
Considerations in any automated system
Reliability
Reliability is the proportion of an instrument’s life during which it can be expected to function as designed, when called upon. Essentially, does it work the way you want, when you want it to? A closely related concept is availability, which measures the fraction of time a system is actually able to operate when it is supposed to be.
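One common way to quantify availability is the ratio of mean time between failures (MTBF) to total time, including mean time to repair (MTTR). Here is a rough sketch; the figures are hypothetical:

```python
# A rough sketch of steady-state availability from mean time between
# failures (MTBF) and mean time to repair (MTTR). Figures are hypothetical.
mtbf_hours = 500.0   # machine runs ~500 hours between failures
mttr_hours = 8.0     # each failure takes ~8 hours to diagnose and repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"Availability: {availability:.1%}")   # ~98.4%
```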
Everyone wants a reliable system that never breaks and never fails to do the task it was designed to do. But reliability comes at a steep cost in hardware design, and is one of many factors that must be balanced when designing a system or production line.
Let’s consider an example of a robotic inspection camera on a conveyor belt, tasked with identifying correct hole spacing on some molded plastic shovels passing by on the belt below. This robot might only need to be 95% accurate at identifying a faulty part as it goes by, which might match a human worker performing the same task. So out of 20 failed parts, 19 get correctly identified, and 1 bad part slips through. But that might be okay, since downstream there is a process step where the handle is aligned and attached to those holes. That step is also likely to catch the erroneous part. And even if it didn’t, the consequences for a failed shovel coming off the line are fairly low. As such, this camera system can probably be quite cheap and relatively simple.
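As a quick back-of-the-envelope sketch of why two mediocre inspection steps can be good enough: with two independent 95% catch rates, very few defects escape. The incoming defect rate and the downstream catch rate here are assumptions for illustration:

```python
# Back-of-the-envelope: how often a defective shovel escapes two
# independent inspection steps that each catch 95% of defects.
camera_catch_rate = 0.95     # inline camera inspection
assembly_catch_rate = 0.95   # assumed catch rate at the handle-attachment step

escape_prob = (1 - camera_catch_rate) * (1 - assembly_catch_rate)
print(f"Defect escape probability: {escape_prob:.4f}")   # 0.0025, i.e. 1 in 400

# With a hypothetical 2% incoming defect rate, shipped defects are rare:
incoming_defect_rate = 0.02
shipped_defect_rate = incoming_defect_rate * escape_prob
print(f"Defective shovels shipped: {shipped_defect_rate:.6f}")  # ~1 in 20,000
```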
Conversely, consider what might be needed for a component going into a life-critical medical device such as a respirator, which may need to correctly identify a failed component with 99.9999% reliability. In fact, the widely invoked Six Sigma methodology has its roots in reliability testing. Six Sigma performance corresponds to roughly 3.4 defects per million opportunities (using the conventional 1.5-sigma long-term shift), though the methodology extends well beyond reliability testing.
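To see where that 3.4 figure comes from, here is a small sketch of the standard sigma-to-DPMO conversion; the 1.5-sigma shift is the usual Six Sigma convention:

```python
import math

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level,
    using the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    upper_tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z), standard normal
    return upper_tail * 1_000_000

for sigma in (3, 4, 5, 6):
    print(f"{sigma}-sigma: {dpmo(sigma):,.1f} DPMO")
# 3-sigma: 66,807.2   4-sigma: 6,209.7   5-sigma: 232.6   6-sigma: 3.4
```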
Another closely related measure is utilization (or usage factor), which accounts for how often a machine is actually in service over a given period. This takes into account human operators, work shifts (nights and weekends), and material supply shortages, in addition to the downtime needed for maintenance and repair.
Right now, in COVID-19 lockdown, we are experiencing a very low utilization rate across our workforce, with most of our machines sitting idle while their human operators are stuck at home. Meanwhile, the reliability of our supply network is being put directly to the test, especially in the early days of the pandemic, with stretched supply lines for products such as N95 face masks, hand sanitizer, ventilators, and test kits. In truth, our supply chains for many items have already broken, even though we have not yet begun to feel the impacts. A vast array of other products is dwindling in supply as we draw down stockpiles that most of us have no visibility into. I predict shortages of many unexpected products in the coming months; products that are not being made today without people. Meanwhile, many other products with mature automated production lines will see no long-term shortages no matter how long the lockdown remains in effect.
Throughput and Yield
Throughput is how much output can be expected per unit of time. Kilograms per day, or units per month are common throughput measures for material production and finished product assembly. Lab throughput could be described as how many tests can be run or analyzed in a day, or a week.
Yield typically describes the proportion of output product that meets functional specifications. For a process in steady state with a 95% yield, 95 out of every 100 units that roll off the end of the line can be expected to be acceptable. It’s often insightful to ask what happens to the failed quantities. Can they be recycled, or do they go on the books as a total loss, amortized into the price of the other 95 salable units?
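Here is a minimal sketch of that amortization, assuming the scrapped units are a total loss and a hypothetical $10 all-in build cost per unit:

```python
# A minimal sketch of scrap amortization, assuming failed units are a
# total loss and a hypothetical all-in build cost of $10 per unit.
batch_size = 100
yield_rate = 0.95
cost_per_unit_built = 10.00

salable_units = batch_size * yield_rate          # 95 good units
total_cost = batch_size * cost_per_unit_built    # we paid to build all 100
effective_unit_cost = total_cost / salable_units

print(f"Effective cost per salable unit: ${effective_unit_cost:.2f}")  # $10.53
```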
For production instruments and automated equipment, troubleshooting a failing device can be very costly. So it can be tempting for shops to keep equipment in operation even as yield drops, expecting the downstream quality-control step (human or machine inspection) to catch the failing componentry and keep it out of the finished product. Of course, this introduces another failure mode into the complex system that is the production chain. The yield of the finished product compounds the yields of all the subprocesses and subproducts that feed into it, as sketched below. So if QC misses some faulty componentry, then in the next step a good component is attached to the bad one, and the whole assembly is potentially lost.
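A quick sketch of this rolled-throughput-yield effect, with hypothetical per-step yields and assuming the steps fail independently:

```python
# Rolled throughput yield: the finished-product yield is the product of
# the per-step yields, assuming the steps fail independently.
step_yields = [0.99, 0.97, 0.95, 0.98]   # hypothetical per-step yields

rolled_yield = 1.0
for step_yield in step_yields:
    rolled_yield *= step_yield

print(f"Rolled throughput yield: {rolled_yield:.3f}")   # ~0.894
# Four steps that each look fine on their own still scrap ~10% overall.
```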
Inspection, Quality Control
Data-gathering is a major part of almost any processing line. Information about the health of a production line is gathered through measurement and inspection of intermediate and final parts, and that information is then used to tweak system parameters to improve yield, quality, or any number of other metrics. This part of the process is a great fit for automation, with camera hardware and machine-vision algorithms becoming ubiquitous, though careful customization and integration is usually required during setup. Even something as simple as the lighting around the camera can make or break one of these setups. Will sunlight stream through a window at a certain time of day and illuminate the part in a way that throws off the camera? What if that only happens during a certain time of year? Will a human passing by the machine cast a shadow on the workpiece from a particular overhead bulb? Careful consideration must be given to how consistently a part can be illuminated, so that the camera records a meaningful image and can deliver accurate feedback every time. But once the lighting is worked out and the algorithms fine-tuned, that camera system can function 24 hours a day, for years on end, catching details in milliseconds that the human eye could not detect at all.
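To make this concrete, here is a minimal sketch of the kind of check the shovel-inspection camera from earlier might run, using OpenCV’s Hough circle transform. The nominal spacing, tolerance, and detector parameters are all hypothetical and would need tuning against real parts and lighting:

```python
import cv2
import numpy as np

NOMINAL_SPACING_PX = 120   # hypothetical expected hole spacing, in pixels
TOLERANCE_PX = 5           # hypothetical pass/fail tolerance

def holes_within_spec(gray_frame: np.ndarray) -> bool:
    """Detect two handle holes in a grayscale frame and check their spacing."""
    blurred = cv2.GaussianBlur(gray_frame, (9, 9), 2)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
        param1=100, param2=30, minRadius=10, maxRadius=30,
    )
    if circles is None or circles.shape[1] != 2:
        return False   # wrong number of holes found: reject for manual review
    (x1, y1, _), (x2, y2, _) = circles[0]
    spacing = float(np.hypot(x2 - x1, y2 - y1))
    return abs(spacing - NOMINAL_SPACING_PX) <= TOLERANCE_PX
```

In practice a production system would also calibrate pixels to millimeters and log every rejected frame, precisely so that the lighting problems described above can be diagnosed after the fact.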
Material Uniformity
When designing and running automation equipment, uniformity is king. If the material being handled is all the same size, shape, weight, texture, firmness, you name it, then it’s much easier to design a machine that can pick it up, convey, separate, assemble, inspect, or perform any number of other repetitive operations on it. If a system is easier to design, it’s probably easier to build and maintain as well, and this forever impacts the cost of goods sold (COGS), and thus the price of the output product.
So in this modern age, many more parameters need to be optimized than just speed, cost, yield, or throughput. Of course, the holy grail is optimizing for all of these at once, but the truth is that engineering is essentially the art of managing tradeoffs. For this reason, handling irregular materials or custom shapes will always be more expensive, with machine complexity scaling up dramatically to handle the irregularities of the material.
But machine design is not the only way to improve an automated process. Much can be gained from a more controlled, repeatable, predictable input material. This is true in every industry, from textile threads, to lab reagents, to ears of corn. Great amounts of engineering effort and investment go into improving intermediate materials; perhaps as much effort goes into the materials as into the equipment designed to handle and process them. This doesn’t necessarily mean the materials themselves need to be of higher quality; as long as an operator or machine can predict and calibrate for the actual properties of the material, the automation equipment can be configured to run with it. And the easier it is to configure a machine to handle a certain material, the simpler and more robust the machine and the process line can be.
It’s also worth pointing out that the end product of just about any process quickly becomes the input material to someone else’s process. The more uniform the widget that comes off of Company A’s production line, the easier a time its customers will have incorporating that widget into their own automated processes.
Dealing with Non-Homogeneity
Of course, machines can be built to handle even the most irregular shapes and materials, but the complexity of such a machine often doubles (or worse) with every extra degree of variability it must handle. Consider that for every sensor that exists in the world, there is a material that was meant to be sensed. Moisture, temperature, thickness, reflectivity, weight, porosity, and any number of other physical properties all have sensors available (for a price) to measure them. And instrumentation designers will put as many sensors into a system as are needed to give it the intelligence to perform its function.
Imagine the metal foil that arrives at a battery plant in giant rolls, ready to be cut and folded into batteries. If that foil varies in thickness across the roll, then a sensor must be placed in the system to measure the thickness of the foil as it is pulled into the machinery and to trigger some compensating action. That sensor adds cost, and it now exists as a point of failure on the production chain; if it fails, the whole system goes down until it is fixed. The way around this is to guarantee that the metal foil is of very uniform thickness before it ever leaves its supplier. Then the thickness sensor can be skipped altogether, saving cost and increasing the reliability of the battery production line by removing one failure mode; one less chain link to break.
In commercial-scale production, where volume is high and prices are low, it’s very important to keep equipment costs down and robustness/reliability up. The equipment has to work as expected or it won’t get used. For this reason there will always be a strong push to reduce the number of sensors or actuators to the smallest number possible. This reduction in complexity over time is one of the prime indicators of the maturity of a technology.
Operation Mode
To set us up for discussing automation across industries in later articles, I also want to outline the different operation modes of an autonomous system, corresponding to how much feedback data a system receives from different points in its process, and how much of a role humans directly play in carrying out the process.
Open-Loop
An open loop is not a loop at all. Put simply, commands and materials go in, but the machine has no awareness of what comes out. There is no feedback mechanism, no way for inspection data to loop back into the process to improve any of the steps. This is sometimes called dead reckoning: the machine does X for Y seconds and then moves on to the next step regardless of the outcome of the previous step, or really without even knowing whether the previous step happened or finished yet. Note that no system being monitored by a human is ever truly open-loop, since we are always using our onboard senses of sight, hearing, and touch to take in feedback.
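A minimal sketch of such a dead-reckoned sequence; the machine commands here are hypothetical stand-ins for real I/O calls:

```python
import time

# A dead-reckoned, open-loop sequence: each step runs for a fixed time
# with no sensing of whether the previous step actually succeeded.
# The machine commands are hypothetical stand-ins for real I/O calls.

def dispense(): print("dispensing material...")
def mix():      print("mixing...")
def eject():    print("ejecting part")

def run_cycle():
    dispense()
    time.sleep(4.0)    # assume dispensing takes 4 seconds; nothing verifies it
    mix()
    time.sleep(10.0)   # assume mixing takes 10 seconds; ditto
    eject()            # push the part out, ready or not

run_cycle()
```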
Closed-Loop
Closed-loop operation means there is a feedback mechanism, a data pathway for information about the output to loop back into the process so it can be iteratively improved or self-corrected. Most automated processes are run this way, with the operator or designer tuning for the final product rather than for the instrument itself. Well-designed machines can run fully autonomously in this mode, sometimes using machine-learning or AI techniques to adjust themselves toward a desired outcome.
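Here is a toy closed-loop sketch, reusing the battery-foil idea from earlier: measure the outgoing thickness, compare it to the target, and nudge an actuator to correct the error. The process model and every number here are invented for illustration:

```python
# A toy closed-loop controller: measure the outgoing foil thickness and
# nudge the roller gap toward the target. The process model and all
# numbers are invented for illustration.

target_um = 15.0        # desired foil thickness, microns
roller_gap_um = 18.0    # starting actuator setting
gain = 0.5              # proportional gain, tuned by the operator

def measured_thickness(gap_um: float) -> float:
    """Stand-in for the inline thickness sensor plus the rolling process."""
    return 0.9 * gap_um + 1.0

for cycle in range(10):
    thickness = measured_thickness(roller_gap_um)
    error = target_um - thickness
    roller_gap_um += gain * error   # feedback: correct toward the target
    print(f"cycle {cycle}: thickness = {thickness:.2f} um")
```

The same pattern scales up from this single proportional term to full PID control or model-based tuning, but the principle is identical: the output measurement drives the next adjustment.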
Supervisory Control
Also known as Human-in-the-Loop, this describes a system in which automated action controls most of the details of a process, with a human taking over at certain critical steps or making decisions at critical junctures. The human operator controls the task rather than the specific individual actuators of the system. It is often characterized by language-like or menu-like interfaces, with a computer handling the mechanical actions of the machine. This can be done locally, with the operator standing at a terminal on the machine, or just as easily by a remote operator taking in sensor data and perhaps a camera feed.
A useful variation on this is telepresence: the ability for a remote operator to call in, monitor, and adjust the functions of an automated system. The operator might tweak some system parameters to keep the machine humming along, or even take over direct control to correct an issue like a stuck part, or to bypass a problematic sensor until it can be checked in person. This is perhaps the most valuable form of human-machine collaboration for periods such as this Great Pause: it enables machinery to run day and night, calling out for human assistance only by alarm, warning light, or even an SMS text to the operator or technician when one of its many sensors detects a problem.
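A minimal sketch of that call-out pattern; the sensor, threshold, and notification hook are all hypothetical:

```python
import time

# A sketch of the supervisory call-out pattern: the machine runs on its
# own and only pages a human when a reading drifts out of bounds.
# The sensor, threshold, and notification hook are all hypothetical.

VACUUM_LIMIT_KPA = 80.0

def read_vacuum_kpa() -> float:
    return 75.0   # stand-in for a real sensor read

def notify_operator(message: str) -> None:
    print(f"ALERT -> on-call operator: {message}")   # could be SMS, light, email

while True:
    reading = read_vacuum_kpa()
    if reading > VACUUM_LIMIT_KPA:
        notify_operator(f"vacuum at {reading:.1f} kPa, limit {VACUUM_LIMIT_KPA}")
    time.sleep(60)   # check once a minute; no human attention needed otherwise
```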
Direct Control
An extreme form of Human-in-the-Loop is when a human controls every motion that a machine undertakes; the machine does nothing without a direct command from the human operator. An example is a surgical robot, with cameras and sensors feeding information to a remote human doctor, who commands each motion of the robotic arm by manipulating the buttons and interfaces on the surgeon-side terminal. Maybe the doctor has a joystick of sorts, or even a glove that converts the motions of the doctor’s hands into movement of the motors on the robotic arm. This type of operation could also be driven by language-like commands, but that would be impractical in most applications requiring real-time control.
Conclusion
In the next articles in this series, we will look at lab automation in the life sciences (with a focus on testing and COVID-related activities), agriculture and food production, transport and delivery of goods, and finally take an in-depth look at the future of automated manufacturing. We now have the basic concepts needed to begin exploring automated production across many industries. We want to maintain high throughput by increasing yield and focusing on reliability to maximize uptime. Keeping system complexity low is often the best way to maintain high reliability, and one way to reduce system complexity is to use more consistent, predictable, uniform input materials. When materials can’t be well controlled, we add sensors to compensate; but sensors fail all the time, and any one of them can bring down the whole line. A focus on robustness and resiliency is key in any value chain, from production lines to supply chains.
Machines running scripted programs give us the ability to be productive even while we sleep. But I can’t stress this enough: robots are not going to take all of our jobs. We have been automating processes in agriculture and manufacturing for literally hundreds of years, and yet those sectors remain two of the largest employers on the planet. Robots and machines are built to perform the jobs that humans don’t want, or that we aren’t well suited for. By wielding machinery we can greatly enhance our own abilities and improve our outcomes. And as a systems engineer who has built dozens of automated systems across industries from mining to lab automation, I can say this: it is not easy to keep a production robot running. It takes a great deal of human expertise to build, program, install, customize, calibrate, modify, retrofit, and repair these instruments over time. There will be plenty of work to go around, though like the rest of the world, the nature of that work might look a bit different from what it looks like today. Stay tuned for the rest of this series and I’ll show you what I mean.
Prime Movers Lab invests in breakthrough scientific startups founded by Prime Movers, the inventors who transform billions of lives. We invest in seed-stage companies reinventing energy, transportation, infrastructure, manufacturing, human augmentation and computing.
Sign up here if you are not already subscribed to our blog.