Trading places: Will robots take over the human workforce?

How AI is helping manufacturing companies find their optimal amounts of automation

Charles Costa
Nov 13 · 6 min read

Automation, robotics and artificial intelligence (AI). It’s a trio of fields that, when combined, poses the primary threat to the livelihood of manufacturing professionals. At least, that’s what many believe. It’s an understandable view, since robotics has come a long way since the introduction of Shakey, the world’s first robot that could reason about its own actions.

Today, robots are used extensively to augment human efforts on complex tasks, including surgical operations, defusing explosive devices and conducting traffic stops.

Despite the numerous benefits of robotics technologies, there’s still significant concern over their impact on labor markets. In 2019, outlets such as Bloomberg reported that millions of jobs will be lost to automation, while others such as Fortune published articles claiming that widespread robotics adoption will drive wages to zero.

BCG found that in 2015, a human welder cost about $25/hour, whereas a robot cost only $8/hour, even after accounting for installation, maintenance and operating expenses. The firm estimates that robots will cost just $2/hour to operate by 2030. Although robots appear to be a cost-effective alternative to hiring large numbers of humans, a closer look at the manufacturing industry shows that automation isn’t destroying the need for humans. Rather, it’s providing an effective way to augment manual labor.
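To put those hourly figures in perspective, here’s a quick back-of-the-envelope comparison in Python. The 2,000-hour production year is an illustrative assumption, not part of the BCG estimate.

```python
# Rough annual cost comparison using the hourly figures cited above.
# HOURS_PER_YEAR is an assumed figure for illustration only.
HOURS_PER_YEAR = 2_000

rates = {
    "human welder (2015)": 25,   # $/hour, per BCG
    "robot (2015)": 8,
    "robot (2030, estimated)": 2,
}

for label, hourly_cost in rates.items():
    print(f"{label}: ${hourly_cost * HOURS_PER_YEAR:,} per year")
```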

Even with automation, humans are still relevant

A.T. Kearney and Drishti (an SRI Ventures portfolio company) commissioned a report on automation in the manufacturing industry. The study found that humans still perform 72 percent of tasks within factories, and that 71 percent of generated value comes from those efforts. Looking at the labor market as a whole, McKinsey reports that fewer than 5 percent of jobs can be fully automated, and that manufacturing labor shortages will only get worse as current workers reach retirement age.

At the same time, however, humans are responsible for 68 percent of defects introduced during the manufacturing process, according to the A.T. Kearney and Drishti report. Many of those errors are attributable to inconsistent human performance. The saying that “nobody is perfect” rings especially true in the manufacturing sector: human workers make mistakes, get fatigued and need to take extensive precautions to stay safe in hazardous settings such as manufacturing plants.

Given the power of robots and their significantly lower operational costs, one is left to wonder why people still perform nearly three-quarters of all manufacturing tasks. There are a few reasons for this, one being that, unlike machines, humans can cope with the unexpected.

For example, factory workers notice broad inconsistencies and anomalies: they can spot out-of-position parts that interfere with an assembly and reposition them so the plant keeps operating smoothly. Robots, on the other hand, can only overcome problems they were programmed to address.

Another pitfall of manufacturing robots is that they require a variety of professionals (programmers, process engineers and skilled technicians) to keep the machines functioning properly. Contrary to popular belief, plant operators can’t take a set-it-and-forget-it approach to managing machinery; they still routinely need to consult with operational experts to recalibrate equipment as production cycles shift.

A middle ground between manual labor and complete automation

Although industry experts used to view business process automation as a zero-sum game, the current state of the manufacturing industry indicates otherwise. Most companies fall somewhere between relying entirely on manual labor and running fully lights-out manufacturing. They are finding a middle ground by leveraging innovations in human-robot collaboration (HRC), a discipline centered on enabling robots and humans to work jointly to complete tasks.

A core element of effective HRC systems is the use of cobots to augment human workers. Cobots fall under the umbrella of “intelligent assist devices” and were first introduced in 1994 by a team led by Prasad Akella, then a staff engineer at General Motors and now the founder of Drishti. In the paper “Cobots for the automobile assembly line,” Akella and his co-authors discuss the design principles of human-machine interaction in industrial settings and lay the groundwork for the devices used across industries today.

For more than two decades, cobots have been used successfully in a variety of settings to augment human labor. However, in the absence of holistic and reliable operational data on humans, it has historically been impossible to optimize overall assembly line performance across both robotic and human workers.

Many manufacturers still rely on manual time and motion studies, driven by stopwatches and humans observing other humans, which introduces observation bias. This task alone consumes, on average, more than one-third of an engineer’s time.

Returning to the A.T. Kearney and Drishti report, a key finding is that 71 percent of manufacturing professionals feel time and motion studies are important, yet 43 percent are not confident in the data the process provides.

Using AI and computer vision to augment human workers

One of the core differences between cobots and traditional manufacturing robots is that cobots must account for variations in the ways humans operate in a manufacturing setting. This is done through computer vision technology and artificial intelligence. These analysis systems are built from several components, including:

  • Sensor data collection: sensors collect raw data of a gesture.

While a variety of computational models are used by cobots for gesture identification, two of the most popular are the Kalman filter (KF) and the particle filter (PF), also known as Monte Carlo localization (MCL). KF is a real-time recursive algorithm that works by estimating measurement inaccuracies over time. By estimating the joint probability distribution of the variables at each time frame, the model generates more accurate results than systems that rely on a single measurement alone.
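As a rough illustration of that recursive estimate-and-correct loop, here’s a minimal one-dimensional Kalman filter sketch in Python. The noise values and readings are illustrative assumptions, not parameters from any particular cobot system.

```python
# A minimal 1D Kalman filter: the estimate is refined recursively as each
# new, noisy sensor reading arrives, rather than trusting any single reading.
def kalman_1d(measurements, process_var=1e-3, sensor_var=0.25):
    """Track a slowly varying quantity (say, a wrist position along one
    axis) from noisy readings. Returns the filtered estimates."""
    estimate, estimate_var = measurements[0], 1.0   # initial guess
    filtered = []
    for z in measurements:
        # Predict: the state is assumed roughly constant, so only the
        # uncertainty grows, by the process noise.
        estimate_var += process_var
        # Update: blend prediction and measurement, weighted by the Kalman
        # gain (how much the sensor is trusted relative to the prediction).
        gain = estimate_var / (estimate_var + sensor_var)
        estimate += gain * (z - estimate)
        estimate_var *= (1 - gain)
        filtered.append(estimate)
    return filtered

# Example: noisy readings of a part held near 0.5 m
print(kalman_1d([0.48, 0.55, 0.47, 0.52, 0.50, 0.49, 0.53]))
```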

MCL, on the other hand, doesn’t make assumptions based on data gathered after the fact. Instead, the system is given a map of the environment, and the algorithm estimates the position and orientation of the robot as it moves through and senses the area, using particle filtering.
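The sketch below shows the predict, weight and resample loop behind a particle filter used for localization, reduced to a one-dimensional track with assumed landmark positions and Gaussian noise. Real systems work in far higher dimensions, but the structure of the algorithm is the same.

```python
# Minimal Monte Carlo localization on a 1D track with a known landmark map.
# All map positions and noise values are assumptions for illustration.
import math
import random

LANDMARKS = [2.0, 5.0, 8.0]          # known map, in meters
MOTION_NOISE, SENSOR_NOISE = 0.1, 0.3

def sense(x):
    """Distances from position x to each landmark (the robot's 'sensor')."""
    return [abs(x - lm) for lm in LANDMARKS]

def gaussian(mu, sigma, x):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(particle, measurement):
    """How probable the measurement is if the robot were at `particle`."""
    w = 1.0
    for predicted, observed in zip(sense(particle), measurement):
        w *= gaussian(predicted, SENSOR_NOISE, observed)
    return w

def mcl_step(particles, control, measurement):
    # Predict: move every particle by the commanded motion plus noise.
    moved = [p + control + random.gauss(0, MOTION_NOISE) for p in particles]
    # Weight: particles whose predicted readings best match the measurement
    # get the highest weights.
    weights = [likelihood(p, measurement) for p in moved]
    # Resample: draw the next particle set in proportion to those weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Example: start uncertain anywhere on a 10 m track, then move 1 m per step.
particles = [random.uniform(0, 10) for _ in range(500)]
true_position = 1.0
for _ in range(5):
    true_position += 1.0
    measurement = [d + random.gauss(0, SENSOR_NOISE) for d in sense(true_position)]
    particles = mcl_step(particles, 1.0, measurement)
print(f"estimated position: {sum(particles) / len(particles):.2f} (true: {true_position:.1f})")
```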

The paper “Gesture recognition for human-robot collaboration: A review” covers these models, along with their strengths and weaknesses, in more detail. Cobot programming and development processes are discussed further in the paper “Cobot programming for collaborative industrial tasks: An overview.”

In addition to assisting workers on assembly lines, computer vision technology integrated into new form factors, such as welding helmets, is helping to improve worker safety, streamline job training and improve overall efficiency.

Making sense of it all

Given the rapid pace of technological evolution, it’s understandable that many feel robotics poses a significant threat to the livelihood of manufacturing professionals across the globe. Although some jobs are being eliminated as the industry shifts, humans still have significant advantages, such as the ability to process information beyond a specific immediate task.

With that in mind, to ensure plants operate at peak efficiency, manufacturers are now leveraging computer vision and artificial intelligence to digitize the actions humans perform on the assembly line for further analysis.

With data-backed, AI-driven computer vision solutions, supervisors can quickly locate the source of a bottleneck, point to tasks that are being performed suboptimally and use instant-replay video footage to train operators or surface process optimization opportunities for engineers.
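As a simple illustration of what that analysis might look like once operator actions are digitized, the sketch below flags the slowest and least consistent stations from per-station cycle times. The station names and timings are made up for the example; a production system would derive them from the video stream itself.

```python
# Hypothetical per-station cycle times (seconds per unit), as might be
# extracted from AI-annotated video of an assembly line.
from statistics import mean, pstdev

cycle_times = {
    "load fixture":  [31, 30, 32, 31, 30],
    "weld bracket":  [45, 62, 44, 70, 46],   # erratic: candidate for replay review
    "torque bolts":  [38, 39, 37, 38, 40],
    "final inspect": [33, 34, 33, 35, 34],
}

# The slowest average station paces the whole line (the bottleneck);
# a high standard deviation flags inconsistent execution worth reviewing.
bottleneck = max(cycle_times, key=lambda station: mean(cycle_times[station]))
print("bottleneck station:", bottleneck)
for station, times in cycle_times.items():
    print(f"{station}: mean {mean(times):.1f}s, stdev {pstdev(times):.1f}s")
```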

Although robots augmenting humans does make some positions obsolete, automation still isn’t a zero-sum game. When robots handle hazardous and tedious tasks, humans are free to focus on the areas of manufacturing that require more skill.

The Dish

For people who want to make the world a safer, healthier and more productive place through innovation. Created and curated by the team at SRI International.

Written by Charles Costa

Charles is the Content and Communications Manager for SRI International. You can learn more about him at CharlesCosta.net
