How Edge Computing Increases Manufacturing Flexibility, Agility, and Cost-Effectiveness
By Salvatore Salamone (Sponsored by Intel)
RTInsights and Intel discuss the promise, challenges, and progress of Edge Computing in manufacturing.
Manufacturers are seeking to optimize industrial processes, increase yields, reduce defects, and improve quality using Edge Computing. Specifically, they want to use sophisticated analytics, machine learning, and artificial intelligence at the edge to gain insights from the large volumes of data generated throughout their production lines and facilities.
To get a better understanding of the benefits, challenges, and progress of Edge Computing in manufacturing, we recently sat down with Jonathan Luse, General Manager, Industrial Solutions Management at Intel, and Ricky Watts, Industrial Solutions Senior Director at Intel. Here is a summary of our conversation.
Edge Computing’s Promise in Manufacturing
RTInsights: What is the promise of Edge Computing in manufacturing?
Luse: With the explosion of data being generated in manufacturing facilities, it’s important to make better business, quality, and production decisions based on that data. As a result, you need a location where you can perform analytics, generate insights, and crunch data near or in the facility. We call that the Edge, as opposed to the cloud. The reason we compute at the Edge, on-premises in many cases, versus in a cloud environment, is sometimes security: IP needs to be physically retained in one area, or, to make good manufacturing decisions, computing has to be close to the data source.
And when you have latency dependencies, it’s important to have an Edge Computing capability so you can get all the benefits of data analytics and big data analysis in a way that protects the security of the data itself. Sometimes, the economics of computing in the cloud versus the Edge favor keeping your computing local. So, the promise of Edge Computing at the end of the day is all about the insights, both business and technical, that you can get from analyzing all the manufacturing data that is available. Whether it’s quality, throughput, security, systems safety, or business insights about production economics, that’s the promise of what Edge Computing can bring above and beyond what standard manufacturing is doing.
Edge Computing Deployment Challenges
RTInsights: What are the challenges in deploying and using Edge Computing?
Luse: The technology can sometimes be finicky, and sometimes it can be complex. Some decisions need to be made when deploying a computing platform or a system, including how much distributed computing versus consolidated computing you want to do. And so, the challenge is where do you put your horsepower for your computing? You can put it very, very close to the device. You can centralize it at the Edge. You can push it to the cloud. There’s no wrong answer there, per se. But the challenge is that sometimes you don’t know what the answer needs to be. And so, one of the challenges of deploying and using Edge Computing is developing an architecture that allows you to have the flexibility to move computing workloads up to the cloud, back to the Edge, down to the device, or maybe load balance across all three.
Ultimately, the challenge in deploying Edge Computing is how you build an infrastructure to do that easily. When companies like Intel deploy our architectures, we deploy them with scalability and agility in mind so that computing workloads can move easily across devices. Such an approach helps alleviate some of the challenges in Edge Computing deployment.
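The placement decision Luse describes can be illustrated with a simple heuristic. The sketch below is purely illustrative (the function name, tier names, and thresholds are our own assumptions, not an Intel API): it routes a workload to the device, the Edge, or the cloud based on its latency budget and data volume.

```python
# Illustrative placement heuristic for distributed vs. consolidated compute.
# Thresholds and names are hypothetical, chosen only to make the idea concrete.

def place_workload(latency_budget_ms: float, data_gb_per_hour: float) -> str:
    """Pick a compute tier for a manufacturing workload."""
    if latency_budget_ms < 10:
        # Hard real-time control loops must run next to the machine.
        return "device"
    if latency_budget_ms < 100 or data_gb_per_hour > 50:
        # Latency-sensitive analytics, or data too bulky to backhaul,
        # stay on-premises at the Edge.
        return "edge"
    # Everything else can take advantage of cloud economics.
    return "cloud"

print(place_workload(5, 1))      # motion control
print(place_workload(50, 80))    # vision analytics
print(place_workload(5000, 2))   # periodic yield reports
```

In practice, as Luse notes, the architecture should let a workload move between these tiers over time; a rule like this would only be a starting point, not a fixed assignment.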
Watts: Some of the challenges are environmental, as well. Many of the facilities that exist at the Edge of the network are not like a data center. You don’t have air conditioning, and devices often are working out on a factory floor where it’s dusty, and there are severe environmental conditions. Some of the Edge compute platforms in the industry ignore these issues. Edge Computing solutions must be certified against environmental conditions, whether it’s vibration, heat, or other things.
There also are many physical challenges that we must overcome. We must make sure our platforms at the Edge are fit for the environment that they’ve got to work in. That’s one aspect. The other thing is Edge Compute platforms have a lifecycle that’s quite different from the cloud. They’re often working in these environments where they must be there for years. And therefore, you must look at the environment they work in and the length of time that they’ve got to be in service.
Many Edge Compute platforms are there for more than five, six, seven years. Typical cloud platforms have a much shorter lifespan than that. So, the challenges of deploying Edge Computing are environmental and lifecycle issues. You’ve got to brace yourself for these lifecycle models when you’re putting these things in place. That’s in addition to the things that Jonathan mentioned in terms of the complexity of the Edge.
Enabling Technologies for Success
RTInsights: What are the key enabling technologies needed for success?
Luse: Several enabling technologies are important. When you get into the data itself, the preservation of time determinism and predictable latencies is critical. All data is time-sensitive, but some of it is incredibly time-sensitive. When you’re doing production in a manufacturing plant, you’re using Edge Computing to help augment the manufacturing output or the quality. It also may be tied into the manufacturing processes themselves. Having a strong set of time-sensitive technologies available for data to be brought to the Edge, massaged, computed, outputted, and sent back down in a very predictable time is a critical foundational element of Edge Computing for manufacturing. That’s somewhat unique to the manufacturing industry; not all businesses that have Edge Computing have the quantity of data that’s time-sensitive.
The other thing is related to the economics of setting up and managing workloads. We’ve got hundreds of different computing and control workloads, plus data management and analysis workloads, in manufacturing. And supervisory control and data acquisition activities are going on simultaneously. Historically, many of those have been discrete systems, as in physically discrete from each other. The economics of that are becoming very cumbersome, especially when compared to the enterprise-class capabilities in the cloud and the data center. The traditional OT [operational technology] approach has been to physically separate each workload from the others and then run each as an independent part of the system.
Within the Edge, there’s a phenomenon around workload consolidation, or hyper-convergence of applications. That allows the economics of the enterprise to start to enter OT, so that IT economics come into play. If you want to accelerate that, you need the ability to manage some workloads in that time-sensitive space and some in the traditional computing space. So, the combination of time-sensitivity with virtualization is a critical technology that enables Edge success and the blending of the IT and OT environments at the Edge. And then, of course, the other thing you’ve got to take care of is the manageability and orchestration of security, device onboarding, and applications.
I would say those three key technologies — time-sensitive technologies, virtualization with time-sensitivity, and orchestration and manageability — would be the ones I would highlight.
Watts: Whether it’s a technology enabler is probably a debate, but when you’re moving a lot of things to the Edge of the network, one of the things we’re looking for is how we can do that in a standardized way to allow for interoperability. A key technology is building a uniform language for devices to talk to applications. In the industrial domain, a lot of that effort has been around a technology called OPC Unified Architecture (UA), which is basically a language that allows devices to talk through an Edge compute platform to an application.
On the software side, we must provide a framework for things to talk to each other. If I get six people in a room and we all speak a different language, we can all make a lot of noise, but we don’t understand each other. You need to talk through a universal translator. Here, the uniform language would be OPC UA. It means that we can all talk and communicate with each other. Such enabling technology at the Edge of the network is important. As we move towards Industry 4.0, it is critical that things can talk to each other.
A fundamental thing that we’ve got to do is enable that uniform language, that uniform connectivity, for things to talk to things.
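Watts’s "universal translator" idea can be sketched in a few lines. The example below is a conceptual toy, not OPC UA itself: each vendor-specific tag name is mapped into one shared namespace, so an application can read every device through the same uniform names. All tag names and mappings here are invented for illustration.

```python
# Conceptual sketch of a "universal translator" between device tags and
# a uniform namespace, in the spirit of what OPC UA standardizes.
# All identifiers below are hypothetical.

VENDOR_TAG_MAP = {
    # vendor-specific tag     -> uniform name
    "plcA:TEMP_01":            "Line1/Oven/Temperature",
    "plcB:tmp_sensor_val":     "Line2/Oven/Temperature",
    "plcA:SPD_CONV":           "Line1/Conveyor/Speed",
}

def to_uniform(vendor_tag: str) -> str:
    """Translate a vendor-specific tag into the shared namespace."""
    try:
        return VENDOR_TAG_MAP[vendor_tag]
    except KeyError:
        raise ValueError(f"no uniform mapping for {vendor_tag!r}")

def read_uniform(raw_readings: dict) -> dict:
    """Present readings from mixed devices under uniform names."""
    return {to_uniform(tag): value for tag, value in raw_readings.items()}

readings = read_uniform({"plcA:TEMP_01": 182.5, "plcB:tmp_sensor_val": 179.9})
print(readings)  # both ovens now appear under one consistent schema
```

The point of a standard like OPC UA is that this mapping, plus the data types and security around it, is defined once for the whole industry rather than hand-built per integration as in this toy.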
Edge Computing Adoption and Use Cases
RTInsights: Where does the industry stand on the adoption and use of Edge Computing?
Luse: Let’s start at a high level and work down into some examples of use cases covering both discrete and continuous manufacturing processes. Continuous processes include energy production, energy transmission, or oil and gas production. Discrete processes can be found in automotive, paper and pulp, and other industries.
Edge Computing adoption is happening across the industry in both continuous and discrete processes. We’re seeing it in applications and use cases such as inspection analytics, where a manufacturing line checks whether a label is properly attached to a bottle. We’ve seen situations where visual inspection is replaced by analytics with machine vision. And when you see some of those types of things happening, you see dramatic improvements in detection rate and quality.
We’re seeing Edge Computing used in machine vision applications for defect detection in a production line. It might be used to look for defects in a bottle, if a label is affixed properly, whether there’s a defect in a textile mill, or to analyze the quality of a well.
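The label-inspection use case can be sketched minimally with plain NumPy rather than a production vision stack. The region coordinates, threshold, and function name below are assumptions for illustration only: a camera frame is checked for a dark label patch in the region where the label belongs.

```python
import numpy as np

# Toy label-presence check in the spirit of Edge machine vision.
# Region coordinates and the darkness threshold are illustrative
# assumptions, not values from any real inspection system.
LABEL_REGION = (slice(40, 80), slice(20, 60))   # rows, cols where the label belongs
DARK_THRESHOLD = 100                            # label pixels are darker than this

def label_present(frame: np.ndarray) -> bool:
    """Return True if the expected label region is mostly dark (label attached)."""
    patch = frame[LABEL_REGION]
    dark_fraction = np.mean(patch < DARK_THRESHOLD)
    return bool(dark_fraction > 0.9)

# Synthetic grayscale frames: bright bottle (255) with / without a dark label (20).
good = np.full((120, 100), 255, dtype=np.uint8)
good[LABEL_REGION] = 20
bad = np.full((120, 100), 255, dtype=np.uint8)

print(label_present(good))  # True  -> label attached
print(label_present(bad))   # False -> flag bottle for rework
```

A real deployment would use a trained model and calibrated optics, but even this crude check shows why the computation sits at the Edge: every bottle on the line needs a verdict within the cycle time of the conveyor.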
There are all kinds of different use cases in place where we’ve seen the successful use of Edge Computing and Edge insights of that data for quality improvement, predictive maintenance, and other production benefits.
Watts: The industry has already been using Edge Computing for many, many years. We have a huge business around industrial PCs, which are sitting out there at the network’s Edge. As we move to Industry 4.0, those Edge computers that were designed for doing specific tasks in the past are now being replaced and becoming much more powerful with modern silicon and the modern capabilities of the x86 architecture. Putting more power at the Edge lets manufacturers do what Jonathan talked about, like analytics at the Edge and computer vision.
Those key technologies that we talked about earlier enable these platforms to do much more, while still running the legacy workloads they’ve been running in the past. With intelligence at the Edge, we can bring a new paradigm, a new shift from what they’ve had before, and expand the art of what’s possible on top of these new platforms.
It’s about taking those legacy platforms, updating them, modifying them, and turning them into what I call industrial Edge Compute platforms instead of legacy Edge Compute platforms.
Originally published at https://www.rtinsights.com.