Industry of Things in Datapao

Cagdas Yetkin
Published in DATAPAO
10 min read · Jun 5, 2018

I spent the past 5 years working on Lean and Six Sigma improvement projects. I was lucky enough to work on both manufacturing and service-delivery problems at global scale. My focus was always on reducing average handling time, improving accuracy, and delivering year-over-year value to our clients. Considering the digital landscape and our capabilities, I thought I was on the right track. Well, surprisingly, that was not enough.

Today the conditions are forcing us to explore new horizons, and Industry 4.0 is one of them. When I ask what this is all about, Zoltan Toth simply replies: “Well, the times of sitting in front of a SCADA screen and checking some color indicators are over. You can’t stay competitive if you are just reactive.”

A SCADA screen from Lost, a popular TV series from the last decade

If you want to see a list of where Industry 4.0 is taking us, you can visit our previous post.

The solutions we engineer and implement at Datapao are becoming the Internet of Really Important Things, day by day. I am talking about smart systems which can affect energy consumption at a macroeconomic scale.

Historically we have come a long way:

These days the goal is optimized production capability and proactive data analysis, with richer insight for decision-making, delivered in a more automated fashion than is currently possible.

OK, but how? We have four big clusters in this big picture:

1) Intelligent assets

2) Data communications infrastructure

3) Analytics and applications for reactive, preventive and predictive actions

4) People for making decisions

The intersection of these four clusters is a new way of Condition Monitoring, working hand in hand with continuous improvement. However, we don’t see condition monitoring as a magic wand but as an integration into the Total Productive Maintenance (TPM) and Reliability Centered Maintenance (RCM) frameworks. This involves training your machine operators and empowering them to set up their own programs.

For example, getting sensor data, processing it, and delivering a sophisticated vibration analysis isn’t the answer to everything. You have to define your requirements, and that includes understanding the value of performing Predictive Maintenance tasks: are they justifiable against the desired goal? Make sure you don’t fall into analysis paralysis when the goal is to reduce energy consumption in a short period of time.

Some of the solutions are inspired by industry reports claiming that 80% of machine maintenance is unplanned, and that unplanned maintenance costs three to four times more than planned maintenance.

You could say that, in order to become more proactive, you would rather have the critical parts of the factory analyzed on a one-to-three-month cycle. This also comes at a high cost: you need dedicated staff and data collection (such as vibration data). Moreover, it can take quite some time from when a trend starts to manifest itself until the alarm bells start ringing, which means some failures or costly repairs will still occur. So this is again reactive.

Instead, we propose:

■ Data: connect the devices

■ What: monitor them

■ Why: analyze the data

■ When: predict when something will happen

■ What if: optimize costs and production

We equip your machines with sensors. Information is relayed to a central collection box which accumulates the data and relays it via the internet to a dedicated computer or server. At that point, the data can be analyzed every 5 seconds, and any change in trend is messaged to the workers or engineers as soon as it occurs. Just like Bosch did in 11 factories as a kickstart:

Bosch has more than 100 sensors on each machine across factories in different locations. Atop each machine sits a stoplight showing its efficiency status. All kinds of data, from electricity consumption to compressed air, are collected and stored in a data center in Stuttgart.
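The trend-alerting step described above can be sketched in a few lines of Python. This is a minimal illustration, not production code; the window size and jump threshold are assumptions for the example:

```python
from collections import deque

def detect_trend_change(readings, window=3, threshold=0.5):
    """Flag the indices where the rolling average jumps by more than
    `threshold` between consecutive windows."""
    recent = deque(maxlen=window)
    prev_avg, alerts = None, []
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window:
            avg = sum(recent) / window
            if prev_avg is not None and abs(avg - prev_avg) > threshold:
                alerts.append(i)  # message the workers or engineers here
            prev_avg = avg
    return alerts

# Vibration amplitudes arriving every 5 seconds; a step change appears mid-stream.
stream = [1.0, 1.1, 0.9, 1.0, 2.5, 2.6, 2.4]
print(detect_trend_change(stream))  # → [5]
```

In a real deployment the stream would arrive from the gateway rather than a list, but the windowed comparison stays the same.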

Vibration is not the only signal: motor current and thermal data can also reveal machine health. Unbalanced loads produce minute disturbances in the current drawn by the motor, and there is successful research applying neural networks to this data with promising results.

Power monitoring can detect pump, engine or motor problems. By monitoring power delivered to the pumps, we can determine when problems like worn bearings, misaligned couplings or loose foundations occur. Just like the PlantOne solution from NGS is doing:

Electric motors are among the main assets of manufacturing companies. Their faults can cause several problems in industrial processes, and this product promises a quick jump into predictive maintenance, though the scope is larger. The main benefits are lower repair costs, reduced downtime, increased energy efficiency, and savings in employees’ time.

The system has four main components: (i) sensor devices (temperature and vibration), (ii) a gateway, (iii) a Remote Control and Service Room (RCSR), and (iv) an Open Platform Communications (OPC) server. The packet loss rate is monitored constantly. Gateway and router installations take more time than the other components.

It is trivial to program a threshold remotely: if the detected temperature exceeds the threshold, the sampling rate is automatically increased and events are sent upstream. These are all evaluated using moving average windows. It is also possible to convert the data from the time domain to the frequency domain.
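A minimal sketch of such a moving-average threshold rule; the threshold value, window size, and sampling intervals here are illustrative assumptions, not values from the paper:

```python
def next_sampling_interval(temps, threshold=27.0, window=5,
                           normal_s=60, fast_s=5):
    """Decide the next sampling interval from a moving average of the
    most recent temperature readings: when the average exceeds the
    threshold, sampling speeds up and events are sent upstream."""
    recent = temps[-window:]
    avg = sum(recent) / len(recent)
    return fast_s if avg > threshold else normal_s

print(next_sampling_interval([25, 26, 28, 29, 30]))  # → 5 (average 27.6 exceeds 27.0)
print(next_sampling_interval([24, 25, 26]))          # → 60 (normal operation)
```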

Vibration analysis also has a use case in ball bearings, which are used in all rotating machinery and whose movement dynamics contribute to the overall vibration of the machine. The idea is that defects on the bearing surface can be identified using frequency analysis of the vibration. Just like in this example:

Here the peak vibration amplitude shows an increasing trend, which rings the alarm bells. The Fast Fourier Transform (FFT) is applied to convert the signal from the time domain to the frequency domain.
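The FFT step can be illustrated with a synthetic vibration signal; `dominant_frequency` is a hypothetical helper for this sketch, not part of any product mentioned here:

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Convert a vibration signal to the frequency domain with an FFT
    and return the frequency of the strongest peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spectrum)]

# A 50 Hz "defect tone" sampled at 1 kHz for one second.
fs = 1000
t = np.arange(fs) / fs
vibration = np.sin(2 * np.pi * 50 * t)
print(dominant_frequency(vibration, fs))  # → 50.0
```

On real bearing data one would look for peaks at the characteristic defect frequencies of the bearing geometry rather than a single dominant tone.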

At Datapao, we aim to simplify factory upkeep to create value for sustainable operations.

A simplified data architecture can be envisioned like this:

Please note that this is only a PoC diagram. We don’t use Arduino at industrial scale.

We have open-source and traditional applications to make this flow run, such as Easy-IoT, Kaa, the Microsoft Azure IoT suite, and PTC ThingWorx. Most of the time the data arrives in JSON format.

This system will send warning emails when the temperature is greater than or equal to 27.
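A minimal sketch of such an alert, assuming the payload arrives as JSON (as noted above) and the threshold is in degrees Celsius; the SMTP host and email addresses are placeholders, not real endpoints:

```python
import json
import smtplib
from email.message import EmailMessage

TEMP_THRESHOLD = 27.0

def should_alert(payload_json, threshold=TEMP_THRESHOLD):
    """Parse a JSON sensor payload and decide whether it breaches the threshold."""
    reading = json.loads(payload_json)
    return reading["temperature"] >= threshold

def send_warning(payload_json, smtp_host="localhost", to="ops@example.com"):
    """Email the warning; host and addresses are placeholders for this sketch."""
    msg = EmailMessage()
    msg["Subject"] = "Temperature warning"
    msg["From"] = "monitor@example.com"
    msg["To"] = to
    msg.set_content(payload_json)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

payload = '{"device": "press-01", "temperature": 28.4}'
print(should_alert(payload))  # → True
```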

We can make it a bit more complex by adding layers like a Machine Learning module:

Predictive maintenance powered by ML typically decreases total machine downtime (such as changeover time) by 30 to 50% and extends operating life by 20 to 40%, while reducing maintenance costs by 10 to 40%. We have gotten the best results from deep neural networks so far.

We don’t need to limit ourselves to numeric variables when talking about ML applications. The sensors can also collect infrared images from different hotspots:

I think you can already imagine where this is going. Unbalanced current, minor cracks in insulators, contact problems, and rises and drops in voltage levels all affect the thermal footprint of the machinery.

Here we detect rising internal temperatures in electrical instruments proactively, at an early stage, using computer vision (a multilayer perceptron), achieving a defect-detection performance of 84% on the test set. The solution can save costs in repairs and outages.

The sensor data is not the only type of data we need to make these kinds of cyber-physical systems work. We also need manual data registrations and cost data to make it smarter.

For that, we need the help of Finance departments to create the cost data and include it in our computations. These indicators should look like this:

…so that we can understand the expected cost of a machine failure or changeover.
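As a toy example of such a computation (all figures below are illustrative; the real values would come from Finance):

```python
def expected_failure_cost(p_failure, downtime_hours, cost_per_hour, repair_cost):
    """Expected cost of a failure event: probability of failure times the
    total impact (lost production during downtime plus the repair itself)."""
    return p_failure * (downtime_hours * cost_per_hour + repair_cost)

# A 10% monthly failure probability, 8 h of downtime at $1,200/h, $5,000 repair.
print(expected_failure_cost(0.10, 8, 1200, 5000))  # expected cost ≈ $1,460 per month
```

Comparing this figure against the cost of sensors, monitoring, and planned maintenance is what makes the Predictive Maintenance tasks justifiable (or not).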

At the final decision level, the cyber-physical system either provides digital advice or controls the maintenance automatically through self-maintenance.

Our maturity model should look like this as we progress:

At the highest level, we create a digital twin to understand why something is happening.

Now, before closing this chapter, we will look into another IoT architecture, which gained traction after it was deployed, with good results, for the Madrid city traffic monitoring office.

The sensors send data about the average traffic speed and congestion (as the number of vehicles per hour). Near-real-time evaluation is needed to report heavy traffic and congestion; Esper CEP does it on the fly using pre-configured rules.

In recent years a hybrid CEP + ML approach has come into existence. Finding the optimum size of the training window, by exploiting the components of the time series data, is critical here. The error introduced by the prediction algorithm must also be taken into account as it propagates through the CEP.

Node-RED, an open-source visual tool, serves as the front-end interface.

Esper stores the queries and runs the data through them, rather than the other way around (storing the data first). It uses the Event Processing Language (EPL), which is SQL-like, so the commands we know are still valid here: SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, etc. The CEP engine is the core that evaluates the matching patterns.

Adaptive Moving Window Regression (AMWR): when new data arrives, it calculates an error over the moving window and retrains the model. The Lomb–Scargle method is used to determine the optimum window size, so the training window is adaptive in nature, and the size of the prediction window (the forecast horizon) is adaptive as well. The AMWR flow is as follows:

Size of the adaptive prediction horizon: the basic idea is to increase this window when we are more accurate and decrease it when performance is going down:
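A stripped-down sketch of this idea, substituting ordinary least squares for the SVR the paper uses; the window sizes, error tolerance, and horizon bounds are illustrative assumptions:

```python
import numpy as np

def amwr_forecast(series, train_window=15, horizon=3):
    """Fit a model on the most recent `train_window` points and predict
    `horizon` steps ahead (a linear fit stands in for the paper's SVR)."""
    recent = np.asarray(series[-train_window:], dtype=float)
    x = np.arange(len(recent))
    slope, intercept = np.polyfit(x, recent, 1)
    future_x = np.arange(len(recent), len(recent) + horizon)
    return slope * future_x + intercept

def adapt_horizon(horizon, mape, err_tol=0.10, lo=1, hi=10):
    """Grow the forecast horizon while accuracy holds; shrink it otherwise."""
    return min(horizon + 1, hi) if mape <= err_tol else max(horizon - 1, lo)

# A clean linear trend is forecast exactly by the linear stand-in model.
preds = amwr_forecast(list(range(20)), train_window=15, horizon=2)
print(np.round(preds, 6))  # → [20. 21.]
print(adapt_horizon(3, mape=0.05))  # accurate → horizon grows to 4
print(adapt_horizon(3, mape=0.50))  # inaccurate → horizon shrinks to 2
```

In the real AMWR loop, each arriving sample triggers a retrain on the freshly shifted window before the next forecast is issued.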

and a CEP rule can be written like this:

Basically, it generates an event when the average speed and the average traffic flow stay below a given value for 3 consecutive readings. If the input is predicted data, then the event lies in the future, and the authorities can act on it in advance. The inputs here are indeed predicted data; remember the architecture above.
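The same pattern can be expressed in plain Python as a stand-in for the EPL rule; the speed and flow thresholds below are illustrative, since the original rule's values are not given here:

```python
def congestion_events(readings, speed_limit=20.0, flow_limit=300.0, run=3):
    """Emit an event index whenever average speed AND traffic flow stay
    below their limits for `run` consecutive readings, mimicking the
    consecutive-match pattern of the CEP rule."""
    events, streak = [], 0
    for i, (speed, flow) in enumerate(readings):
        streak = streak + 1 if speed < speed_limit and flow < flow_limit else 0
        if streak >= run:
            events.append(i)
    return events

# (avg speed km/h, vehicles per hour); congestion builds over readings 1-3.
readings = [(25, 400), (18, 250), (15, 200), (12, 180), (30, 500)]
print(congestion_events(readings))  # → [3]
```

With predicted readings as input, the emitted index corresponds to a future time slot, which is what lets the authorities act before the congestion materializes.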

The Lomb–Scargle method outputs an optimum training window of 15 samples; using 15, we get the minimum MAPE (mean absolute percentage error).

The surprising result is that the predictions follow the actual data points almost identically. The reason is that any error in the predictions is incorporated and the model is updated accordingly, which prevents the error from propagating. There are two reasons for the high accuracy:

1) As new data arrives, AMWR takes the prediction error into account and retrains the model using more recent data. As the training window is very small, it is able to retrain and predict in near real time.

2) It tracks the error, and if the prediction error starts to increase, it decreases the size of the prediction window in order to maintain the level of accuracy.

AMWR, linear regression, CART, Random Forest, and SVM (with the RBF kernel mentioned above) are all compared. The best performer is AMWR (which, by the way, is based on SVR). Conventional, classic SVR can’t outperform AMWR-based SVR. Here is the comparison:

Here the average traffic speed suddenly drops to zero. AMWR can follow it precisely, but classic SVR cannot.

In this first chapter, we have seen how the physical and digital worlds are connected. At Datapao we believe this is a big deal. A June 2015 McKinsey report, “Unlocking the Potential of the Internet of Things,” suggests that the “IoT has a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025. That would be equivalent to about 11 percent of the world economy.” Similarly, Cisco estimates that the number of connected devices worldwide will double from 25 billion in 2015 to 50 billion in 2020.

Finally, we should add that today’s predictive maintenance tools are mostly single-point solutions, like vibration and anomaly detection for turbines or combustion or emissions monitoring. A true Industrial IoT would allow a factory manager or an asset owner to see how the whole system is working.

Stay interconnected, share and see you next time…

Resources used:

1) Application of IoT concept on predictive maintenance of industrial equipment (Radu Constantin Parpala and Robert Iacob)

2) Deep digital maintenance (Harald Rødseth, Per Schjølberg, and Andreas Marhaug)

3) Experimental Investigation for Distributed Defects in Ball Bearing using Vibration Signature Analysis (Sham Kulkarni and S. B. Wadkar)

4) Industrial Internet of Things monitoring solution for advanced predictive maintenance applications (Federico Civerchia, Stefano Bocchino, Claudio Salvadori, Enrico Rossi, Luca Maggiani, and Matteo Petracca)

5) Predictive Analytics for Complex IoT Data Streams (Adnan Akbar, Abdullah Khan, Francois Carrez, and Klaus Moessner)

6) Predictive Maintenance of Power Substation Equipment by Infrared Thermography Using a Machine-Learning Approach (Irfan Ullah, Fan Yang, Rehanullah Khan, Ling Liu, Haisheng Yang, Bing Gao, and Kai Sun)
