Remaining Useful Life Insighter | Ram Arunachalam
Businesses that operate complex, high-value machinery in industries such as aerospace, construction and automotive require high uptime to remain profitable.
Remaining Useful Life (RUL) is a metric that helps businesses monitor and predict time-to-failure using real-time data. This enables them to take preventive action so that machinery does not fail during critical operations, and to schedule maintenance in a way that maximizes productive use.
In the case of automobiles, autonomous driving technology will necessitate RUL prediction of vehicle systems and components to ensure safe driving at all times. Imagine a near future of autonomous robo-taxis operated by an OEM. Maximizing the utility value of the fleet depends on how well this OEM is able to maximize the number of kilometers driven in a given time. Any time spent broken down is a loss to the business. A car that can report its problems, say an impending battery or motor failure, enables the OEM to prevent premature failure.
Predictive analytics can estimate the remaining useful life of different parts of a vehicle, and taking it a step further, prescriptive analytics can help minimize the degradation of these parts by recommending best usage conditions (or in this case the best driving conditions).
Approaches to finding the remaining useful life range from simple empirical modeling to more accurate but painstaking deep learning models.
- Empirical modeling: essentially a formula describing the relationship between outputs and inputs. Can be based on physical parameters observed in the lab. Not very accurate.
- Traditional Machine Learning models: can be used if there is sufficient labelled data available. In this case, the ‘label’ required is a physical reading of the remaining useful life, which is difficult to obtain, especially in large volume. With a low volume of labelled data, traditional machine learning models are difficult to train and use.
- Deep Learning: machine learning models that use neural networks to learn the relationship between input ‘features’ and outputs, even with a low volume of labelled data.
In this blog post, we will take a look at how Ram Arunachalam used deep learning methods to come up with a Remaining Useful Life model for electric motors.
Sidebar: let’s quickly go over the basic idea of deep learning:
The key difference between traditional machine learning and deep learning lies in the feature extraction step. In traditional machine learning, the features (or inputs) for the algorithm need to be engineered manually, but in deep learning, we can let the algorithm find the most relevant features itself.
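This contrast can be illustrated with a toy sensor signal. The hand-picked features below (mean, standard deviation, RMS) are common examples for vibration-style data, not Ram's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=1000)  # toy raw sensor signal, e.g. motor vibration

# Traditional ML: features are chosen and computed by hand.
manual_features = np.array([
    signal.mean(),                # average level
    signal.std(),                 # spread
    np.sqrt(np.mean(signal**2)),  # RMS, a common vibration feature
])
print(manual_features.shape)      # (3,)

# Deep learning: the raw signal is fed in as-is, and the first layers
# of the network learn which features matter for the task.
```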
Ram’s implementation of Deep Learning for Remaining Useful Life
Using deep learning techniques, Ram has been able to predict RUL without domain knowledge of motor degradation characteristics. This also reduces the time taken for RUL estimation, since the feature engineering (i.e. working out how parameters relate to failures) required by traditional methods is not needed.
One of the major challenges for prognostics applications is that high-quality labeled training data for failure scenarios is not easily obtained. In such cases, semi-supervised learning has the potential to predict RUL with high accuracy using a relatively small amount of labeled data.
Unsupervised deep learning techniques such as autoencoders make use of an initial pre-training stage in which raw unlabeled data is processed to extract high-level abstract features automatically. This procedure initializes the weights near a good local minimum, which serves as a better starting point for subsequent fine-tuning with supervised deep learning methods. This reduces model development time and improves the accuracy of RUL predictions.
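The two-stage idea can be sketched as follows. This is a minimal numpy illustration on made-up data, not Ram's actual network: a single-layer linear autoencoder is pre-trained on "unlabeled" sensor vectors, and its encoder is then reused under a simple least-squares regression head, standing in for supervised fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor data: many unlabeled samples, few labeled with RUL.
X_unlabeled = rng.normal(size=(500, 8))   # plentiful raw sensor vectors
X_labeled = rng.normal(size=(120, 8))     # the small labeled subset
y_rul = rng.uniform(0, 100, size=120)     # ground-truth RUL labels

def train_autoencoder(X, n_hidden=3, lr=0.01, epochs=200):
    """Pre-train a single-layer linear autoencoder on unlabeled data
    by gradient descent on the reconstruction error; return the encoder."""
    n = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n))
    for _ in range(epochs):
        H = X @ W_enc              # encode
        X_hat = H @ W_dec          # decode (reconstruction)
        err = X_hat - X
        # Gradients of the mean squared reconstruction error
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

# Stage 1: unsupervised pre-training on the plentiful unlabeled data
W_enc = train_autoencoder(X_unlabeled)

# Stage 2: supervised fitting on the encoded labeled data; a least-squares
# head here stands in for fine-tuning the full network on RUL labels.
H_labeled = X_labeled @ W_enc
coef, *_ = np.linalg.lstsq(H_labeled, y_rul, rcond=None)
predictions = H_labeled @ coef
print(predictions.shape)   # (120,)
```

The point of the sketch is only the structure: the encoder weights learned from unlabeled data give the supervised stage a head start.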
Another issue in such use cases is that diverse failure modes and operating conditions increase the complexity of the degradation. An unsupervised pre-training stage helps cope with this problem: it extracts degradation-related features before supervised learning begins, which leads to a better understanding of the underlying degradation mechanism. The aim of this system is to improve the accuracy of RUL prediction through unsupervised pre-training followed by supervised fine-tuning.
So what were the results?
The model was evaluated against ground-truth data, and the following fit was observed.
The graph shows that the model's predictions follow the same degradation pattern. The performance of the deep learning model is measured by the standard deviation of the residuals, i.e. the prediction errors. This shows the model handles noisy data well and also generalizes well despite fluctuations.
To check whether the neural network is predicting accurately, we can use tests such as a confusion matrix (suitable for classification) or the R2 score (suitable for regression). Since Ram's model outputs a quantitative value, making this a regression problem, he used the R2 (R-squared) score to check the accuracy of the model.
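Both metrics are easy to compute by hand. A minimal sketch with made-up numbers (an R2 of 1.0 would be a perfect fit; 0 means no better than always predicting the mean):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Made-up degradation values, for illustration only
y_true = np.array([10.0, 8.0, 6.0, 4.0, 2.0])   # ground truth
y_pred = np.array([9.5, 8.5, 5.0, 4.5, 2.5])    # model output

residuals = y_true - y_pred
print(round(np.std(residuals), 3))           # 0.632 (spread of the errors)
print(round(r2_score(y_true, y_pred), 3))    # 0.95
```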
Ram also used grid search to tune the model until the team was happy with the results. Eventually, Ram's model achieved an R2 score of 0.75, meaning it explains about 75% of the variance in the ground-truth data. Given that only 120 labelled instances of data were available, this is quite a good result!
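Grid search itself is simple: try every combination of candidate hyperparameter values and keep the best-scoring one. A bare-bones sketch follows; the parameter names and the scoring function are illustrative stand-ins, not Ram's actual configuration (in practice evaluate() would train the network and return its validation R2 score):

```python
from itertools import product

# Hypothetical hyperparameter grid, names chosen for illustration
param_grid = {
    "hidden_units": [16, 32, 64],
    "learning_rate": [0.001, 0.01],
    "batch_size": [8, 16],
}

def evaluate(params):
    """Stand-in for training the network and scoring it on a
    validation split; a toy function so the sketch runs end to end."""
    return 1.0 / (1 + abs(params["hidden_units"] - 32) / 32
                    + abs(params["learning_rate"] - 0.01)
                    + abs(params["batch_size"] - 16) / 16)

# Exhaustively try every combination and keep the best
best_score, best_params = float("-inf"), None
keys = list(param_grid)
for values in product(*param_grid.values()):
    params = dict(zip(keys, values))
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)
```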
RUL estimation is a critical step in asset management for businesses. A reliable prediction model can yield business results by predicting the best time for replacement (reducing downtime), or by slowing ageing through changed operating conditions (saving costs).
The approach used by Ram promises efficient RUL prediction with reduced amounts of labeled training data. Therefore, semi-supervised learning holds promise for real-world prognostics applications, even under the constraints of limited labelled data, complex failure modes and limited time to market.