Predictive Maintenance: Why It’s Important and How to Implement It

Rohit Gupta
Published in Sentenai
4 min read · Feb 23, 2018

For companies that have been collecting machine data for years, an incredible opportunity exists in making that data actionable. Actionable data offers an invaluable competitive advantage: it lets companies streamline operational processes, optimize demand forecasting, and better understand their customers’ propensity to buy. Predictive maintenance (PdM) in particular is a core benefit of making machine data actionable, as it can decrease downtime and waste, leading to greater organizational efficiency.

Turning the idea of PdM into an actual deployment can be complex; however, several best practices can help drive results early in the process. For instance, it’s best to start small: learn a repeatable process on a dataset focused on a single use case. This exposes all stakeholders to the steps required and helps frame future PdM project discussions.

If you’re an organization looking to build a PdM pilot, get started by following these six steps:

1. DETERMINE THE USE CASE.

The goal of any PdM pilot should be to show that you have a dataset with a high likelihood of yielding actionable insights tied to a specific business outcome. Otherwise, your PdM use case won’t make it out of research and development. To determine whether a dataset exists to support your use case, ask:

  • Do we have enough data — both historical and currently being generated — to tell the complete story of the machine? (This can involve either datasets from a few machines operating for a couple of years, or datasets from many machines operating over a shorter period.)
  • Can we access this data off the factory floor? (For example, can we upload historic data or connect machines via IoT gateways to start posting the data?)
  • Do we have any other data sources that can augment this data, such as log files, maintenance records or weather data?
  • Do we have experts available who can describe the patterns of success or failure for a particular machine?
  • What is our desired business outcome? (For instance, is our goal to increase margins, reduce downtime or provide new offerings to customers?)
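The first question above — whether there is enough data — can be checked mechanically before any modeling begins. A minimal sketch, assuming per-machine first/last reading timestamps are available (the machine names and thresholds here are hypothetical):

```python
from datetime import datetime

def coverage_in_machine_years(machine_histories):
    """Sum the timespan of sensor history across machines, in years.

    machine_histories: dict mapping machine id -> (first_reading, last_reading)
    as datetime objects. Returns total machine-years of history.
    """
    total_days = sum(
        (last - first).days
        for first, last in machine_histories.values()
    )
    return total_days / 365.0

# Hypothetical example of the "few machines over a couple of years" case;
# the "many machines over a shorter period" case sums the same way.
few_long = {
    "press-1": (datetime(2015, 1, 1), datetime(2017, 1, 1)),
    "press-2": (datetime(2015, 6, 1), datetime(2017, 6, 1)),
}
print(round(coverage_in_machine_years(few_long), 1))  # ~4.0 machine-years
```

Whatever the exact coverage target, agreeing on one up front gives stakeholders an objective go/no-go criterion for the pilot.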

2. AGGREGATE AND ORGANIZE RELEVANT DATASETS.

Once you have your use case defined, the next step is to aggregate your data into a centralized place. There are typically two phases to this process: The first is uploading historic datasets to populate the models. This data may live in a variety of places, and typically requires a one-time effort for each dataset. The second phase is setting up the systems to post data continuously. Depending on connectivity, this could mean posting in batches or streaming readings as they happen.
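The continuous-posting phase might look like the following sketch. The batching logic is generic; the upload function is a placeholder for whatever transport your gateway uses (HTTP, MQTT, etc.), which the source does not specify:

```python
def batch_readings(readings, batch_size):
    """Group a stream of sensor readings into fixed-size batches for upload."""
    batch = []
    for reading in readings:
        batch.append(reading)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any partial final batch
        yield batch

def post_batch(batch):
    """Placeholder for an upload to the central store (e.g. via an IoT gateway).

    In a real deployment this would be an HTTP POST or MQTT publish;
    here it just returns the number of readings it would have sent.
    """
    return len(batch)

# Simulate 10 readings posted in batches of 4: sizes 4, 4, 2.
sent = [post_batch(b) for b in batch_readings(range(10), 4)]
print(sent)  # [4, 4, 2]
```

With poor connectivity you would choose a larger batch size and retry failed uploads; with a reliable link, a batch size of 1 approximates streaming each reading as it happens.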

3. EXPLORE THE DATA FOR INSIGHTS.

Now it’s time to begin exploring what questions can be answered with your datasets. Having subject matter experts on hand is critical here, as no one else will know the machines’ behaviors better. Work with them to establish what patterns can be represented by the data in question, and determine what real-world problems faced by your experts can be assisted or even solved by having a predictive model available.
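One way to make an expert-described pattern concrete is to test whether it is visible in the data at all. A minimal exploration sketch, assuming a single hypothetical vibration series and using a simple trailing-mean deviation check (not any particular anomaly-detection library):

```python
def rolling_anomalies(series, window, threshold):
    """Flag indices where a value deviates from the trailing-window mean
    by more than `threshold` (an absolute deviation, for simplicity)."""
    flagged = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mean = sum(trailing) / window
        if abs(series[i] - mean) > threshold:
            flagged.append(i)
    return flagged

# Hypothetical vibration readings: steady operation, then a spike of the
# kind a maintenance expert might associate with bearing wear.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 3.2]
print(rolling_anomalies(vibration, window=3, threshold=0.5))  # [7]
```

If a check like this flags the periods your experts recall as leading to failures, that is early evidence the dataset can support a predictive model; if it flags nothing, you may need more sensors or different signals.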

4. DEVELOP, TEST AND REPEAT MACHINE LEARNING MODELS.

Once you have your use case defined and have asked the necessary questions of your data, you can begin to develop machine learning models, test these models and repeat this process for as many scenarios as possible. The outcome from this phase might be realizing that the questions asked of the data aren’t appropriate for predictive models, or it could be discovering a model that’s worth testing on a live machine.
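The develop/test/repeat loop can be sketched independently of any specific algorithm. Here a trivial one-parameter threshold "model" stands in for a real ML model, and the hypothetical labels mark whether a machine failed; the shape of the loop — fit candidates on training data, score the winner on held-out data — is what carries over:

```python
def evaluate(threshold, examples):
    """Accuracy of a one-parameter 'model': predict failure when the
    sensor value exceeds the threshold."""
    correct = sum(
        (value > threshold) == failed
        for value, failed in examples
    )
    return correct / len(examples)

def develop_and_test(train, test, candidates):
    """Pick the best candidate on training data, then report held-out accuracy.

    In a real pilot the candidates would be model configurations and the
    inner loop a proper training procedure, repeated per scenario.
    """
    best = max(candidates, key=lambda t: evaluate(t, train))
    return best, evaluate(best, test)

# Hypothetical labeled readings: (sensor value, machine failed?).
train = [(0.2, False), (0.4, False), (0.9, True), (1.1, True)]
test = [(0.3, False), (1.0, True)]
best_threshold, holdout_accuracy = develop_and_test(train, test, [0.1, 0.5, 0.8])
print(best_threshold, holdout_accuracy)  # 0.5 1.0
```

A persistently poor held-out score is exactly the "questions asked of the data aren’t appropriate for predictive models" outcome described above — a legitimate and useful pilot result.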

5. DEPLOY TO A CONTROLLED GROUP OF MACHINES.

Now it’s time to validate the success of your machine learning model by deploying it to a group of machines. Depending on what you’re looking to prove, this can involve a few models running across multiple groups, or the same model running on a single group. No matter what the deployment process looks like, though, it’s critical to have a measurable outcome.
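"A measurable outcome" usually means a single number compared between the pilot group and a control group. A minimal sketch, with hypothetical monthly downtime figures and a percent-reduction metric chosen purely for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def downtime_reduction(control_hours, pilot_hours):
    """Measurable outcome for the pilot: percent reduction in mean
    monthly downtime for machines running the model vs. a control group."""
    control = mean(control_hours)
    pilot = mean(pilot_hours)
    return 100.0 * (control - pilot) / control

# Hypothetical monthly downtime hours per machine in each group.
control = [10.0, 12.0, 8.0]   # no predictive model
pilot = [6.0, 7.0, 5.0]       # model schedules maintenance early
print(round(downtime_reduction(control, pilot), 1))  # 40.0
```

Defining this metric before deployment, rather than after, keeps the validation honest and makes it easy to compare several models running across multiple groups.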

Also, keep in mind that the goal of building a PdM pilot shouldn’t just be to produce a deployable machine learning model. Rather, it should also help refine the internal processes required to turn concepts or ideas to actual models; there will undoubtedly be other projects in your company’s future that will require developing more machine learning models for other use cases.

6. PUT IT INTO PRODUCTION.

After walking through the above steps to complete your pilot and obtain success metrics, it’s time to deploy your machine learning model to all relevant machines in your organization. You will then begin receiving actionable insights that can directly affect both day-to-day tasks and long-term goals.

Depending on the amount of historical machine data available and the complexity of the questions you’re looking to answer, the PdM deployment process can take three to six months, and it can be repeated as many times as necessary. By continuing to ask relevant questions of the available datasets, these models will keep improving, allowing your company to make ever more refined, intelligent operational decisions as your business scales.

This story originally appeared in Dataconomy.
