The Machine Learning Lifecycle and MLOps: Building and Operationalizing ML Models — Part I

Kevin Petrie
Jul 1 · 7 min read

This article was originally published at eckerson.com

Machine learning was supposed to make things easy by computerizing human cognition. But it made life harder than ever for the data teams tasked with implementing it.

A rising number of enterprises implement machine learning (ML) to improve revenue and operations as they digitally transform their businesses. But ML introduces operational complexities and risks that need careful attention. Data teams must holistically manage the ML lifecycle to make their projects efficient and effective.

This blog kicks off a series that examines the ML lifecycle, which spans (1) data and feature engineering, (2) model development, and (3) ML operations (MLOps). This blog defines machine learning and then examines the data and feature engineering stage. Part 2 of the blog series will examine model development and MLOps. Subsequent blogs will examine the roles of key stakeholders — the business leader, data engineer, data scientist and developer — because these roles in particular must acquire new skills and collaborate to make ML work. The stakes run high because many projects today fail due to siloed roles and disjointed processes.

What is Machine Learning?

Let’s start at the beginning. Machine learning (ML) is a subset of artificial intelligence in which an algorithm discovers patterns in data. These patterns help people or applications predict, classify, or prescribe a future outcome. ML relies on a model, which is essentially an equation that defines the relationship between data inputs and outcomes. ML applies various techniques to create this model, including supervised learning, which studies known prior outcomes, and unsupervised learning, which finds patterns without knowing outcomes beforehand.

Once you create and train the ML model on historical data, you apply it to live production data. The model generates a score that helps people and applications create business value by predicting, classifying, or prescribing future outcomes — and taking action.
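The train-then-score flow described above can be sketched in a few lines. This is a minimal illustration with scikit-learn, an assumed library (the article names no specific tooling), and made-up fraud-style data: a model learns from historical outcomes, then scores a live record.

```python
# A minimal supervised-learning sketch with scikit-learn (an assumed
# library; the article names no specific tooling). A model is trained on
# historical data with known outcomes, then applied to live data.
from sklearn.linear_model import LogisticRegression

# Historical transactions: [amount in $1000s, merchant risk score] -> fraud?
X_train = [[0.1, 0.1], [5.0, 0.9], [0.12, 0.2], [7.0, 0.8]]
y_train = [0, 1, 0, 1]  # known prior outcomes (supervised learning)

model = LogisticRegression().fit(X_train, y_train)

# Score a live transaction: the model's probability that it is fraudulent.
score = model.predict_proba([[6.5, 0.85]])[0][1]
```

In unsupervised learning, by contrast, there is no `y_train`: the algorithm looks for structure in the input data alone.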

While the AI pioneer Arthur Samuel coined the term “machine learning” in 1959, the technology really gained steam in the 1980s and 1990s as an alternative to manual predictive models.

Machine learning (ML) is a subset of artificial intelligence in which an algorithm discovers patterns in data that help predict, classify, or prescribe an outcome.

Enterprises address many types of business problems with ML. For example:

  • A commercial bank uses ML to classify whether a transaction has a high risk of fraud based on merchant identity, merchant location, and size vs. prior transactions. Transactions classified as high risk trigger an extra authentication request of the merchant.
  • A real-estate firm uses ML to prescribe the market price of houses based on zip code, recent transactions, and local school ratings. These prescribed prices guide the asking and offer prices that agents recommend to their clients.
  • Nurses in a hospital use ML to classify the risk of major infections based on demographic data and patient vital signs. High risk classifications alert the nurses and prompt them to proactively treat patients.
  • A law firm uses ML to classify documents and extract information, such as key concepts, relevant laws, and primary stakeholders, to expedite research efforts.
  • A manufacturer uses ML to study physical sensors and factory service records to predict when robotic arms will break down. A prediction below a certain threshold triggers an alert for the plant manager to deploy a service technician.

To address use cases like these in large and complicated enterprise environments, you need to manage the ML lifecycle holistically. This entails three stages and nine individual steps.

  • Ingest and transform your input data (i.e., your historical data), label your outcomes, then define and store your features.
  • Select the ML technique you will use (such as linear regression, classification, or “random forests” that make use of decision trees), train your models, then store and manage them.
  • Put those models to work! Implement them and operate them in production workflows. Monitor their performance and the accuracy of their output.

Most importantly, data teams must rinse and repeat. They must identify data drift — i.e., changes in market conditions or other aspects of your environment — then pull their ML models out of production, re-train those models and re-implement them. Figure 1 illustrates the three stages of the ML lifecycle.

Figure 1. The three stages of the ML lifecycle (Source: eckerson.com)
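The "rinse and repeat" loop hinges on detecting data drift before model accuracy degrades. One simple approach, sketched below with illustrative numbers and an assumed two-sigma threshold, is to compare a feature's live distribution against its training distribution:

```python
# Minimal data-drift check: flag drift when a feature's live mean moves
# too far from its training mean. The 2-sigma threshold is an
# illustrative assumption; real monitoring uses richer statistics.
from statistics import mean, stdev

def has_drifted(train_values, live_values, n_sigmas=2.0):
    """True when the live mean is more than n_sigmas training
    standard deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(live_values) - mu) > n_sigmas * sigma

train = [100, 110, 95, 105, 98, 102]   # historical house prices ($1000s)
stable = [101, 99, 104]                # live data, similar market
shifted = [160, 170, 155]              # live data after a market shift

print(has_drifted(train, stable))      # expect False
print(has_drifted(train, shifted))     # expect True
```

When the check fires, the team pulls the model from production, re-trains it on fresh data, and re-deploys it.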

Data and Feature Engineering

Let’s define each step of data and feature engineering, and who performs it. To set the table for upcoming blogs, we’ll also describe (in italics) the key challenges that make ML projects an “all-hands-on-deck” endeavor. Busy business leaders, data engineers, data scientists and developers need to acquire new skills and help one another.

  1. First, data scientists, data engineers, and ML engineers need to collect all the historical input data that’s potentially relevant to the business problem they need to solve. They design, configure, and deploy data pipelines that ingest the input data into a repository such as a data lake. They merge, cleanse, and format the data. The data scientist provides close oversight to ensure the resulting dataset fits analytics requirements.

Data engineers need to manage high volumes, varieties and velocities of data across heterogeneous hybrid and multi-cloud environments. They collaborate with data scientists to transform data into a usable format and structure for ML. Data engineers also need to create pipelines that data scientists can access and manipulate.
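The merge-cleanse-format work of step 1 can be sketched with pandas, an assumed tool (the article names no specific stack), on two tiny invented sources:

```python
# A sketch of step 1: ingest two raw sources, merge them, cleanse
# missing values, and format the result into an analytics-ready dataset.
# The data and column names are illustrative assumptions.
import pandas as pd

transactions = pd.DataFrame({
    "merchant_id": [1, 2, 2, 3],
    "amount": [120.0, 5400.0, None, 75.0],   # one missing value to cleanse
})
merchants = pd.DataFrame({
    "merchant_id": [1, 2, 3],
    "location": ["US", "DE", "US"],
})

dataset = (
    transactions
    .merge(merchants, on="merchant_id", how="left")  # merge sources
    .dropna(subset=["amount"])                       # cleanse missing amounts
    .assign(amount=lambda d: d["amount"].round(2))   # format numeric column
)
print(len(dataset))  # 3 clean rows
```

In production the same logic runs inside managed pipelines that data scientists can access and re-run on demand.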

2. Next, data engineers, ML engineers, and data scientists collaborate with business owners to “label” various outcomes in their historical datasets. This means they add tags to data to easily identify historical outcomes, such as robotic arm failures, fraudulent transactions, or the prices of recent house sales. While data labeling is trivial in those examples, it can get tricky with unstructured data. You need to label historical images — for example, “dogs,” “cats,” etc. — to help the algorithm create an accurate ML model for image recognition. Similarly, you need to label customer emails and social media posts as “positive” or “negative” to create an accurate model for classifying customer sentiment. You can view a label as the variable you want to predict.

Data engineers and data scientists need to label outcomes accurately and at high scale. This requires a programmatic approach, automation, and assistance from business owners that best understand the domain.

Note that labeling applies to supervised ML only. Unsupervised ML, by definition, studies input data without known outcomes, which means the data has no labels.
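The programmatic labeling mentioned above can be as simple as rule-based tagging that handles the easy cases and routes the rest to humans. A toy sketch for the sentiment example (the keyword lists are illustrative assumptions):

```python
# A sketch of programmatic sentiment labeling (step 2). Keyword rules
# handle clear-cut records; ambiguous ones go to a human labeler.
# The keyword sets are illustrative assumptions.
POSITIVE = {"thrilled", "great", "love"}
NEGATIVE = {"unacceptable", "broken", "refund"}

def label_sentiment(text):
    words = set(text.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "unlabeled"  # route to a human labeler

emails = [
    "Thrilled with the new dashboard",
    "This outage is unacceptable",
    "Please update my billing address",
]
labels = [label_sentiment(e) for e in emails]
print(labels)  # ['positive', 'negative', 'unlabeled']
```

Domain experts supply and refine the rules; automation applies them at scale.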

3. Now the data scientist, data engineer, and ML engineer turn to feature engineering, which means extracting or deriving, then sharing, “features” — the key attributes that really drive outcomes — from all that input data. Features become the filtered, clean inputs for an ML algorithm to study, so that it does not drown in data while creating the model. Feature engineering can dictate the success or failure of ML projects: without it, you have “garbage in, garbage out.” It entails some artistry, because it requires domain knowledge and judgment as well as statistical techniques. For example:

  • The data scientist finds from conversations with realtors that home buyers always cite recent sale prices when determining their own offer price. The recent home prices therefore become a feature.
  • The data scientist and data engineer use a program to count the number of times keywords appear in the service records of (1) repeat customers and (2) former customers. The most frequent words or phrases — perhaps certain product names, adjectives such as “thrilled” or “unacceptable” — become features.
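The keyword-count program in the second bullet can be sketched in a few lines (the records and keywords here are illustrative assumptions):

```python
# A sketch of the keyword-count feature: count how often candidate
# keywords appear in the service records of each customer group.
# Record text and keyword choices are illustrative assumptions.
from collections import Counter

def keyword_counts(records, keywords):
    counts = Counter()
    for record in records:
        for word in record.lower().split():
            if word in keywords:
                counts[word] += 1
    return counts

keywords = {"thrilled", "unacceptable"}
repeat = ["Customer thrilled with quick repair", "Thrilled with the fix"]
former = ["Service delay was unacceptable", "Unacceptable wait before repair"]

print(keyword_counts(repeat, keywords))
print(keyword_counts(former, keywords))
```

Keywords whose counts differ sharply between the two groups become candidate features for a churn model.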

Some enterprises now use feature stores to assist their feature engineering efforts. Feature stores are platforms for defining features, curating them and then serving them to various ML algorithms and models. They also can assist with data transformation as described in step 1 above.

Data scientists need to consult business domain experts to make the right judgment calls about features. They also need to work closely with data engineers to create, manage and reuse the right features for numerous models in their organization.
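The core idea of a feature store — define a feature once, then serve it consistently to many models — can be illustrated with a toy registry. This is a deliberately simplified sketch; real feature stores add versioning, persistent storage, and online/offline serving:

```python
# A toy illustration of the feature-store idea: feature definitions are
# registered once and served to any model that requests them by name.
# Real feature stores (not shown) add versioning, storage, and serving.
class FeatureStore:
    def __init__(self):
        self._features = {}  # feature name -> function computing it

    def register(self, name, fn):
        self._features[name] = fn

    def serve(self, names, raw_row):
        """Compute the requested features for one raw input row."""
        return {name: self._features[name](raw_row) for name in names}

store = FeatureStore()
store.register("price_per_sqft", lambda r: r["price"] / r["sqft"])
store.register("has_top_school", lambda r: r["school_rating"] >= 8)

row = {"price": 400_000, "sqft": 2_000, "school_rating": 9}
print(store.serve(["price_per_sqft", "has_top_school"], row))
# {'price_per_sqft': 200.0, 'has_top_school': True}
```

Because every model pulls `price_per_sqft` from the same definition, teams avoid the subtle inconsistencies that arise when each project re-derives its own features.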

Now that data teams have ingested and transformed their historical input data, labeled the historical outcomes and engineered their features, they are ready to start building the ML model. This model will define the relationship between features and labels, as shown in Figure 2.

Figure 2. The ML model defines the relationship between features and labels (Source: eckerson.com)

The ML model is an equation that defines how “features,” or key attributes of your input data, relate to outcomes or predictive variables known as “labels.”

Data and feature engineering steers the success or failure of MLOps, which in turn steers the success of enterprise ML projects. Data teams that assemble the right data, label their outcomes correctly, and devise the right features will ensure those machines do good rather than harm. They might just make things easier for the companies that implement them.

Now that we understand the data and feature engineering phase, we will examine ML model development and operationalization in Part 2 of our blog series.
