Serverless Machine Learning — Free Online Course

Jim Dowling
Feature Stores for ML
4 min read · Oct 6, 2022

The time needed to create AI-enabled services with machine learning has been decreasing in recent years. Still, when it comes to putting models in production, developers typically spend weeks or even months learning about the infrastructure needed to operate analytical or operational machine learning systems.

Lift your ML skills up to the sky by building prediction services, not building ML infrastructure. [Image from Pixabay]

With serverless machine learning (ML), you can build analytical or operational machine learning systems without first having to become an expert in Kubernetes or cloud computing. You only need to be able to write Python programs that:

  • process data into features (feature pipelines),
  • train models using features (training pipelines), and
  • make predictions with new data (inference pipelines).
Serverless ML systems consist of three independent feature/training/inference pipelines and either provide a serverless UI or are used to AI-enable an existing service. [Image by author]
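The three programs above can be sketched as plain Python functions. This is a minimal, illustrative sketch assuming pandas and scikit-learn, with made-up column names rather than code from the course:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def feature_pipeline(raw: pd.DataFrame) -> pd.DataFrame:
    """Feature pipeline: process raw data into model-ready features."""
    features = raw.copy()
    # Example engineered feature (illustrative)
    features["petal_ratio"] = features["petal_length"] / features["petal_width"]
    return features

def training_pipeline(features: pd.DataFrame) -> LogisticRegression:
    """Training pipeline: train a model using the features."""
    X = features.drop(columns=["species"])
    y = features["species"]
    model = LogisticRegression(max_iter=200)
    model.fit(X, y)
    return model

def inference_pipeline(model, new_features: pd.DataFrame) -> pd.Series:
    """Inference pipeline: make predictions on new feature rows."""
    return pd.Series(model.predict(new_features), index=new_features.index)
```

In a serverless ML system, each function runs as its own independently scheduled program, exchanging data through a feature store and model registry rather than in-process.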

These Python programs can easily be scheduled to run as pipelines on serverless compute infrastructure. The features and models your pipelines produce should be managed by a serverless feature store and model registry. There are also serverless user interfaces (UIs) that can be used to bring your model to life. Even the UI can be written in Python, for example, with Streamlit Cloud.
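For scheduling, a cron trigger is enough. As an illustrative sketch (the file and script names here are assumptions, not the course's repository layout), a GitHub Actions workflow that runs a feature pipeline once a day could look like:

```yaml
# .github/workflows/feature-pipeline.yml (illustrative)
name: daily-feature-pipeline
on:
  schedule:
    - cron: "0 6 * * *"  # every day at 06:00 UTC
  workflow_dispatch: {}  # also allow manual runs
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - run: python feature_pipeline.py
```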

Build a Serverless Prediction Service by the end of the First Module

Your first end-to-end prediction service, including model performance monitoring. [Image by author]

The first module includes background theory on how to break up monolithic end-to-end ML pipelines into feature, training, and inference programs. We build a prediction service using the Iris Flower Dataset. The service generates a new flower every day, and the model predicts which Iris species it is. A GitHub Pages website not only shows the predicted and actual flowers, but also provides model monitoring support. It shows a recent history of predictions and outcomes (so you can see whether the model is performing well), and a confusion matrix of all historical predictions to help you understand where the model is less reliable (for example, confusing Virginica and Versicolor flowers).
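The daily step can be sketched roughly as follows; the value ranges and function names are illustrative assumptions, and any model object with scikit-learn's `predict` interface would work:

```python
import random

# Approximate min/max ranges of the Iris dataset's four measurements (cm)
FEATURE_RANGES = {
    "sepal_length": (4.3, 7.9),
    "sepal_width": (2.0, 4.4),
    "petal_length": (1.0, 6.9),
    "petal_width": (0.1, 2.5),
}

def generate_flower(rng: random.Random) -> dict:
    """Synthesize one 'new' flower by sampling within observed ranges."""
    return {name: round(rng.uniform(lo, hi), 1)
            for name, (lo, hi) in FEATURE_RANGES.items()}

def predict_species(model, flower: dict):
    """Predict the species label for a single synthesized flower."""
    row = [[flower[k] for k in FEATURE_RANGES]]  # keep feature order fixed
    return model.predict(row)[0]
```

Each day's prediction and its actual label are then logged, which is what lets the website show the recent prediction history and the cumulative confusion matrix.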

The Iris Flower Prediction Service above runs on free serverless services:

  • GitHub Actions for running feature and inference pipelines on a schedule,
  • Hopsworks.ai for storing features and models, and
  • GitHub Pages to show the UI for our Iris Flower Prediction Service.
The first prediction service you build uses free serverless services: GitHub Actions for compute, Hopsworks.ai for feature and model storage, and GitHub Pages for a user interface. [Image by author]

Prerequisites

The only prerequisite is that you can program in Python. There is a basic introductory video for those with no prior experience in machine learning, and, of course, it is advantageous to have already taken a course in machine learning. Experience with Pandas and GitHub is also helpful, although not required.

Learning Outcomes

On successful completion of the serverless ML course, you should have acquired the following skills:

  • Develop and operate AI-enabled (prediction) services on serverless infrastructure
  • Develop and run serverless feature pipelines
  • Deploy features and models to serverless infrastructure
  • Understand model-independent vs model-dependent transformations
  • Train models and run batch inference pipelines
  • Develop a serverless UI for your prediction service
  • Apply MLOps fundamentals: versioning, testing, data validation, and operations
  • Develop and run a real-time serverless machine learning system
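On the model-independent vs model-dependent distinction in the outcomes above, a simplified illustration (the scaler here is a toy re-implementation, not scikit-learn's): an aggregation produces the same feature no matter which model consumes it, while a scaler's statistics are fit on one model's training data and must be reapplied identically at inference time:

```python
import numpy as np

def total_spend(amounts: np.ndarray) -> float:
    """Model-independent transformation: the result does not depend on
    any model or training split, so it belongs in the feature pipeline."""
    return float(amounts.sum())

class MeanStdScaler:
    """Model-dependent transformation: mean/std are fit on the training
    data, so they must be stored and reused verbatim at inference."""
    def fit(self, X: np.ndarray) -> "MeanStdScaler":
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0)
        return self

    def transform(self, X: np.ndarray) -> np.ndarray:
        return (X - self.mean_) / self.std_
```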

Course Contents

The course covers the following topics.

  • Pandas and ML Pipelines in Python. Write your first serverless app.
  • The Feature Store for Machine Learning. Feature engineering for a credit-card fraud serverless app.
  • Training Pipelines and Inference Pipelines
  • Bring a Prediction Service to Life with a User Interface (Gradio, Github Pages, Streamlit)
  • Automated Testing and Versioning of features, training data, and models
  • Real-time serverless machine learning systems. Project presentation.
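As a flavor of the feature engineering involved in the credit-card fraud module, here is an illustrative sketch (the column names and chosen features are assumptions, not course code) that turns a card's transaction history into per-transaction features:

```python
import pandas as pd

def card_activity_features(tx: pd.DataFrame) -> pd.DataFrame:
    """Compute per-card history features for a transactions DataFrame
    with columns: card_id, datetime, amount."""
    tx = tx.sort_values("datetime")
    grouped = tx.groupby("card_id")
    out = tx.copy()
    # How many transactions this card made before the current one
    out["prev_tx_count"] = grouped.cumcount()
    # Total amount spent on this card before the current transaction
    out["prev_tx_amount_sum"] = grouped["amount"].cumsum() - out["amount"]
    return out
```

Features like these would then be written to the feature store, keyed by card and time, so that training and inference pipelines read the same values.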

Who is the target audience?

You have taken a course in machine learning (ML) and you can program in Python. You want to take the next step beyond training models on static datasets in notebooks. You want to be able to build a prediction service around your model.

Why is this course different?

You don’t need any operations experience beyond using GitHub and writing Python code. You will learn the essentials of MLOps: versioning artifacts, testing artifacts, validating artifacts, and monitoring and upgrading running systems. You will work with raw and live data — you will need to engineer features in pipelines. You will learn how to select, extract, compute, and transform features.

Will this course cost me money?

No. You will become a serverless machine learning engineer without having to pay to run your serverless pipelines or to manage your features, models, or user interface. We will use GitHub Actions and Hopsworks, both of which have generous time-unlimited free tiers.

Register now at Serverless ML Course

The serverless ML course, www.ml-serverless.org, is organized and run by featurestore.org. The course started in October 2022 and runs for six weeks, but it is self-paced, so you can complete modules whenever it suits you.


@jim_dowling CEO of Logical Clocks AB. Associate Prof at KTH Royal Institute of Technology Stockholm, and Senior Researcher at RISE SICS.