Training Your First Distributed PyTorch Lightning Model with Azure ML
Full end-to-end implementations can be found on the official Azure Machine Learning GitHub repo.
If you are new to Azure, you can get started with a free subscription using the link below.
Create your Azure free account today | Microsoft Azure
What is PyTorch Lightning?
PyTorch Lightning is a lightweight PyTorch wrapper for high-performance AI research. Lightning is designed around four principles that simplify the development and scaling of production PyTorch models:
- Enable maximum flexibility
- Abstract away unnecessary boilerplate, but make it accessible when needed.
- Systems should be self-contained (i.e., optimizers, computation code, etc.).
- Deep learning code should be organized into 4 distinct categories: research code (the LightningModule), engineering code (which you delete, since it is handled by the Trainer), non-essential research code (logging, etc., which goes in Callbacks), and data (use PyTorch DataLoaders or organize them into a LightningDataModule).
Once you do this, you can train on multiple GPUs, TPUs, or CPUs, and even in 16-bit precision, without changing your code, which makes it a perfect fit for distributed cloud computing services such as Azure Machine Learning.
Additionally, PyTorch Lightning Bolts provides pre-trained models that can be wrapped and combined to prototype research ideas more rapidly.
What is Azure Machine Learning?
Azure Machine Learning (Azure ML) is a cloud-based service for creating and managing machine learning solutions. It’s designed to help data scientists and machine learning engineers leverage their existing data-processing and model-development skills and frameworks.
Azure Machine Learning provides the tools developers and data scientists need for their machine learning workflows, including:
- Azure Compute Instances that can be accessed online or linked to remotely with Visual Studio Code.
What is an Azure Machine Learning compute instance? - Azure Machine Learning
Connect to compute instance in Visual Studio Code (preview) - Azure Machine Learning
- Code, Data, Model Management
Tutorial: Get started with machine learning - Python - Azure Machine Learning
- Scalable Distributed Training and Cheap Low Priority GPU Compute
What is distributed training? - Azure Machine Learning
- AutoML and Hyperparameter Optimization
Tune hyperparameters for your model - Azure Machine Learning
What is automated ML / AutoML - Azure Machine Learning
- Container Registry, Kubernetes Deployment and MLOps Pipelines
MLOps: ML model management - Azure Machine Learning
Deploy ML models to Kubernetes Service - Azure Machine Learning
- Interpretability Tools and Data Drift Monitoring
Interpret & explain ML models in Python (preview) - Azure Machine Learning
Analyze and monitor for data drift on datasets (preview) - Azure Machine Learning
Check out some Azure ML best-practice examples in the official AzureML examples repository on GitHub.
Given the advantages of PyTorch Lightning and Azure ML, it makes sense to provide an example of how to leverage the best of both worlds.
Step 1 — Set up Azure ML Workspace
Connect to the workspace with the Azure ML SDK as follows:
from azureml.core import Workspace
ws = Workspace.get(name="myworkspace",
                   subscription_id='<azure-subscription-id>',
                   resource_group='myresourcegroup')
Step 2 — Set up Multi GPU Cluster
Create compute clusters - Azure Machine Learning
Learn how to create and manage a compute cluster in your Azure Machine Learning workspace. You can use Azure Machine…
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your GPU cluster (spaces are not allowed in cluster names)
gpu_cluster_name = "gpu-cluster"

# Verify that the cluster does not exist already
try:
    gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC12s_v3',
                                                           max_nodes=2)  # adjust scale limit as needed
    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
    gpu_cluster.wait_for_completion(show_output=True)
Step 3 — Configure Environment
To run PyTorch Lightning code on our cluster, we need to configure our dependencies; we can do that with a simple conda yml file.
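A minimal dependencies file for this setup might look like the following sketch (the file name and the unpinned package list are illustrative assumptions; pin versions for reproducible runs):

```yaml
# environment.yml (hypothetical name): conda dependencies for the training run
channels:
  - conda-forge
dependencies:
  - python=3.7
  - pip
  - pip:
      - torch
      - torchvision
      - pytorch-lightning
```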
We can then use the AzureML SDK to create an environment from our dependencies file and configure it to run on any Docker base image we want.
from azureml.core import Environment

environment_name = "pt-lightning"      # hypothetical environment name
environment_file = "environment.yml"   # path to the conda dependencies file
env = Environment.from_conda_specification(environment_name, environment_file)

# specify a GPU base image (one of Azure ML's curated images; match it to your CUDA version)
env.docker.enabled = True
env.docker.base_image = "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04"
Step 4 — Training Script
Create a ScriptRunConfig to specify the training script & arguments, environment, and cluster to run on.
We can use any example training script from the PyTorch Lightning examples, or our own experiments.
Step 5 — Run Experiment
For GPU training on a single node, specify the number of GPUs to train on (typically this corresponds to the number of GPUs in your cluster’s VM SKU) and the distributed mode, in this case DistributedDataParallel ("ddp"), which PyTorch Lightning expects as the arguments --gpus and --distributed_backend, respectively. See their Multi-GPU training documentation for more information.
from azureml.core import ScriptRunConfig, Experiment

cluster = ws.compute_targets[gpu_cluster_name]
src = ScriptRunConfig(source_directory='.',   # folder containing the training script
                      script='train.py',      # the training script from Step 4
                      arguments=["--max_epochs", 25, "--gpus", 2, "--distributed_backend", "ddp"],
                      compute_target=cluster,
                      environment=env)
run = Experiment(ws, 'pytorch-lightning-ddp').submit(src)
We can view the run logs and details in real time with the following SDK commands.
from azureml.widgets import RunDetails
RunDetails(run).show()
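On the training-script side, those command-line arguments have to be consumed somewhere. Lightning’s Trainer can generate the flags automatically via Trainer.add_argparse_args, but as a dependency-free sketch of what the script receives (flag names match the ScriptRunConfig arguments above; the defaults are assumptions), the parsing amounts to:

```python
import argparse

def parse_trainer_args(argv=None):
    """Parse the Trainer-style flags passed in by ScriptRunConfig."""
    parser = argparse.ArgumentParser(description="Illustrative Lightning-style argument parsing")
    parser.add_argument("--max_epochs", type=int, default=1)
    parser.add_argument("--gpus", type=int, default=0)
    parser.add_argument("--distributed_backend", type=str, default=None)
    return parser.parse_args(argv)

# Same values the ScriptRunConfig above submits to the cluster
args = parse_trainer_args(["--max_epochs", "25", "--gpus", "2", "--distributed_backend", "ddp"])
print(args.max_epochs, args.gpus, args.distributed_backend)  # 25 2 ddp
```

In a real script these values would simply be forwarded to the Trainer, e.g. Trainer(max_epochs=args.max_epochs, gpus=args.gpus, distributed_backend=args.distributed_backend).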
Next Steps and Future Post
Now that we’ve set up our first Azure ML PyTorch Lightning experiment, here are some advanced steps to try out; we will cover them in more depth in a later post.
1. Link a Custom Dataset from Azure Datastore
This example used the MNIST dataset from PyTorch datasets. If we want to train on our own data, we need to integrate with the Azure ML Datastore, which is relatively trivial; we will show how to do this in a follow-up post.
Create Azure Machine Learning datasets to access data - Azure Machine Learning
2. Create a Custom PyTorch Lightning Logger for AML and Optimize with Hyperdrive
In this example, all our model logging was stored in the Azure ML driver.log, but Azure ML experiments have much more robust logging tools that can be integrated directly into PyTorch Lightning with very little work. In the next post we will show how to do this and what we gain with HyperDrive.
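To give a flavor of what such an integration involves: a Lightning logger implements the log_hyperparams and log_metrics hooks. A real implementation would subclass pytorch_lightning.loggers.LightningLoggerBase and forward each call to the Azure ML Run object; the dependency-free sketch below (the class name is hypothetical) just shows the shape by collecting values in memory:

```python
class InMemoryAMLLogger:
    """Sketch of a Lightning-style logger; an Azure ML version would forward to Run.log()."""

    def __init__(self):
        self.hparams = {}   # hyperparameters logged once per run
        self.metrics = []   # list of (step, metrics-dict) pairs

    def log_hyperparams(self, params):
        # Lightning calls this once with the run's hyperparameters.
        self.hparams.update(params)

    def log_metrics(self, metrics, step=None):
        # Lightning calls this at each logging interval; an Azure ML
        # version would call run.log(name, value) for each metric here.
        self.metrics.append((step, dict(metrics)))

logger = InMemoryAMLLogger()
logger.log_hyperparams({"lr": 1e-3})
logger.log_metrics({"train_loss": 0.42}, step=1)
print(logger.metrics[0])  # (1, {'train_loss': 0.42})
```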
[DRAFT] Add logger for Azure Machine Learning by dkmiller · Pull Request #223 ·…
3. Multi-Node Distributed Compute with the PyTorch Lightning Horovod Backend
In this example we showed how to leverage all the GPUs on a single-node cluster; in the next post we will show how to distribute training across nodes with PyTorch Lightning’s Horovod backend.
4. Deploy our Model to Production
In this example we showed how to train a distributed PyTorch Lightning model; in the next post we will show how to deploy the model as an AKS service.
How and where to deploy models - Azure Machine Learning
If you enjoyed this article check out my post on 9 tips for Production Machine Learning and feel free to share it with your friends!
9 Advanced Tips for Production Machine Learning
I want to give a major shout-out to Minna Xiao from the Azure ML team for her support and commitment to working towards a better developer experience with open-source frameworks such as PyTorch Lightning on Azure.
About the Author
Aaron (Ari) Bornstein is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community, to solve real world problems with game changing technologies that are then documented, open sourced, and shared with the rest of the world.