Gaurav Kaila
Oct 22, 2018 · 8 min read

In September of 2018, Microsoft launched the Azure Machine Learning service, which helps data scientists and machine learning engineers build end-to-end machine learning pipelines in Azure without worrying too much about the dev-ops behind training, testing and deploying models. Azure has a number of offerings in this space, such as Azure Notebooks, Azure Machine Learning Studio and Azure Batch AI. A comparison of the offerings is available here.

This article will focus on this newest Azure offering and cover the basic concepts, including an example of training your own machine learning model. The article breaks down as follows:

  1. Introduction to Azure Machine Learning (referred to as AML from now on)
  2. Setup Azure Machine Learning with VS-Code
  3. Using AML pipeline, with an example

Section 1: Introduction to AML

In order to work with AML, you need to be aware of the following concepts:

Workspace: A centralised place for all your artefacts (experiments, runs, deployments, images). It keeps a history of all your work, including registered machine learning models that can be used for prediction. To create a workspace:

  • Log on to the Azure portal
  • Search for Machine Learning Service workspaces
  • Add a new workspace

When you create a workspace, Azure will create a container registry, a storage account, Application Insights and a key vault for you. This enables you to store Docker images in the container registry, store data in the storage account, monitor model performance in Application Insights and store sensitive information, including compute target keys, in the key vault.

Experiment: Within the workspace, you can define experiments that contain individual training runs. Each training run you perform will associate itself with an experiment and a workspace. Defining logical high-level experiments will help you monitor the various training runs and their outputs.

Model: This is the heart of any machine learning process. AML provides the ability to register (version) models produced during each training run. Each registered model is physically stored in the storage account provided when the workspace was created. This enables machine learning practitioners to test and deploy a variety of versioned models without having to store them locally. Models produced by any of the machine learning libraries (scikit-learn, TensorFlow, PyTorch, etc.) can be registered.

Image: Docker images containing the model and the prediction (scoring) script can be created once the model is tested. AML provides the ability to create these images and version them, similar to model versioning. This enables multiple Docker images to be created and deployed with different versions of the model.

Deployment: The Docker images created can be deployed using Azure Container Instances. This is where the true power of AML lies. Automatically creating a load-balanced HTTP endpoint without having to worry about underlying infrastructure or deployment configuration helps machine learning practitioners focus on training and evaluating their model. AML helps collect application insights to monitor the performance of your deployment.


Section 2: Setup Azure Machine Learning with VS-Code

In the above section, we went through the basic concepts of AML. It is important to understand these concepts before one can start working with this service. In this section, we will focus on setting up our AML environment.

Setup VS-Code: My preferred IDE for working with AML is VS-Code (referred to as VSC from now on). This is motivated by the fact that there is an Azure extension available in VSC that enables a seamless connection to the AML workspace, giving us a visual view of our workspaces, experiments, models, images, and deployments.

Follow the steps below to setup VSC:

  1. Install VSC
  2. Download the Python, Azure, Azure Machine Learning and Visual Studio Code Tools for AI extensions from the marketplace in VSC. Enable the extensions; if correctly configured, an Azure icon will appear in the left-hand sidebar.
  3. Log in to your Azure account via the Azure extension. You can do this by refreshing the Azure extension; a login popup will appear with instructions. Once you are logged in, it will show you all your workspaces.
Azure Extension for VSC

Fork and clone AML repo: In order to get started quickly with AML, I have created a GitHub repo with an example that will help you quickly train and deploy a simple sklearn regression model. Please fork and clone the repository before continuing.

Setup conda environment: In the repo, there is a conda environment file available (environment.yml). As a good practice, I suggest creating a new virtual environment using conda. This will isolate the dependencies for the AML pipeline without breaking dependencies for your existing projects. To create a new conda environment with all the required packages:

conda env create -f environment.yml

This will create a new conda environment called myenv. Note: You can change the name of the environment by editing the environment.yml file. Now you are ready to train your first model using the AML pipeline.
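For reference, a conda environment file for this pipeline has roughly the following shape (the exact package list in the repo's environment.yml may differ; this is a hedged sketch):

```yaml
name: myenv
channels:
  - defaults
dependencies:
  - python=3.6
  - scikit-learn
  - pip:
      # Azure Machine Learning SDK, installed from PyPI
      - azureml-sdk
```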

Section 3: Using AML pipeline, with an example

To configure the pipeline, a configuration file is provided in the repo. Let us go over each section in the configuration file and its corresponding module in the pipeline.

Part 1: Workspace Configuration (ws_config)

Parameters for ws_config section in ml-config.ini (all this information can be found on the Azure portal)

In order to configure your workspace, a separate configuration file is generated under aml_config/config.json. To do this, enter the above parameters in the ws_config section of the ml-config.ini file and run the command below. (Note: you can run this via VSC, but make sure to activate the conda environment first.)

(myenv) python generate_wsconfig.py
Output of generate_wsconfig.py

Note: This step only needs to be done once per project unless you change your resource group and/or workspace.
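In essence, the script only has to copy three values from the ini file into the JSON layout that the AML SDK's Workspace.from_config() looks for. A minimal sketch of that logic (the section and key names below are assumptions based on this article, not the repo's exact code):

```python
import configparser
import json
import os


def generate_ws_config(ini_path="ml-config.ini",
                       out_path="aml_config/config.json"):
    # Read the ws_config section of the ini file ...
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)
    ws = cfg["ws_config"]

    # ... and write the JSON file that Workspace.from_config() expects.
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w") as f:
        json.dump({
            "subscription_id": ws["subscription_id"],
            "resource_group": ws["resource_group"],
            "workspace_name": ws["workspace_name"],
        }, f, indent=2)
```

After this file exists, every other script in the pipeline can connect to the workspace with a single Workspace.from_config() call instead of repeating the subscription details.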

Part 2: Training (train)

Parameters for train section in ml-config.ini

As an example, my configuration file looks like this:

Sample configuration for train section

train.py contains your training code; an example that trains a simple sklearn regression model is provided in the repository. main_train.py is the driver for training. The code is well commented and self-explanatory, but I will go through its key parts.

  • We create an experiment with the name specified in the ml-config.ini file and configure an Azure VM. A new compute instance with the specified configuration is created if no previous instance with the same name exists.
  • Upload data from a local folder to the Azure storage provided with the workspace. The data is downloaded onto the training VM from this storage location.
  • Submit your training script to the Azure training VM. It is recommended to store the train.py in the root directory of the project.
  • run is the logger object that monitors the performance of the model. It can log various metrics such as loss, accuracy, etc. In the train.py script, the run object is already instantiated and can be used for recording model metrics.
  • Register the trained model stored at “./outputs/ridge_1.pkl”. This is the same location where you store your model in the train.py script.
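To make the flow concrete, here is a minimal train.py in the spirit of the repo's example; the synthetic data and the alpha value are illustrative assumptions, not the repo's exact code. The try/except around the AML import lets the same script run both on the training VM and locally:

```python
import os
import pickle

import numpy as np
from sklearn.linear_model import Ridge

# When the script runs on the AML compute target, Run.get_context() returns
# the logger object mentioned above; locally, the import simply fails.
try:
    from azureml.core.run import Run
    run = Run.get_context()
except ImportError:
    run = None

# Illustrative synthetic data: y is a linear function of 10 features.
np.random.seed(0)
X = np.random.rand(100, 10)
y = X @ np.arange(1, 11)

model = Ridge(alpha=0.5)
model.fit(X, y)

# Log a metric to the run history (no-op when running locally).
mse = float(np.mean((model.predict(X) - y) ** 2))
if run is not None:
    run.log("mse", mse)

# Anything written to ./outputs is uploaded to the run's artifact store,
# which is why main_train.py registers the model from this path.
os.makedirs("outputs", exist_ok=True)
with open("outputs/ridge_1.pkl", "wb") as f:
    pickle.dump(model, f)
```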

Now you can run the training, and it should produce the following output:

(myenv) python main_train.py
Output (1/2) of main_train.py: creating VM and uploading local data to Azure storage
Output (2/2) of main_train.py: listing all data files, the path of the trained model and registering the model
After training, you can view and download your model locally for testing using the Azure extension in VSC

Part 3: Create Docker Image (docker)

As an example, my configuration file looks like this:

Sample configuration for docker section

The model downloaded above can be evaluated and then used to create a Docker image. The score.py script is used for prediction, and an example is provided in the repository. create_docker.py is the driver for creating the Docker image and is well commented. I will go through the key parts of the code.

  • Retrieve registered model by name and version. You can use the VSC IDE to look up the name and version of the model and feed that into the ml-config.ini file.
  • Define the required pip and conda packages in the configuration file for running prediction on the trained model.
  • Create Docker image from the model, prediction script (score.py) and required dependencies. Tags can be added for the Docker image.

Now you can create the Docker image by running:

(myenv) python create_docker.py
Output of create_docker.py

You can now see the Docker image being created under the Images section of your workspace in the VSC IDE.

Part 4: Create Deployment (deploy)

As an example, my configuration file looks like this:

Sample configuration for deploy section

To deploy the Docker image created above, run:

(myenv) python create_deployment.py

I will go through the important parts of the create_deployment.py script:

  • Choose the Docker image by providing its name and version. You can use the VSC IDE to look up the name and version of the image and feed them into the ml-config.ini file.
  • Create the deployment with the given number of CPU cores and amount of memory. Tags can be added for the deployed service.

The output will look something like this:

Output of create_deployment.py

After deploying the service, you can access the HTTP endpoint from the properties.json file of the deployment.

View service properties to get the HTTP endpoint of the service

Part 5: Test Deployment

I am using Postman to test the sklearn regression service created above, with the following input:

{
  "data": [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
  ]
}
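If you prefer code over Postman, the same request can be sent with a few lines of Python. The scoring URI below is a placeholder; replace it with the endpoint from your deployment's properties.json:

```python
import json
import urllib.request

# Placeholder endpoint; take the real URI from the deployment's properties.json.
scoring_uri = "http://YOUR-ACI-ENDPOINT/score"

# Same payload as the Postman test above.
payload = json.dumps({
    "data": [
        [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        [10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
    ]
})

req = urllib.request.Request(
    scoring_uri,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the service is deployed; this needs a live endpoint.
# print(urllib.request.urlopen(req).read().decode())
```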
Output of the HTTP request to the deployed service

To recap:

  1. We went through the basic concepts of AML in Section 1.
  2. We set up VS-Code and a conda environment for AML in Section 2.
  3. We went through a step-by-step example to train, deploy and test a simple sklearn regression model in Section 3.

I encourage you to use this pipeline for your machine learning use cases, including training and deploying deep learning models.


Disclaimer: The views represented in this article are those of the author and not of EY Ireland.

For more information: https://www.ey.com/gl/en/issues/business-environment/ey-global-innovation

EY Ireland

This publication is contributed to by the EY Ireland community of analytics and AI professionals. In this publication we bring you coverage of the latest developments in the fields of AI, analytics and RPA.

Thanks to Urvesh Bhowan

Gaurav Kaila

Written by

Data Science Manager @EY and Chief Data Scientist @IdeaChain; A hub for ideas, discussion and collaboration -http://ideacha.in

