Operationalize Your Deep Learning Models with Azure CLI 2.0
In this post you will learn how to deploy your deep learning models on a Kubernetes cluster as a web service via the Azure Command-Line Interface (Azure CLI) 2.0. This approach simplifies the model deployment process by using a small set of Azure ML CLI commands.
Specifically, we will show how to operationalize a trained Keras (TensorFlow) convolutional neural network (CNN) model. The goal of this multi-class classification problem is to perform object recognition on a set of images from the CIFAR-10 data set.
Model Operationalization Process
First, we configure the compute environment and prepare the trained model, scoring script, and conda dependency file in a local directory. Next, we provision the necessary Azure resources, such as a model management account and the Kubernetes cluster on which the model will be deployed. Finally, we deploy the web service.
The operationalization process is a 5-step method that helps data scientists and developers build end-to-end AI solutions. The diagram below summarizes the process and shows its key components:
In the rest of this blog post, we will cover the 5 steps that you need to perform to operationalize a deep learning model.
Compute environment configuration
In this example, we use a Deep Learning Virtual Machine (DLVM) running Linux on a Standard NC6 (6 vCPUs, 56 GB memory) instance as the compute resource on which we train and deploy the model. The Deep Learning Virtual Machine is a specially configured variant of the Data Science Virtual Machine (DSVM) that makes it easier to use GPU-based VM instances for training deep learning models. We use the following tools on this DLVM: Python 3, Jupyter Notebook, and the Azure CLI.
Here are the steps to create an instance of the Deep Learning Virtual Machine:
1. Navigate to the virtual machine listing on the Azure portal.
2. Select the Create button at the bottom to launch the creation wizard.
3. Fill in the inputs for each of the four steps enumerated on the right of the figure above.
From your local machine, SSH into your DLVM and launch the Jupyter server manually. Below are the steps to launch the Jupyter server:
1) In the VM’s CLI console, activate the conda environment: $ source activate py35
2) Start the Jupyter notebook server from the command line: $ jupyter notebook --no-browser --ip=*
3) In the output of the above command, locate http(s)://[all ip addresses on your system]:port_number/. For example, https://[all ip addresses on your system]:9999/.
4) On the Azure portal, create an inbound rule for port_number (e.g., 9999 in the above example).
5) On your local machine, open a web browser and go to http(s)://<VM IP address>:port_number/ as indicated in the above command output. When prompted for a password, enter the password you previously created.
By now, the Jupyter server should be running, and you can access your notebooks from the local copy of the repo. Make sure the kernel is set to Python [conda env: py35].
Train your model
The few lines of code below show you how to import the necessary libraries in Python:
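A representative import cell might look like the sketch below; it assumes the `tensorflow.keras` API, and the exact set of names is illustrative rather than copied from the original notebook:

```python
# Representative imports for a Keras (TensorFlow backend) CNN on CIFAR-10.
# Names below are typical for this task, not taken from the original post.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
```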
Now your deep learning model is ready to read in the training data. The CNN will learn, from the training data, the patterns that map the input images to the target classes, and it will output a model that captures these relationships.
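As a concrete illustration, here is a minimal CNN sketch for CIFAR-10 (10 classes, 32x32 RGB images), again assuming the `tensorflow.keras` API; the architecture and hyperparameters are placeholders, not the ones from the original post, and random data stands in for CIFAR-10 to keep the sketch self-contained:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# A small illustrative CNN: two conv/pool blocks, then fully connected layers.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),  # one probability per CIFAR-10 class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# In practice, load (x_train, y_train) via keras.datasets.cifar10; random
# placeholder data is used here so the sketch runs on its own.
x_train = np.random.rand(8, 32, 32, 3).astype("float32")
y_train = np.eye(10)[np.random.randint(0, 10, 8)]
model.fit(x_train, y_train, epochs=1, batch_size=4, verbose=0)
```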
Save your model
You are now ready to save the best model for operationalization. To pick the best model, you typically compute a loss metric that measures how closely the model’s predictions match the target classes. For classification problems, cross entropy is typically used as the loss metric.
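One common way to keep the best model on disk is Keras’s `ModelCheckpoint` callback; the sketch below assumes the `tensorflow.keras` API and uses a tiny stand-in model (in the post this would be the trained CIFAR-10 CNN):

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# Tiny stand-in model so the example is self-contained.
model = Sequential([Dense(10, activation="softmax", input_shape=(4,))])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# save_best_only=True writes the model only when the monitored cross-entropy
# loss improves, so the file on disk always holds the best model so far.
checkpoint = ModelCheckpoint("best_model.h5", monitor="loss",
                             save_best_only=True)

x = np.random.rand(16, 4).astype("float32")
y = np.eye(10)[np.random.randint(0, 10, 16)]
model.fit(x, y, epochs=2, callbacks=[checkpoint], verbose=0)

best_model = load_model("best_model.h5")  # reload the saved best model
```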
In order to create a web service, you will create a scoring script that loads the model, performs the prediction, and returns the result. Azure ML uses init() and run() functions inside this scoring script for that purpose. The init() function initializes the web service and loads the saved model. The run() function, executed on each scoring call, uses the model and the input data to return a prediction.
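A scoring script in this shape might look like the following sketch; the file name (e.g. score.py), model file name, and request/response field names are illustrative assumptions, not the ones from the original post:

```python
# Sketch of an Azure ML scoring script: init() runs once at web-service
# start-up to load the saved model, run() executes on every scoring call.
import json
import numpy as np

model = None

def init():
    """Load the saved Keras model when the service starts."""
    global model
    from tensorflow.keras.models import load_model
    model = load_model("cifar10_cnn.h5")  # assumed model file name

def run(raw_data):
    """Parse the request JSON, run the model, and return predictions."""
    images = np.array(json.loads(raw_data)["data"], dtype="float32")
    probs = model.predict(images, verbose=0)
    return json.dumps({"predicted_class": probs.argmax(axis=1).tolist()})
```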
Provision Azure Resources and Kubernetes Cluster
In this step, the following Azure resources will be created:
- Resource group defined in variable YOUR_RESOURCE_GROUP
- Machine Learning Model Management
- Cluster Environment (Microsoft.MachineLearningCompute/operationalizationClusters)
- Resource group created during the cluster environment provisioning (YOUR_RESOURCE_GROUP plus "-azureml-xxxxx")
- Container Registry
- Container Service
- Many other automatically provisioned resources
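The resources above can be provisioned with a few commands from the Azure ML CLI; the commands below are an illustrative sketch based on the preview azure-cli-ml extension, and the resource names, locations, and exact flags are assumptions that may differ in your CLI version:

```shell
# Illustrative provisioning commands (azure-cli-ml preview); names and
# locations below are placeholders.
az group create --name YOUR_RESOURCE_GROUP --location eastus2

# Create the model management account and make it the active one
az ml account modelmanagement create -n your_mm_account -g YOUR_RESOURCE_GROUP -l eastus2
az ml account modelmanagement set -n your_mm_account -g YOUR_RESOURCE_GROUP

# Provision the cluster environment (Kubernetes) and point the CLI at it
az ml env setup --cluster -n your_cluster -g YOUR_RESOURCE_GROUP -l eastus2
az ml env set -n your_cluster -g YOUR_RESOURCE_GROUP
```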
Model Deployment as Web Service
Finally, you can now deploy your previously trained model:
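With the environment set, a single CLI command creates the real-time web service; this sketch again assumes the preview azure-cli-ml extension, and the service name and file names (matching the scoring-script example above) are illustrative:

```shell
# Illustrative deployment command (azure-cli-ml preview); file names are
# assumptions, not taken from the original post.
az ml service create realtime \
    -n cifar10service \
    --model-file cifar10_cnn.h5 \
    -f score.py \
    -c conda_dependencies.yml \
    -r python

# Once deployed, score a sample request against the service
az ml service run realtime -i cifar10service -d '{"data": [...]}'
```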
In this blog post, we showed you how to operationalize a deep learning model on a Kubernetes cluster as a web service via Azure CLI.
With this tool, the containerized approach becomes a straightforward way to overcome dependency problems in deep learning model deployment. Moreover, it makes it convenient to initialize your Azure ML environment and deploy the model to the cluster with just a few CLI commands.