The Artificial Intelligence Conference in London is a relatively new addition to the list of conferences O’Reilly hosts worldwide. Its aim is to create a forum for the ever-growing AI community to explore the most pressing issues and innovations in applied AI. Talks covered topics ranging from practical business applications of AI and compelling AI-enabled use cases to technical trainings and deep dives into successful AI projects.
In our session, “A day in the life of a data scientist in an AI company”, we presented a scientific framework that helps organizations systematically discover opportunities to create value from data, qualify new opportunities and assess their fit and potential, build a team that can smoothly implement end-to-end advanced analytics pilots and projects, and produce sustainable, ongoing business value from data. In particular, we shared a few important concepts, such as the Machine Learning Workflow and the Team Workspace.
In the session, we also introduced the audience to Azure AI’s latest offering Azure Machine Learning Service. Azure Machine Learning Service (Preview) is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning Service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.
The Machine Learning Workflow
The Machine Learning workflow is an agile, iterative data science framework to deliver predictive analytics solutions and intelligent applications efficiently. It helps improve team collaboration and learning. It contains a distillation of the best practices and structures that facilitate the successful implementation of data science initiatives.
The goal is to help companies fully realize the benefits of their analytics program. The life cycle outlines the major stages that projects typically execute, often iteratively:
- Business Understanding
- Data Acquisition and Understanding
- Modeling
- Deployment
- Customer Acceptance and Consumption
The following diagram provides a view of the tasks (in blue) associated with each stage of the life cycle:
The Team Workspace
In Azure Machine Learning Service, the workspace is the central location where a team collaborates. It manages access to compute targets, data stores, registered models, Docker images, and deployed web services, and it keeps track of all the experiment runs performed within it. Data scientists can manage the creation of workspaces and experiments, and the authorization to use them, from the Python SDK.
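The snippet below sketches how such a workspace can be created from the Python SDK. It assumes the azureml-sdk package is installed and that you are logged in to an Azure subscription; the subscription ID is a placeholder you would fill in with your own.

```python
from azureml.core import Workspace

# Create (or retrieve) an Azure ML workspace in the given subscription.
# The subscription ID below is a placeholder.
ws = Workspace.create(name="Demo",
                      subscription_id="<your-subscription-id>",
                      resource_group="Contoso",
                      create_resource_group=True,  # create "Contoso" if it does not exist
                      location="eastus2")

# Persist the workspace details locally so later scripts can reload it
# with Workspace.from_config() instead of repeating the parameters.
ws.write_config()
```

Writing the configuration file once means every subsequent script in the project can attach to the same workspace with a single `Workspace.from_config()` call.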
You can use Python to get started with Azure Machine Learning. In the snippet above, we create a workspace called “Demo” in the resource group “Contoso”, within the given subscription. The workspace is created in the Azure region “eastus2”.
You can create multiple workspaces, and each workspace can be shared by multiple people. When sharing a workspace, control access to it by assigning the following roles to users:

- Owner — full access to the workspace, including the ability to grant access to others
- Contributor — can create, edit, and delete assets in the workspace
- Reader — read-only access to the workspace
When you create a new workspace, it automatically creates several Azure resources that are used by the workspace:
- Azure Container Registry — Registers the Docker containers used during training and model deployment.
- Azure Storage — Used as the default data store for the workspace.
- Azure Application Insights — Stores monitoring information about your models.
- Azure Key Vault — Stores secrets used by compute targets and other sensitive information needed by the workspace.
Azure Machine Learning Deployment Workflow
With Azure Machine Learning Service, once the data scientist builds a satisfactory model, the trained model can be easily put into production and monitored.
The following diagram illustrates the complete deployment workflow:
In the next few paragraphs, we will show how to perform the following steps:
- Register the model in a registry hosted in your Azure Machine Learning Service workspace
- Register an image that pairs a model with a scoring script and dependencies in a portable container
- Deploy the image as a web service in the cloud or to edge devices
Step 1: Register model
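Registration uploads a trained model file into the registry hosted in your workspace, where it is versioned automatically. A minimal sketch, assuming a model has already been serialized to a local file (the file and model names here are illustrative placeholders) and that the workspace configuration was saved earlier:

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Attach to the workspace saved earlier with ws.write_config().
ws = Workspace.from_config()

# Register the serialized model file in the workspace's model registry.
# "sklearn_model.pkl" and "demo-model" are placeholder names.
model = Model.register(workspace=ws,
                       model_path="sklearn_model.pkl",  # local file to upload
                       model_name="demo-model",         # name in the registry
                       description="Model from the AI London demo")

print(model.name, model.version)
```

Registering the same name again creates a new version rather than overwriting, so earlier deployments keep pointing at the exact model they were built from.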
Step 2: Register image
In this second step, you need to create your scoring script, your environment file, and your image configuration file.
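As a sketch of how those three pieces fit together in the preview SDK: the environment file declares the packages the scoring script needs, the image configuration pairs the scoring script with that file, and the image build bakes in the registered model. File names such as `score.py` and `myenv.yml`, and the model and image names, are illustrative; `score.py` must define an `init()` function that loads the model and a `run(raw_data)` function that returns predictions.

```python
from azureml.core import Workspace
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage
from azureml.core.model import Model

ws = Workspace.from_config()

# Environment file: declare the packages the scoring script depends on.
conda_deps = CondaDependencies.create(conda_packages=["scikit-learn"])
with open("myenv.yml", "w") as f:
    f.write(conda_deps.serialize_to_string())

# Image configuration: pair the scoring script with the environment file.
image_config = ContainerImage.image_configuration(execution_script="score.py",
                                                  runtime="python",
                                                  conda_file="myenv.yml")

# Build the Docker image, packaging the registered model with the script.
model = Model(ws, name="demo-model")  # name used at registration
image = ContainerImage.create(name="demo-image",
                              models=[model],
                              image_config=image_config,
                              workspace=ws)
image.wait_for_creation(show_output=True)
```

The resulting image is stored in the workspace’s Azure Container Registry, so the same portable container can be deployed to different targets without rebuilding.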
Step 3: Deploy image
You can deploy registered images to the cloud or to edge devices. Here we deploy to Azure Container Instances, which offers a simple way to run a container in Azure without provisioning any virtual machines and without adopting a higher-level service.
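A sketch of that deployment, continuing from the image registered in the previous step (the image and service names are the illustrative ones used there):

```python
from azureml.core import Workspace
from azureml.core.image import ContainerImage
from azureml.core.webservice import AciWebservice, Webservice

ws = Workspace.from_config()

# Retrieve the image built in the previous step.
image = ContainerImage(ws, name="demo-image")

# Size the container instance; ACI requires no VM provisioning.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Deploy the image as a web service backed by Azure Container Instances.
service = Webservice.deploy_from_image(workspace=ws,
                                       name="demo-service",
                                       image=image,
                                       deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # HTTP endpoint that accepts scoring requests
```

Once the deployment completes, clients can POST JSON payloads to the scoring URI, where the `run()` function of the scoring script handles each request.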
In this blog post, we outlined the various aspects that need to be addressed, from data collection to metrics, for an AI model to be used successfully in a production environment. In particular, we introduced Azure’s latest cloud analytics environment, which makes it easy for any organization to collect data, analyze it, experiment, and build a model. In the last part, we showed how to use Azure Machine Learning Service to deploy your models to Azure Container Instances.
- AI London Slide Deck: https://www.slideshare.net/FrancescaLazzeriPhD/a-day-in-the-life-of-a-data-scientist-in-an-ai-company
- Azure Machine Learning Services: https://aka.ms/AMLServices
- Visual Studio Code Tools for AI: https://aka.ms/VSCodeToolsAI
- Data Science Virtual Machine: https://aka.ms/AzureDSVM
- Team Data Science Process: https://aka.ms/TeamDataScience