In this walk-through, we show how to use Azure Batch AI to run multiple forecast models. Azure Batch AI is a cloud service that helps data scientists and AI researchers train and test machine learning and AI models at scale in Azure.
The problem addressed in this walk-through is running and monitoring many forecast models in parallel. Parallelization is key in many forecasting scenarios, since we often need to build many models concurrently, for example one per sensor, on a regular schedule. A typical example is a utility company that needs to accurately forecast spikes in demand for energy products and services.
The following scenario focuses on energy demand forecasting, where the goal is to predict the future load on an energy grid. The broader solution is built on Microsoft’s Azure stack and combines multiple cloud services that handle data streaming, data processing, model training and prediction, and data storage.
The main component is Batch AI, a cloud service that enables users to submit parallel jobs to a cluster of high-performance virtual machines. In this experiment, you will use a Python script that runs a recurrent neural network model to predict future energy demand.
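To make the modeling step concrete, here is a toy sketch of what a recurrent forecaster does at prediction time: it consumes a window of past demand readings one step at a time, carries a hidden state forward, and emits a next-step forecast. The weights below are arbitrary placeholders; the real script would train such a model with a deep learning framework rather than hand-code it.

```python
import math


def rnn_forecast(demand_window, w_in=0.5, w_rec=0.8, w_out=1.2, bias=0.0):
    """One-unit recurrent net: fold the window into a hidden state, then read out."""
    h = 0.0
    for x in demand_window:
        # Classic simple-RNN update: the new state mixes the input with the old state.
        h = math.tanh(w_in * x + w_rec * h + bias)
    return w_out * h  # linear readout of the final hidden state


# Hourly demand readings (arbitrary units); forecast the next hour.
window = [0.6, 0.7, 0.9, 1.1]
prediction = rnn_forecast(window)
print(round(prediction, 3))
```

Because each sensor's model is independent, many such forecasts can be computed concurrently, which is exactly the workload Batch AI parallelizes across the cluster.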
The first step is to read the configuration and create the Batch AI client:
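A minimal sketch of this step, assuming a `configuration.json` that holds the Azure subscription and service-principal details (the file name and every field name here are illustrative, not prescribed by Batch AI):

```python
import json

# Illustrative configuration; in practice keep secrets out of source control.
SAMPLE_CONFIG = {
    "subscription_id": "<your-subscription-id>",
    "aad_client_id": "<service-principal-client-id>",
    "aad_secret": "<service-principal-secret>",
    "aad_tenant": "<tenant-id>",
    "location": "eastus",
    "resource_group": "batchai-rg",
}

with open("configuration.json", "w") as f:
    json.dump(SAMPLE_CONFIG, f, indent=2)

with open("configuration.json") as f:
    cfg = json.load(f)

# With the azure-mgmt-batchai SDK, client creation looks roughly like this
# (exact parameter names depend on your SDK version):
#
#   from azure.common.credentials import ServicePrincipalCredentials
#   import azure.mgmt.batchai as batchai
#
#   creds = ServicePrincipalCredentials(
#       client_id=cfg["aad_client_id"],
#       secret=cfg["aad_secret"],
#       tenant=cfg["aad_tenant"])
#   batchai_client = batchai.BatchAIManagementClient(
#       credentials=creds, subscription_id=cfg["subscription_id"])

print(cfg["location"])
```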
Now you can define your run settings from the config file:
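As an illustration of what those settings might drive, the snippet below derives the names used for the rest of the run, with one job per sensor to match the parallel-forecast scenario above (every name and count here is an assumption for the sketch):

```python
# Illustrative run settings pulled from the loaded configuration.
cfg = {
    "resource_group": "batchai-rg",
    "cluster_name": "forecast-cluster",
    "node_count": 4,
    "sensors": ["sensor01", "sensor02", "sensor03"],
}

resource_group = cfg["resource_group"]
cluster_name = cfg["cluster_name"]
node_count = cfg["node_count"]

# One Batch AI job per sensor lets the forecasts run in parallel on the cluster.
job_names = ["forecast-{}".format(sensor) for sensor in cfg["sensors"]]
print(job_names)
```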
The next steps are to upload your script to a file share in your storage account and to configure your compute cluster:
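The upload step can be sketched against the Azure Files API of the `azure-storage-file` package, whose `FileService` exposes `create_share`, `create_directory`, and `create_file_from_path`. To keep the example self-contained and runnable without Azure credentials, a tiny in-memory stand-in takes the place of a real `FileService` below; the helper itself assumes only those three methods:

```python
def upload_script(file_service, share, directory, local_path, remote_name):
    """Upload a training script to an Azure file share.

    `file_service` is expected to expose the azure-storage-file methods
    create_share / create_directory / create_file_from_path.
    """
    file_service.create_share(share, fail_on_exist=False)
    file_service.create_directory(share, directory, fail_on_exist=False)
    file_service.create_file_from_path(share, directory, remote_name, local_path)


class FakeFileService:
    """In-memory stand-in for azure.storage.file.FileService, for this sketch."""

    def __init__(self):
        self.files = []

    def create_share(self, share, fail_on_exist=True):
        pass

    def create_directory(self, share, directory, fail_on_exist=True):
        pass

    def create_file_from_path(self, share, directory, remote_name, local_path):
        self.files.append((share, directory, remote_name, local_path))


svc = FakeFileService()
upload_script(svc, "batchaishare", "scripts", "train.py", "train.py")
print(svc.files)
```

Against real storage you would construct `FileService(account_name=..., account_key=...)` and pass it in unchanged.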
Finally, you are ready to create the compute cluster, monitor the cluster creation, and run the Azure Batch AI training job:
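Cluster creation is asynchronous, so monitoring it boils down to polling until the cluster reaches a steady state. The loop below is a generic sketch with a stubbed status function standing in for the real SDK call; the commented lines show roughly how it would be wired up, and the state name and call shapes are assumptions about the `azure-mgmt-batchai` SDK:

```python
import itertools
import time


def wait_for_steady(get_state, poll_seconds=10, timeout_seconds=600, sleep=time.sleep):
    """Poll get_state() until it returns 'steady', or raise on timeout."""
    waited = 0
    while waited <= timeout_seconds:
        state = get_state()
        if state == "steady":
            return state
        sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("cluster did not reach steady state in time")


# Stubbed state sequence standing in for repeated cluster-status calls.
states = itertools.chain(["resizing", "resizing"], itertools.repeat("steady"))
result = wait_for_steady(lambda: next(states), poll_seconds=1, sleep=lambda s: None)
print(result)

# Against the real SDK this would look roughly like (names are assumptions):
#   wait_for_steady(lambda: batchai_client.clusters.get(
#       resource_group, cluster_name).allocation_state.value)
# and the training job itself is submitted with batchai_client.jobs.create(...),
# passing parameters that describe the Python command line to run.
```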
In this blog post, we showed how Batch AI can help you train models at scale, using parallel execution to improve both the accuracy and the performance of your solution.
Batch AI supports custom storage solutions including high-performance parallel file systems and can help you with scheduling your jobs and handling failures during potentially long-running jobs.