Deploy Machine Learning Models with ONNX Runtime and Azure Functions

Cassie · Published in Microsoft Azure · 4 min read · Jan 24, 2022

Learn how to deploy the ResNet50 model with ONNX Runtime and Azure Functions! We are using ONNX Runtime because it speeds up inference and offers cross platform capabilities.

Our model is already in the ONNX format, so we won’t cover model conversion in this blog. If you want to learn more about converting your model to leverage these capabilities check out the ONNX GitHub examples and the ONNX Runtime Docs.

Video Tutorial: a video walkthrough of this post is linked in the Resources at the end.

Prerequisites

Steps to Deploy with Azure Functions

  1. Follow these instructions to create an HTTP Python Azure Function with VS Code. If you prefer to have the function run on a schedule instead of in response to HTTP requests, select the timer type when creating the function.
  2. Now that you have created the function, let’s update the scoring script, __init__.py. Below we will break down each step in the script.
  • First import the packages:
import logging
import azure.functions as func
import base64
import numpy as np
import cv2
import io
import onnxruntime as ort
  • Then add the main entry function for the Azure Function. We have implemented this function to expect a base64-encoded image.
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    # Get the img value from the request.
    img_base64 = req.params.get('img')
    if not img_base64:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            img_base64 = req_body.get('img')
    if img_base64:
        # decode the base64 string into an image
        img = decode_base64(img_base64)
        # preprocess the image
        img = preprocess(img)
        # path to the model
        model_path = '../model/resnet50v2.onnx'
        # run the model
        outputs = run_model(model_path, img)
        # map the output to a class and return the result string
        return func.HttpResponse(f"The image is a {map_outputs(outputs)}")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully.",
            status_code=200
        )
  • Below are helper functions called to decode the base64 string and preprocess the image.
# decode a base64 string into an image array shaped for the model
def decode_base64(data):
    img = base64.b64decode(data)
    img = cv2.imdecode(np.frombuffer(img, np.uint8), cv2.IMREAD_COLOR)
    img = cv2.resize(img, (224, 224))
    img = img.transpose((2, 0, 1))
    img = img.reshape(1, 3, 224, 224)
    return img

# normalize the image with the ImageNet mean and standard deviation
def preprocess(img_data):
    mean_vec = np.array([0.485, 0.456, 0.406])
    stddev_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(img_data.shape).astype('float32')
    for i in range(img_data.shape[1]):
        # for each channel, divide the pixel values by 255 to get a value
        # in [0, 1], then subtract the mean and divide by the std deviation
        norm_img_data[:, i, :, :] = (img_data[:, i, :, :] / 255 - mean_vec[i]) / stddev_vec[i]
    return norm_img_data

# load a text file as a list of labels
def load_labels(path):
    labels = []
    with open(path, 'r') as f:
        for line in f:
            labels.append(line.strip())
    return labels

# map the model outputs to the predicted class
def map_outputs(outputs):
    labels = load_labels('../data/imagenet_classes.txt')
    return labels[np.argmax(outputs)]
  • Add the run_model function to create and run the InferenceSession.
def run_model(model_path, img):
    ort_sess = ort.InferenceSession(model_path)
    outputs = ort_sess.run(None, {'data': img})
    return outputs
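Before wiring up a real request, you can sanity-check the postprocessing path with a dummy output. The label list and scores below are stand-ins rather than the real ImageNet data, but the output shape mirrors what ort_sess.run returns for a classification model:

```python
import numpy as np

# stand-in for the real imagenet_classes.txt contents (illustrative labels)
labels = ["tench", "goldfish", "great white shark"]

# stand-in for the model output: a list holding one (1, num_classes)
# score array, matching the shape returned by ort_sess.run
outputs = [np.array([[0.1, 2.5, 0.3]])]

# np.argmax flattens the nested structure, so this mirrors map_outputs
predicted = labels[np.argmax(outputs)]
print(predicted)  # → goldfish
```

The highest score sits at index 1, so the dummy "model" predicts goldfish.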

3. Update the requirements.txt

Add the below packages to the requirements file:

azure-functions
numpy
onnxruntime
opencv-python

4. Add the model and labels files to the project in the ImageNetHttpTrigger folder.

  • The model can be downloaded from the model zoo. In this example we used the ResNet50v2.
  • The original classes text file can be found here. The one used for this example is slightly simplified. The simplified file can be found here in the GitHub repo for this example.
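The labels file is expected to hold one class name per line. As a quick check of the load_labels helper shown earlier, here is a sketch against a throwaway file (the class names are just illustrative):

```python
import os
import tempfile

# write a tiny stand-in labels file, one class name per line
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("tench\ngoldfish\ngreat white shark\n")
    path = f.name

# same logic as the load_labels helper above
def load_labels(path):
    labels = []
    with open(path, 'r') as f:
        for line in f:
            labels.append(line.strip())
    return labels

labels = load_labels(path)
os.remove(path)
print(labels)  # → ['tench', 'goldfish', 'great white shark']
```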

5. Next we will test the function locally with Postman.

  • First you need to convert the image to a base64 string to post to the endpoint.
# get the base64-encoded image
import base64

img_path = '../image/goldfish.jpg'
with open(img_path, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())
img_base64 = encoded_string.decode('utf-8')
print(img_base64)
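The encoded string goes into the request body as the img field the function reads. A minimal sketch of building and round-tripping that body, using dummy bytes in place of a real image file:

```python
import base64
import json

# dummy bytes standing in for image_file.read()
image_bytes = b"\x89PNG fake image data"

img_base64 = base64.b64encode(image_bytes).decode('utf-8')

# this is the JSON body to paste into Postman
body = json.dumps({"img": img_base64})

# round-trip check: the function decodes the same bytes back out
decoded = base64.b64decode(json.loads(body)["img"])
print(body[:40])
```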

Next open Postman to post the string to the local function endpoint:

  • In Postman, create a new request.
  • Select POST.
  • Paste in the local URL.
  • Select Body, raw, and JSON.
  • Paste in the JSON body.
  • Press F5 to run the function locally in VS Code.
  • Once the function is running, grab the endpoint and hit Send.
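If you prefer scripting the test over using Postman, the same request can be sent with the standard library. The URL below is the default local endpoint for a function named ImageNetHttpTrigger; adjust it to match your function name and port:

```python
import base64
import json
import urllib.request

def build_body(image_bytes):
    # wrap the base64-encoded image in the JSON body the function expects
    img_base64 = base64.b64encode(image_bytes).decode('utf-8')
    return json.dumps({"img": img_base64})

def post_image(url, img_path):
    # read and encode the image, then POST it to the function endpoint
    with open(img_path, "rb") as f:
        body = build_body(f.read()).encode('utf-8')
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode('utf-8')

# example usage (only works while the function is running locally):
# print(post_image("http://localhost:7071/api/ImageNetHttpTrigger",
#                  "../image/goldfish.jpg"))
```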

6. Follow the instructions to deploy to Azure with VS Code

  • Once the function has been created in Azure, grab the endpoint and test again in Postman.

Using serverless to deploy machine learning models is super useful, but it’s not the only way to operationalize your model. Stay tuned for more content on how to deploy models with Azure!

Resources: ONNX Model Zoo, Source on GitHub, YouTube Video, ONNX Runtime Docs
