Medical Image Classification using Azure Functions and Cognitive Services


This article is part of #ServerlessSeptember. You’ll find other helpful articles, detailed tutorials, and videos in this all-things-Serverless content collection. New articles from community members and cloud advocates are published every week from Monday to Thursday through September.

Find out more about how Microsoft Azure enables your Serverless functions.


We’ve all worked on some form of machine learning project, haven’t we? What happens to the project you stayed up all night for, staring at the epochs and hoping the accuracy would increase? Most of these projects end up as raw code on GitHub or just a plain line on your resume. It’s always a good idea to show, not tell, when describing your expertise. So follow along and learn how to serve the models you’ve trained using Azure Functions, a serverless framework designed to run your code in the cloud without worrying about resource allocation.
In this article, we’ll cover the following:

  • Train a classification model using Azure Cognitive Services.
  • Initialize a local environment for developing Azure Functions in Python.
  • Import a custom TensorFlow machine learning model into a function app.
  • Build a serverless HTTP API for classifying an x-ray image into two classes: Pneumonia and Normal.
  • Consume the API from a web app.


We will classify chest radiographs into two categories, Pneumonia and Normal, using Microsoft Cognitive Services.
I’ve already trained the model and provided the model file; if you just want to learn how to deploy a model using Azure Functions, you can skip to Section II.


Requirements Check:

  1. In a terminal or command window, run func --version to check that the Azure Functions Core Tools are version 2.7.1846 or later.
  2. Run python --version (Linux/macOS) or py --version (Windows) to check that your Python version is 3.6.x.

Section I: Training the model

We’ll learn how to build an image classifier using the Azure Custom Vision website.


For our problem, we’ll use a publicly available dataset on Kaggle:

The dataset is organized into three folders (train, test, val), each containing subfolders for the two image categories (Pneumonia/Normal). There are 5,863 X-ray images (JPEG) in total.
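To sanity-check the download (and keep an eye on the image counts, which matter for the free tier later), you can script a quick count per split. A minimal sketch using Python’s pathlib, assuming the archive is extracted to a chest_xray folder with NORMAL/PNEUMONIA subfolders (the exact folder names and root path may differ in your download):

```python
from pathlib import Path

def count_images(root, splits=("train", "test", "val"),
                 categories=("NORMAL", "PNEUMONIA")):
    """Count JPEG images per (split, category) folder under root."""
    counts = {}
    for split in splits:
        for category in categories:
            folder = Path(root) / split / category
            counts[(split, category)] = len(list(folder.glob("*.jpeg")))
    return counts

# Example usage: count_images("chest_xray")
```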

Microsoft Custom Vision

  • Start your browser, navigate to the Azure portal, and log in.
  • Search for Cognitive Services in the search bar.
  • Select Create Cognitive Services and you’ll be directed to the marketplace.
  • Search for Custom Vision, then select Create and fill in the required details.
  • Leave the rest at their defaults and select Review + create.
Successfully deployed custom vision service
  • Once you see the above screen, click Go to resource, select Custom Vision portal, and sign in with the same account.
  • Now create a new project and enter the following details (edit as you like):
  • Note: The free tier allows you to train on up to 5,000 images. Once you download the Kaggle dataset, the chest-xray folder will contain a train folder with two subfolders: Normal and Pneumonia. Delete 250 images from the Pneumonia folder to bring the total below 5,000.
  • Navigate to the newly created project and click Add Images. Select all the images from the Normal folder and upload them. You’ll be asked to add a tag for the batch of images; specify any appropriate tag. Repeat the same process for the Pneumonia images.
  • Once all the images are uploaded, select Train and choose Quick Training. The free tier allows up to 1 hour of training per month. For better performance and accuracy on complex datasets, you can choose Advanced Training; make sure you have sufficient credits for it.
  • Once the training is complete, you’ll see something similar to this:
  • Precision and Recall tell us how well our model performs. Precision is the fraction of positive identifications that were actually correct, while Recall is the fraction of actual positives that were identified correctly.
  • Finally, select Export, choose TensorFlow as the platform, select TensorFlow from the dropdown, and download.
  • The download is a zip archive containing two files: model.pb and labels.txt. You’ll copy these files into your Azure Functions project.
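The Precision and Recall figures reported after training can be reproduced from raw confusion-matrix counts. A small illustration (the counts below are made up for the example):

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of positive predictions that were correct.
    Recall: fraction of actual positives that were found."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g. 90 correctly flagged Pneumonia images, 10 false alarms, 30 missed cases
p, r = precision_recall(tp=90, fp=10, fn=30)
# p = 0.9 (90/100), r = 0.75 (90/120)
```

A model can trade one metric for the other (flagging everything as Pneumonia gives perfect recall but poor precision), which is why the portal reports both.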

Section II: Azure Functions model serving

  • You can clone the following git repository, which contains the model file (model.pb) and labels.txt:
git clone
  • Now, let’s get started with some interesting stuff.
    First, clone the following repository:
git clone
  • This is our standard helper code for implementing the Azure Functions serverless deployment. The contents of the folder are as follows:
    * start is your working folder for the tutorial.
    * end is the final result and full implementation, for your reference.
    * resources contains the machine learning model and helper libraries.
    * frontend is a website that calls the function app.
  • Navigate to the start folder and execute the following commands to create and activate a virtual environment named .venv. Be sure to use Python 3.6, which is supported by Azure Functions. We do this so that the dependencies don’t clash with already-installed packages.
BASH:
cd start
python -m venv .venv
source .venv/bin/activate
If Python didn’t install the venv package on your Linux distribution, run the following command: sudo apt-get install python3-venv
CMD:
cd start
py -3.6 -m venv .venv
.venv\scripts\activate
  • We’ll run all subsequent commands in this activated virtual environment. (To exit the virtual environment, run deactivate.)
  • A function app consists of one or more functions, each designed to respond to a specific trigger. In our case, we design a function that accepts an image and responds with the prediction our model outputs.
  • While in the start folder, use Azure Functions Core Tools to initialize a Python function app:
func init --worker-runtime python
  • The above command creates various configuration files, along with a .gitignore, so that we don’t accidentally publish our account secrets.
  • We can create a function using the following command, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named function.json.
func new --name classify --template "HTTP trigger"
  • The above command creates a folder named classify containing two files: __init__.py, which contains the function code, and function.json, which describes the function’s trigger and its input and output bindings.

Testing App Locally

  • While in the start folder, execute the following command to get our function running; this creates a local runtime in the start folder.
func start

Import and serve the Tensorflow Model

  • Copy the model files namely, model.pb and labels.txt into the classify folder.
  • Next, copy the helper code into the classify folder (make sure you are working in the start directory), and verify that the classify folder contains all the required files:
BASH:
cp ../resources/ classify
CMD:
copy ..\resources\ classify
  • Open start/requirements.txt in a text editor and add the following dependencies required by the helper code, and save it:
  • Run the following command to install the dependencies:
pip install --no-cache-dir -r requirements.txt

Configure Function to run predictions

  • Open classify/__init__.py in a text editor and add the following lines after the existing import statements to import the standard JSON library and the predict helpers:
import logging
import azure.functions as func
import json
# Import helper script
from .predict import predict_image_from_url
  • Replace the entire contents of the main function with the following code:
def main(req: func.HttpRequest) -> func.HttpResponse:
    image_url = req.params.get('img')
    logging.info('Image URL received: ' + image_url)
    results = predict_image_from_url(image_url)
    headers = {
        "Content-type": "application/json",
        "Access-Control-Allow-Origin": "*"
    }
    return func.HttpResponse(json.dumps(results), headers=headers)
  • This function receives an image URL in a query string parameter named img. It then calls predict_image_from_url from the helper library to download and classify the image using the TensorFlow model. The function then returns an HTTP response with the results.
  • Save all the changes and start the local function host again using:
func start
  • Test the hosted service: in a browser, open the following URL to invoke the function with the URL of an X-ray image, and confirm that the returned JSON classifies the image as either Pneumonia or Normal.
Sample Images:
RESULT (you should see a similar response in the browser):
{"created": "2020-09-03T16:16:01.225972", "predictedTagName": "Pneumonia", "prediction": [{"tagName": "Pneumonia", "probability": 1.0}]}
  • Yay! You’ve successfully hosted your machine learning classification model locally using Azure Functions! Now let’s host the app with a basic frontend to make it presentable.
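A client can parse the JSON response shown above to pull out the predicted tag and its probability. A short sketch, using the sample response verbatim:

```python
import json

# Sample response from the classify function, as shown above
response_text = ('{"created": "2020-09-03T16:16:01.225972", '
                 '"predictedTagName": "Pneumonia", '
                 '"prediction": [{"tagName": "Pneumonia", "probability": 1.0}]}')

result = json.loads(response_text)
# Pick the tag with the highest probability from the prediction list
top = max(result["prediction"], key=lambda p: p["probability"])
print(result["predictedTagName"], top["probability"])  # Pneumonia 1.0
```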

Run basic front end locally

  • Navigate to the repository’s frontend folder.
  • Replace the index.html file with the one given at the following link:
  • Open a new terminal or command prompt and activate the virtual environment (.venv as described at the beginning of Section II).
  • Start an HTTP server with Python:
BASH:
python -m http.server
CMD:
py -m http.server
  • Browse to localhost on port 8000 to see the front end in action.

You’ll see a similar front end after navigating to localhost in your browser.

  • Upload any image URL to see the inference. You can find sample images here:


The final result will look like this:

Hosting the function app

Install the Azure CLI on your system:

To deploy our app using Azure Functions we need the following things:

  • A resource group
  • A storage account
  • A function app

Let’s create all the required Azure resources for our application:

  • Navigate to the start folder.
  • We’ll be using the Azure CLI to deploy our function app. Run the following command and log in with the Azure account you used before. You’ll get a JSON response once the login is successful.
az login
  • Next, we’ll create a resource group using the Azure CLI. The following example creates a resource group named classify-image in the eastus region. (You generally create your resource group and resources in a region near you, using an available region from the az account list-locations command.) After executing the command, you will see “provisioningState”: “Succeeded” in the JSON response.
az group create --name classify-image --location eastus
  • Our resource group is now ready. Next, we will create a general-purpose storage account in that resource group. In the following command, replace <STORAGE_NAME> with a globally unique name of your choice. Names must be 3 to 24 characters long and contain numbers and lowercase letters only. Standard_LRS specifies a general-purpose account (supported by Functions). Specify the resource group name and location you created earlier. You’ll see the same Succeeded message in the JSON response if everything is correct.
az storage account create --name <STORAGE_NAME> --location eastus --resource-group classify-image --sku Standard_LRS
  • Now we’ll create the function app using the following command. In the following example, replace <STORAGE_NAME> with the name of the account you used in the previous step, and replace <APP_NAME> with a globally unique name of your choice. The <APP_NAME> is also the default DNS domain for the function app. As we are using Python 3.6, set --runtime-version to 3.6.
az functionapp create --resource-group classify-image --os-type Linux --consumption-plan-location eastus --runtime python --runtime-version 3.6 --functions-version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
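Since the storage account name must satisfy the 3-to-24-character, lowercase-letters-and-digits rule, it can save a failed CLI round trip to validate it first. A small helper sketch (hypothetical, not part of the Azure CLI):

```python
import re

def is_valid_storage_name(name):
    """Check Azure storage account naming: 3-24 chars, lowercase letters and digits only."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_name("classifyimagestore1"))  # True
print(is_valid_storage_name("Classify-Image"))       # False (uppercase, hyphen)
print(is_valid_storage_name("ab"))                   # False (too short)
```

Note that this only checks the format; global uniqueness is still verified by Azure when the account is created.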

Deploy the App

  • Once everything is set up and you are ready to deploy the app, run the following command, replacing <APP_NAME> with the name of your app.
func azure functionapp publish <APP_NAME>

Once the above command executes, you’ll see a success response in the terminal:

In the response, you can see the Invoke url. Copy the URL, append
&img=<Image_URL> at the end, and paste it in the browser. E.g.:

You’ll get a response string similar to the one we got earlier while hosting the API locally.
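If you build the request URL in code rather than pasting it into the browser, percent-encode the image URL so its own ? and & characters don’t break the query string. A sketch with Python’s urllib (the invoke URL below is a placeholder, not a real endpoint):

```python
from urllib.parse import urlencode

invoke_url = "https://<APP_NAME>.azurewebsites.net/api/classify"  # placeholder
image_url = "https://example.com/chest/xray1.jpeg"                # sample image URL

# Use "&" instead of "?" if the invoke URL already carries a query
# string (for example a ?code=... access key).
sep = "&" if "?" in invoke_url else "?"
full_url = invoke_url + sep + urlencode({"img": image_url})
print(full_url)
```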

Congratulations!!! The API is now ready to use. You can embed the link in your front end code, call the API and receive the prediction as a response.



Citation: Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, V2, DOI: 10.17632/rscbjbr9sj.2







Parag Ghorpade
