Writing and Testing Azure Functions in the v2 Python Programming Model
Microsoft recently released a v2 programming model for writing Azure Functions in Python.
This model presents a new design for building your functions in Python. The new design feels familiar to those who have experience writing Flask apps or other web app frameworks. Microsoft describes the v2 model as providing “a simpler file structure” and being “more code-centric”.
In this brief guide, I will walk through examples of HTTP trigger and blob trigger functions written using the new v2 model. For more context, please refer directly to Microsoft’s Python developer reference guide. Please note that, at the time of writing, this feature is in preview.
Throughout this blog I will reference some key differences between the v1 and v2 models. However, foreknowledge of v1 is not a prerequisite. If you are unfamiliar with Azure Functions altogether, the code snippets and folder layout below can still help you write your first function.
Folder Structure
The template folder structure for developing in v2 looks like this:
<project_root>/
| - .venv/
| - function_app.py
| - blueprint.py
| - helper_functions.py
| - tests/
| | - test_my_function.py
| | - test-requirements.txt
| - .funcignore
| - host.json
| - requirements.txt
Compared to the v1 programming model, the reduced number of directories is evident. The v2 model enables a simpler, flat folder structure: it removes the need for a directory and a function.json file per function, and introduces blueprints and decorators instead.
For reference, here is the folder structure when developing the same piece of work in v1:
<project_root>/
| - .venv/
| - my_first_function/
| | - __init__.py
| | - function.json
| | - example.py
| - my_second_function/
| | - __init__.py
| | - function.json
| - helper_functions.py
| - tests/
| | - test_my_second_function.py
| - .funcignore
| - host.json
| - requirements.txt
As you can see, each v1 function has its own directory with a required configuration json file. The v2 model condenses this into function decorators inside the blueprint.
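To illustrate what the v2 model removes, here is a sketch of a typical v1 function.json for an HTTP trigger (the binding values shown are examples, not taken from a specific project):

```json
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": ["get"]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
```

In v2, this binding information moves into the decorators shown below, so no such file is needed.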
blueprint.py
import logging
from io import BytesIO

import azure.functions as func
import pandas as pd

bp = func.Blueprint()


@bp.route(route="hello_world", auth_level="anonymous")
def hello_world(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Python HTTP trigger function processed a request.")
    name = req.params.get("name") or "world"
    return func.HttpResponse(f"Hello {name}", status_code=200)
The blueprint above is an example of a very simple HTTP trigger function. Those familiar with the v1 model will notice that the parameters previously specified in a function.json file (such as route and auth_level) have been relocated to a decorator in the v2 model. This eliminates the need for the json file.
This simple function returns a ‘hello’ string and a status code of 200. An optional function name can also be declared, using the decorator @bp.function_name. When not declared, the function name defaults to the name of the defined function.
I can add a second function to the same file:
@bp.function_name("blob_triggered")
@bp.blob_trigger(
    arg_name="obj",
    path="path/to/my/blob.csv",
    connection="STORAGEACCOUNTCREDENTIAL",
)
def custom_blob_trigger_function(obj: func.InputStream):
    logging.info("Python blob trigger function processed a request.")
    # read the blob into memory
    blobject = obj.read()
    blob_to_read = BytesIO(blobject)
    df = pd.read_csv(blob_to_read)
    logging.info("Reading blob csv with length: " + str(len(df.index)))
    # Continue to perform whatever logic is needed below
This is a blob-triggered function. Instead of a route (as used in the HTTP triggered function), I specified a path to the blob that should trigger this function to run. Access to the storage account where the blob resides is granted via a connection string in the connection kwarg of the decorator. When deployed, this will be read from an environment variable of the same name.
Once triggered, the blob is read into memory as a pandas dataframe which I can use for further analysis and manipulation.
function_app.py
With the blueprint created, I can create a very simple function_app.py:
import azure.functions as func
from blueprint import bp
app = func.FunctionApp()
app.register_functions(bp)
This registers the blueprint’s functions with the FunctionApp. The Functions host locates this file by its standard name, function_app.py, at deployment.
Unit testing
Next I can write a couple of tests for my two new functions.
import azure.functions as func

from blueprint import hello_world, custom_blob_trigger_function


def test_hello_world():
    # Construct a mock HTTP request.
    req = func.HttpRequest(
        method="GET",
        body=None,
        url="/api/hello_world",
        params={
            "name": "General Kenobi",
        },
    )
    # Call the function.
    func_call = hello_world.build().get_user_function()
    response = func_call(req)
    assert response.status_code == 200


def test_custom_blob_trigger_function():
    # write some mock blob data
    blob_data = "here, is, some, blob, data\n1,2,3,4,5"
    # put it into the format that the function is expecting
    req = func.blob.InputStream(data=blob_data.encode("utf8"))
    # Call the function.
    func_call = custom_blob_trigger_function.build().get_user_function()
    func_call(req)
    # assert whatever logic needs to be tested
host.json
For completeness, host.json looks like this:
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.*, 4.0.0)"
    }
}
This is the same as the standard template. Since only one host.json file is needed for all functions in the blueprint, it is saved at the project root (see the folder structure above).
Conclusions
The v2 model offers a more concise project structure with a simplified folder hierarchy and configuration files. This makes developing multiple functions in a single Function App more streamlined. It is also a recognisable pattern for those familiar with Flask.
Unit testing v2 functions is easy to configure using mocked inputs. Detecting logic errors in simple tests such as these (perhaps as part of a CI pipeline) is not only best practice but can save time debugging deployed function code.