Things I Learnt in My First Azure Functions Project

Zhongming Chen · ASOS Tech Blog · Jun 11, 2018 · 8 min read

As part of the EU General Data Protection Regulation (GDPR) requirements, we built a system in Visual Studio 2017 to carry out data minimisation using timer- and queue-triggered Azure Functions hosted on the Consumption plan. Here is a list of gotchas and tips worth sharing, in the hope they will be helpful to others who are new to Azure Functions development.

Make sure it is right for you

Azure Functions is powerful, but it isn't a solution for every business requirement. There are two hosting plans a function app can run on: the Consumption plan and the App Service plan. This page gives you a very good insight into the pros and cons of each, and how they work.

We chose Azure Functions on the Consumption plan because it integrates very well with other Azure services, e.g. Cosmos DB and Storage queues, via bindings. This massively simplifies the code and lets us focus on things that are more relevant to our business requirements. Plus, the built-in retry feature comes free out-of-the-box with the queue trigger.
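As an illustration of how much the bindings do for you, here is a minimal sketch of a queue-triggered function (v1 style) that writes a document to Cosmos DB without any explicit storage or Cosmos client code. The queue, database, collection and connection-setting names are made up for this example:

using System;
using Microsoft.Azure.WebJobs;

public static class MinimiseUserData
{
    // The queue trigger hands us the message; the DocumentDB output binding
    // persists whatever we assign to 'document'. Retries on failure come
    // free with the queue trigger.
    [FunctionName("MinimiseUserData")]
    public static void Run(
        [QueueTrigger("data-minimisation-requests")] string message,
        [DocumentDB("UserDb", "Users",
            ConnectionStringSetting = "CosmosDbConnection")] out dynamic document)
    {
        document = new { id = Guid.NewGuid().ToString(), payload = message };
    }
}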

Choose the right version

At the time of writing, the only version of Azure Functions officially recommended for production use is version 1.0, which doesn't support .NET Core. To use .NET Core, you would have to use Azure Functions 2.0, which is in preview at the moment and has a list of known issues worth checking.

Best practice, best practice, best practice

Azure Functions development isn't the same as development for other hosting models, e.g. cloud services or Azure websites, which have their own dedicated virtual machines. Even more so if you choose the Consumption plan, which differs in terms of cross-function communication, scalability, cost calculation and so on. All of these can significantly impact your functions' cost and performance if you don't fully understand how they work, which is why it's important, as a developer, to keep these differences in mind while coding. Here is a good article on best practices that is definitely worth a read. It's also always worth asking the Azure Functions team whenever there isn't an obvious solution to a complex or unique design requirement, instead of rushing out a solution yourself.

Azure Functions 1.0 runs in IIS worker process

Kudu is the engine behind Git deployments in Azure App Service, and it runs in its own w3wp process. After you deploy your function app to Azure, you can access Kudu under Platform features.

Once you are in Kudu, open Process Explorer and you will see that Azure Functions 1.0 is hosted in an IIS worker process, w3wp.exe:

Your function assemblies are then loaded into a w3wp process, which acts like a container. The following example shows how the process is initialised:

D:\Windows\SysWOW64\inetsrv\w3wp.exe -ap "YourFunctionName" -v "v4.0" -a "\\.\pipe\iisipmbfc62ec4-5183-4ebd-9999-ae18ec4527e5" -h "C:\DWASFiles\Sites\YourFunctionName\Config\applicationhost.config" -w "C:\DWASFiles\Sites\YourFunctionName\Config\rootweb.config" -m 0 -t 20 -ta 0
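If you want to see this from inside a function, a quick check (a sketch assuming a v1 C# function with the usual TraceWriter logger) is to log the name of the hosting process:

using System.Diagnostics;
using Microsoft.Azure.WebJobs.Host;

public static class HostInfo
{
    // On the v1 runtime this logs "w3wp", confirming the IIS worker process.
    public static void LogHostProcess(TraceWriter log)
    {
        log.Info($"Host process: {Process.GetCurrentProcess().ProcessName}");
    }
}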

Understand Azure Functions app settings

Just like any other .NET application, Azure Functions can read app settings using the System.Environment.GetEnvironmentVariable method. However, the way app settings relate to your functions is probably not what you think.

Locally, app settings are defined in the local.settings.json file which, as the name suggests, is for local development use only. However, I've found a lot of developers (including myself) still mistakenly think settings defined here take effect in Azure; they absolutely do not. To add a custom setting to your local.settings.json, simply add a new entry under Values.

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
    "MyCustomSetting": "a custom setting"
  }
}
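Reading the setting in code is then the same locally and in Azure. A minimal sketch, using the MyCustomSetting entry added above:

using System;

public static class Settings
{
    // Resolves from local.settings.json when running locally, and from the
    // function app's app settings when running in Azure.
    public static string MyCustomSetting =>
        Environment.GetEnvironmentVariable("MyCustomSetting",
            EnvironmentVariableTarget.Process);
}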

Defining app settings for use in Azure is done via the app service deployment in an Azure Resource Manager (ARM) template. Here is a quick start template for reference. As you can see in the azuredeploy.json template, app settings are defined as part of the Microsoft.Web/sites resource, which must be provisioned before your function code is deployed. So the function app and the app settings it consumes are deployed separately.

Limit queue trigger function scaling

One of the great features of Azure Functions is that the runtime automatically scales out function instances when under load. In our case, it scales out depending on the volume of messages in the queue it's listening to. However, this feature can cause problems for the downstream APIs a function depends on. What we want instead is to limit this auto-scaling ability, to avoid effectively DDoSing ourselves if a huge number of messages arrives. Fortunately, a queue-triggered function lets you constrain its power with the following two settings in the host.json file:

"queues": {
// This limits number of queue messages that the functions runtime retrieves simultaneously to 1.
"batchSize": 1,
// This instructs the runtime to retrieve another batch when no message is being processed.
"newBatchThreshold": 0
}

This means the maximum number of messages processed concurrently per function instance at any time is one (batchSize plus newBatchThreshold). You might think this could still cause problems if each execution completes extremely fast, one after another, which is absolutely possible. We therefore put a performance test in place to give us an extra level of confidence, and its report showed each function execution takes approximately 1.2 seconds on average (I know, too slow!), i.e. a single instance sends fewer than one request per second downstream. The good news is that's nowhere near our downstream API rate limit. Happy days!

Be aware of the assembly binding redirect issue on Newtonsoft.Json

Since version 1.0.0-alpha6, the Azure Functions SDK strictly requires Newtonsoft.Json version 9.0.1 to prevent runtime failures. This can be frustrating, as it stops you using the latest Newtonsoft.Json package. And because of Newtonsoft.Json's popularity, it is referenced by many other packages, so the chances of running into this problem are very high.

In our case, this issue stopped us using the WindowsAzure.Storage v9.2.0 package, which has a feature that supports accessing an Azure storage account via Azure AD.

An example of the Azure Functions SDK 1.0.10 strictly requiring version 9.0.1 of Newtonsoft.Json

Access client certificates in Azure Functions

Accessing a certificate is a very common scenario, typically when acquiring a token to access a remote protected resource. There are a few ways to achieve this. Before the Azure Functions team made it available, I saw some people do it by reading from the file system, which works for a public certificate but isn't secure enough for a private one.

Now, accessing a client certificate is pretty straightforward, but there are a few things worth noting:

  • How to provision a certificate onto the Azure app service. At the time of writing, a proper way of doing this is still a feature request. We get around it with the New-AzureRmWebAppSSLBinding Azure PowerShell cmdlet, which is designed for uploading an SSL certificate but can still upload a client certificate, even though it raises an error because the domain doesn't match the subject name of the certificate. You can suppress the error by setting the ErrorAction parameter, whose options are listed below, to Ignore or SilentlyContinue, or you can be more specific by inspecting the error in a try/catch block.
[:{Continue | Ignore | Inquire | SilentlyContinue | Stop | Suspend }]
  • How to make certificates accessible. To make certificates available, you need a WEBSITE_LOAD_CERTIFICATES app setting whose value is the thumbprint of the certificate you want to read. Alternatively, you can set it to multiple thumbprints separated by commas, or simply to * to load all available certificates.
  • How to verify certificates are available. To verify your certificates are accessible, you can install the Certificate Read Checker from the site extensions gallery in Kudu, which will show whether your certificate can be read from code.
  • How to read a certificate in code. Because Azure Functions on the Consumption plan is hosted on a public scale unit, certificates can only be installed in the CurrentUser personal store. The sketch below shows one way of reading a certificate in C#.
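A minimal sketch of the usual pattern: find the certificate by thumbprint in the CurrentUser/My store (the thumbprint must also be listed in WEBSITE_LOAD_CERTIFICATES, as described above):

using System;
using System.Security.Cryptography.X509Certificates;

public static class CertificateReader
{
    public static X509Certificate2 GetByThumbprint(string thumbprint)
    {
        // Consumption plan functions can only see the CurrentUser store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        try
        {
            store.Open(OpenFlags.ReadOnly);
            var matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, validOnly: false);
            if (matches.Count == 0)
                throw new InvalidOperationException(
                    $"Certificate {thumbprint} not found.");
            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}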

Integrate with Application Insights

There are two built-in logging options in Azure Functions: the WebJobs dashboard, which uses table storage, or Application Insights, which is much more advanced and robust. The following shows how to provision and integrate Azure Functions with Application Insights.

Integrating with Application Insights is really straightforward. All you need to do is add your instrumentation key to your function app settings and Azure Functions takes care of the rest. For more information, see here.

APPINSIGHTS_INSTRUMENTATIONKEY = "Your-Instrumentation-Key"

In your ARM deployment template, use the reference function to retrieve the instrumentation key during function app provisioning and add it to the function app's settings list:

{
  "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
  "value": "[reference(resourceId('Microsoft.Insights/components', parameters('YourAppInsightsInstanceName')), '2014-04-01').InstrumentationKey]"
},

If you choose Application Insights for logging, it is worth disabling the WebJobs dashboard logging to save cost; otherwise it still writes logs to table storage, which costs money. To do so, simply remove AzureWebJobsDashboard from the app settings.
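Once the instrumentation key is in place, anything written through the function's ILogger ends up in Application Insights. A minimal sketch (the function name and timer schedule are illustrative):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class HeartbeatFunction
{
    // With APPINSIGHTS_INSTRUMENTATIONKEY set, this log entry is sent to
    // Application Insights automatically; no SDK wiring is required.
    [FunctionName("Heartbeat")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Heartbeat executed at {time}", DateTime.UtcNow);
    }
}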

How to implement dependency injection in Azure Functions

Unfortunately, dependency injection (DI) isn't supported out-of-the-box, and this makes a developer's life much harder as the project grows and more services are added. Boris Wilhelms suggests something pretty elegant to counter this: inject dependencies into the function entry method using a binding attribute, and register all services in the function binding extension. If you would like DI in your function project, the link above is all you need.

Most of the time you want your services to be able to log information or warnings when certain conditions occur. Once you have set up DI using the above approach, you can make the built-in logger available as a dependency to your services. The following shows an example of how to do that, using Castle Windsor as the container:

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Microsoft.Azure.WebJobs.Host.Config;
using Microsoft.Extensions.Logging;

public class RegisterDependencies : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        var container = new WindsorContainer();
        var logger = context.Config.LoggerFactory.CreateLogger<Processor>();

        container.Register(
            // Expose the runtime's logger as a dependency for services.
            Component.For<ILogger>().Instance(logger),
            Component.For<IServiceNeedLogger>().ImplementedBy<ServiceNeedLogger>());

        // InjectAttribute and InjectBindingProvider come from the DI approach linked above.
        context.AddBindingRule<InjectAttribute>()
            .Bind(new InjectBindingProvider(container));
    }
}
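For completeness, consuming the binding might look like the sketch below. The [Inject] attribute comes from the approach linked above; the queue name and the service's Process method are hypothetical:

using Microsoft.Azure.WebJobs;

public static class ProcessMessage
{
    [FunctionName("ProcessMessage")]
    public static void Run(
        [QueueTrigger("my-queue")] string message,
        [Inject] IServiceNeedLogger service)
    {
        // 'Process' is a hypothetical method on the injected service.
        service.Process(message);
    }
}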

Above is a list of notes that I hope are helpful. If you have any questions, please leave a comment here and I will try my best to answer.
