Azure Functions

Merve Topal
Published in adessoTurkey
16 min read · Jul 21, 2023


Hello,

I hope you are well! In this article, I will tell you about Azure Functions. My advice is to read my previous article, where I discussed Azure Storage, before this one, because the two topics are closely related. At the end of the article, I will share a link to a project where I use Azure Storage and Azure Functions together. I separated the topics because both are quite comprehensive, and combining them would have made for a much longer article. After this short explanation, I want to start right away.

Have a pleasant and useful adventure, again! 😊

Azure Functions is a serverless (FaaS — Function as a Service) cloud service from Microsoft that allows you to run small pieces of code without worrying about the application infrastructure.

Serverless architecture is a structure that is growing in popularity. Servers are still used in serverless architectures, but the cloud provider handles all of their management. Now, onto Azure Functions… If the term “function” is in the name, it means there is a method involved, and if there is a method, there is code to run. Where will this code run? Normally, you would need to get a server, web hosting, an app service, etc. on Azure. This is where Azure Functions come into play. In effect, they tell us, “Don’t worry about the infrastructure; don’t worry about where your code will run; that’s our job.” Whether it’s IIS, Kestrel, Apache, etc., we can take our code directly to production and see it running without any hassle. This is the service that Azure Functions offer us, which is also called “serverless”: running a function without setting up any server infrastructure. It has become more popular, especially with microservices. Functions are available not only on Azure but also on other cloud platforms, such as AWS, Google Cloud, Alibaba Cloud, and IBM Cloud. They are very simple to use and code, and we can get our code into production in a short time without dealing with infrastructure.

Well, with so much convenience, there comes a cost. But don’t worry; it’s cost-effective. 😉

When we publish code written with Azure Functions to Azure, we only pay for what actually runs. The more memory our function uses and the longer it runs, the more we pay. If the function we put into production is never triggered, there is no execution charge. Of course, there are still some fixed costs, because a few mandatory services must be used alongside Azure Functions; Azure enforces this. One of them is Azure Storage. These are not huge fees, and I will talk about them later.

In summary, it is serverless and pay-as-you-go.

We said that we can easily take our method, which we call a “function”, into production. Another feature is automatic scaling. Say you are doing some processing, but you don’t want to do it inside your current application. For example, you want to process an image. You can use an Azure function instead of doing this within the app. Azure Functions can scale automatically; in other words, the more your function is triggered, the more instances will be available to respond to those triggers. 10 requests per second, 10,000,000 requests per second… all of that is Azure’s problem. It is handled independently of you. 😊 Normally, we would get an annual hosting plan to run our code, or we would set up an app service on Azure, and we would pay a monthly fee whether we used it or not. But here, it’s a pay-as-you-go system.

Azure Function Hosting Plan Types

When you create a function in Azure, you need to choose a hosting plan that determines how it scales and behaves. The hosting plan also determines which infrastructure features you can take advantage of and what resources are available to your function instances.

There are three hosting plans:

  • Consumption plan,
  • Premium plan,
  • Dedicated (App Service) plan.

Consumption plan:

Our code is based on the structures we call “functions”. Therefore, in order to run this code, we first need a function host; we call each function host an instance. When a function is triggered, whether an instance for that function already exists matters for performance. In the Consumption plan, no instance runs by default; unless a trigger occurs, the function is not running anywhere. When a trigger occurs, an instance is spun up, which adds extra latency to the first request (a cold start). Of course, this delay applies only to that first request, because the instance created for it will be reused for subsequent requests.

The Consumption plan scales automatically and is charged only when the relevant function is triggered; in other words, it follows the pay-as-you-go principle. Billing is based on factors such as the number of executions, execution time, and the amount of memory used.
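As a rough illustration of this billing model (Consumption billing is metered in GB-seconds plus a per-execution charge; the workload numbers below are hypothetical, so check the current Azure pricing page for real rates), usage scales with memory × time × executions:

```csharp
using System;

class ConsumptionBillingSketch
{
    // GB-seconds = memory (GB) * execution time (s) * number of executions
    static double GbSeconds(double memoryGb, double secondsPerRun, long executions)
        => memoryGb * secondsPerRun * executions;

    static void Main()
    {
        // Hypothetical workload: 1,000,000 executions, 512 MB, 0.5 s each
        double usage = GbSeconds(0.5, 0.5, 1_000_000);
        Console.WriteLine($"{usage} GB-s"); // 250000 GB-s
    }
}
```

Halving the memory or the execution time halves the metered usage, which is why keeping functions small and fast directly lowers the bill.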

Functions created with the Consumption plan are expected to finish within 10 minutes. If a function will run for more than 10 minutes (and even 10 minutes is a long time for a function to run), the Premium plan should be preferred. The default plan in Azure Functions is the Consumption plan.
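If your functions approach that limit, the timeout can be configured explicitly in host.json; as a sketch (in the Consumption plan the value can be raised to at most 10 minutes):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```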

Premium plan:

Unlike the Consumption plan, at least one default instance is always kept running. Thus, whether the incoming request is the first or the nth, there is no extra cold-start cost to spin up an instance. The number of always-ready instances can be increased as desired, and the maximum time a function may run on an instance can be configured manually.

The Premium plan gives unlimited function execution time, with a guarantee of up to 60 minutes per function. It is more suitable for heavy, long-running operations, and it makes pricing more predictable according to the weight of the work to be performed. Billing is calculated from the number of pre-warmed (default) instances, the number of CPU cores used, and the amount of memory used. As a result, a minimum monthly cost is inevitable on the Premium plan because of the default instances, so it clearly comes at an extra fee on top of the pay-as-you-go principle.

Thanks to the default instance, the Premium plan keeps the function host warm even when there are no requests. It offers more CPU and memory options than the Consumption plan, and it should be preferred over the Consumption plan for operations lasting longer than 10 minutes.

You can connect multiple Azure Functions to a single Azure Storage account. Multiple definitions do not have any negative impact on Azure, nor do they affect scalability or reliability in the slightest.

Dedicated (App Service) plan:

A plan where your functions run on an App Service plan you have already purchased in Azure, so you pay for that plan’s resources rather than per execution. It does not scale automatically.

Let’s look at a few useful features!

  • It has more than one SDK; while coding, you can use C#, Java, Node.js, Python… So, there is a wide range of language support. Of course, we will proceed with C# in this article.
  • You can use npm packages on the client side and NuGet packages on the server side; a custom package or a NuGet package makes no difference. I should also point out that using too many packages will increase the startup time of your Azure function, because these are meant to be lightweight services.
  • It works in full integration with other Azure services, because it lives in Azure. What does this mean? For example, if you are using Blob storage in Azure, you can say that your function should be triggered when an image is saved to the blob.
  • It can be integrated with related services such as Table storage, Blob storage, etc. by using bindings, without writing any connection code.
  • The Azure Functions runtime is open source.
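As a small sketch of the binding point above, an output binding lets a function write to a queue without any Queue SDK plumbing. The function and queue names below are made up for illustration, and this assumes the in-process model with the Microsoft.Azure.WebJobs attributes:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class BindingSample
{
    // The [Queue] output binding handles the storage connection for us;
    // no QueueClient code is needed anywhere in the function body.
    [FunctionName("ForwardToQueue")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("incoming-items")] out string queueMessage,
        ILogger log)
    {
        queueMessage = req.Query["item"];
        log.LogInformation("Forwarded item to the queue.");
        return new OkResult();
    }
}
```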

Services that Azure Functions is connected to:

Azure Functions actively uses Azure Storage. The information of an Azure function that we create in a Function App (which we will talk about later) is saved in Azure Storage. This means that an Azure Storage account is required for each Function App we create. So, can I create one Azure Storage account and connect it to more than one Function App? Yes, you can. However, the best practice is to create a separate storage account for each Function App. Does that mean I create a storage account for each function? No: a Function App can contain multiple Azure functions.

Azure storage is a must for an Azure function. When you create an Azure function, it will definitely ask you for a storage account.

Another is Application Insights… This is where we can monitor the number of function requests, response times, error states, etc. It would be appropriate to connect Azure Functions to an Application Insights service. It is enabled by default, but we can disable it if we want. When we use it, we can see the status of the application in real time, which shortens fault-detection time and helps us understand the application’s behavior.

The payments for these two services that Azure Functions depends on are separate. Application Insights is free up to a certain usage level. Even if our Azure function code never runs, you pay for these two friends; however, these are not large payments, and they remain low because no transactions are sent to these services while the function is idle. When the function runs, it starts using them. Application Insights is one of those rare services that is worth its money.

Azure Functions creation methods:

➜ Visual Studio / Visual Studio Code

IDE-based.

➜ Azure Portal

We can create an Azure function through the website.

➜ Azure Functions Core Tools

When you create functions in Visual Studio (VS) or Visual Studio Code (VS Code), the IDE actually uses a template and this command-line tool in the background. Running Azure Functions locally depends on it, so it must be installed on our machine; moreover, when we install the SDK, we also get the Core Tools. Although it is not the most popular method, we can also create a function directly from this command line if we want.
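For completeness, a quick sketch of what creating and running a function with the Core Tools looks like (assuming Core Tools v4 is installed; npm is one of several install options):

```shell
# Install once (one of several install options)
npm install -g azure-functions-core-tools@4

# Scaffold a .NET function app with an HTTP-triggered function
func init MyFunctionApp --worker-runtime dotnet
cd MyFunctionApp
func new --template "HTTP trigger" --name SampleFunction

# Run the functions host locally
func start
```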

I’m going through VS here.

Creating Functions with Visual Studio (VS):

With VS, we can create functions very easily by choosing ready-made templates for Azure Functions and test them locally. We need a tool for this: the Azure development workload. We install it in our IDE and start working with functions. This workload is not specific to functions; it allows us to code and publish many Azure services without ever going to the Azure portal. The second tool is Microsoft Azure Storage Explorer. Guys, this works together with a local storage emulator; I gave detailed information about it in my “Azure Storage” article, so you can check it out. In short, the emulator creates an Azure Storage environment locally, without going to the Azure portal.

Visual Studio ➜ Tools ➜ Get Tools and Features ➜ Visual Studio Installer ➜ Azure Development

After installing the Azure development workload, we can choose

New Project ➜ Azure Function

The ready template will be available.

Now, Azure functions must be triggered to run, so we will choose a trigger type when creating the function. You will encounter many triggers here. We will go with the HTTP trigger, which is easy to test and explain and which you are likely to use frequently. (In the project I linked, however, I used the queue trigger.)

As seen in the photo, it asks you for storage account information. What we need to know here is that the HTTP trigger gives us an endpoint, and when we send a request to this endpoint, our function is triggered.

Let’s explain step by step through the photo.

Here, I have a method named “Run”. As you can see, it is marked async, which means we can use async methods inside it. Notice that it is a static method, but it does not have to be; especially once DI comes into play, our methods and classes will not be static. The static class and static method are simply the template defaults. The name of our method is “Run”, and you can change this name if you wish. A FunctionName attribute is specified to indicate that this method is a function; thanks to this attribute, the method becomes a function. It also takes an attribute called HttpTrigger, because this method will be triggered by an HTTP request. So, what is this request; which endpoint will I work with? When we run the application, it will give us an endpoint. Our function will be triggered when we make GET and POST requests to that endpoint, as specified in the code sample.

Of course, if we want PUT, DELETE, etc., we can also create functions for those requests; it is enough to specify them. An authorization level is also specified here; I will explain it later. In the Route section, we can customize the route of our method. As you can see, it receives a parameter of type HttpRequest; thanks to this parameter, you can do whatever comes to mind with the request. You can think of it like an action in an API project: there, too, an action returns the IActionResult type. Depending on the trigger type you choose, the return type may change, or nothing may be returned. Also, a logger object has been injected into the method, so we can log as we want. OkObjectResult is specified as the return value, which indicates that an object will be sent with a 200 status code. If you only want to send the status code, returning an OkResult is enough.
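Since the photo may be hard to read, here is a sketch very close to the default VS template being described (the in-process model; the names are the template defaults):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Read "name" from the query string or the JSON body
        string name = req.Query["name"];
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        // 200 OK with an object in the response body
        return new OkObjectResult($"Hello, {name}");
    }
}
```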

Azure Function Trigger Types:

There are many trigger types; you will see them when you create a function in VS. Apart from the HTTP and timer triggers, the other triggers use Azure services. The HTTP and timer triggers do not depend on those services; in other words, we can trigger these two with requests or schedules we control from outside. The Queue trigger and Blob trigger, on the other hand, work in integration with the Queue and Blob services in Azure Storage.

You can use the Queue trigger if you have a process that should run when a message arrives in your queue. The function runs independently of the application that enqueued the message, so, for example, you can code your queuing process in .NET and your function in Java.
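A minimal sketch of such a queue-triggered function (the queue name is hypothetical; the connection setting points at the storage account from local.settings.json):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderQueueFunction
{
    // Runs whenever a message lands in the "orders" queue of the
    // storage account referenced by the AzureWebJobsStorage setting.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")] string message,
        ILogger log)
    {
        log.LogInformation($"Order message received: {message}");
    }
}
```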

Likewise, you can use the blob trigger if you have something to do when a file is saved to blobs. If you do not want to tire your main server for this process, it would be ideal to use a function.

Since the function will scale automatically, your process will proceed successfully.
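A blob trigger looks very similar; in this sketch (the container name is hypothetical), the {name} token in the path binds to the uploaded blob's file name:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ImageBlobFunction
{
    // Fires when a blob is created or updated in the "images" container.
    [FunctionName("OnImageUploaded")]
    public static void Run(
        [BlobTrigger("images/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name,
        ILogger log)
    {
        log.LogInformation($"Blob uploaded: {name}, size: {blob.Length} bytes");
    }
}
```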

The Cosmos DB trigger works with Cosmos DB, which you can think of as an upgraded version of Azure Table storage (which comes with Azure Storage); both are kinds of NoSQL databases that Azure offers us. I will not talk about the other trigger types you will see while creating a function, because each of them is a subject with its own atmosphere.

Azure Function File Structure

Whatever trigger type you choose, the same file structure applies. If we want, we can create more than one Azure Functions project in a solution, or we can create different functions within the function project we created. You can even create more than one function method in the function class generated after you create your function, provided that their FunctionName attribute values are different. In other words, you can create a separate function class for each of your functions, or you can create multiple trigger methods within a single function class.

When we create a function project, two files come with it: host.json and local.settings.json. Information about the runtime, such as logging configuration, is kept in host.json, and when you publish, this information goes to the portal. The information in local.settings.json, on the other hand, stays local and does not go to the portal.

As you can see inside this file, there is a runtime environment (dotnet). There is also a key named AzureWebJobsStorage. We said that the function needs to use storage while running; since we are currently working locally, it is set to “UseDevelopmentStorage=true”. When we move to the cloud environment, the connection string of the storage account we are connected to is added to the AzureWebJobsStorage field in the configuration section.
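For reference, a typical local.settings.json in a .NET function project looks roughly like this:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```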

What we need to know is that when we take our code to production, this file does not go with it. On the other hand, we can define the constant data we will use in the function here; you can think of it like the appsettings file in a regular .NET Core project. The following question may come to mind: if it does not go to the cloud, how will we provide this information there? We can enter it in the interface that comes up while publishing to the cloud.

In the Dependencies section of your project, as I mentioned before, we can use npm packages on the client-side and NuGet packages on the .NET server side.

Function Authorization Level:

As you can see in the photo, we specify the authorization level when creating the function. Let’s examine the levels now.

Function — At this level, you access your method with a code called the function key (you can think of it as a token) that you get from Azure.

Anonymous — The level to prefer if you want the method to be accessible without any authorization.

Admin — At this level, you access your method with a code called the master key, which you also get from Azure.

So, what’s the difference? On Azure, a default key is provided for the functions in the same Function App. With the default (function-level) key, you can access functions whose authorization level is Function, but not those set to Admin.

I should also mention that these levels don’t matter when working locally.
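Once deployed at the Function level, the key is passed via the code query parameter or the x-functions-key header; the host name, function name, and key below are placeholders:

```shell
# Function-level call: append the function key as ?code=...
curl "https://my-function-app.azurewebsites.net/api/Function1?name=Merve&code=<function-key>"

# Equivalent call passing the key in a header
curl -H "x-functions-key: <function-key>" \
  "https://my-function-app.azurewebsites.net/api/Function1?name=Merve"
```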

Dependency Injection:

How can we use the DI pattern? How can we receive our classes or interfaces in the constructor of any class? Let’s talk about these.

Earlier versions of Azure Functions did not support DI, but it is available with the newer releases. In this way, we can reduce our dependencies.

Consider the Startup class in any .NET Core API project. There, we register our classes and interfaces in the built-in DI container and then receive an object of any class through the constructor whenever we want.

We will do the same here. For this, we need to use a NuGet package.

Microsoft.Azure.Functions.Extensions

First of all, we create a class called “Startup” (the name is not important) that inherits from FunctionsStartup, and we override the Configure method.

As you know, we specify our dependencies in this Configure method. Also, an assembly attribute is added above the namespace, pointing to the FunctionsStartup type of the class we created. As you can see, we can prepare our DI container quite simply. One point to pay attention to: the function class is static by default, and we remove the static keyword because we will use DI.

Now we can receive the object through the constructor and use it.
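Putting the pieces together, a sketch of the Startup class and a non-static function using constructor injection (the service and namespace names are made up for illustration):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public interface IGreetingService { string Greet(string name); }
    public class GreetingService : IGreetingService
    {
        public string Greet(string name) => $"Hello, {name}";
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register dependencies just like in an ASP.NET Core Startup
            builder.Services.AddScoped<IGreetingService, GreetingService>();
        }
    }

    // Neither the class nor the method is static anymore
    public class GreetFunction
    {
        private readonly IGreetingService _greetingService;

        public GreetFunction(IGreetingService greetingService)
            => _greetingService = greetingService;

        [FunctionName("Greet")]
        public IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
            => new OkObjectResult(_greetingService.Greet(req.Query["name"]));
    }
}
```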

Publish to Azure Portal

Let’s talk about sending our functions to Azure in general terms, because explaining each step would make the article too long. :) There are a few things we need to set when publishing to the cloud. First, since functions work with storage, we must choose which storage account we will work with in the cloud; second, we need to set our environment variables. In other words, we need to specify how the values we defined in local.settings.json will be expressed in Azure, because, as you know, this file is not sent to the cloud.

When we create an Azure Functions project in VS, it corresponds to a Function App on the Azure portal. In the portal, we have to create the Function App before creating a function, because an Azure function must live inside a Function App. A Function App needs a storage account, and, as mentioned, the best practice is to create a separate storage account for each Function App, so that all transactions do not go to the same place; this matters for performance if the cost is acceptable and you have an important application running in production. On the other hand, the authorization level we chose now gains importance when publishing to the cloud; I mentioned the levels above.

We have options when publishing our function: we can publish directly to Azure, push it to a container registry in Azure, save it to a folder, or choose an existing profile. As a matter of fact, there is little need for Docker here, since the service already scales itself. Next, we choose the environment where our function will run: Windows, Linux, a container, or Azure Container Registry. By the way, Docker only works in the Linux environment; let’s pay attention to that, too. As we said at the beginning, the infrastructure is not important; what matters is that our function is alive. That is what publishing looks like in general.

I’ve left an example Azure Functions project on GitHub here.

https://github.com/adessoTurkey-dotNET/MT.AzureStorageCreateFile

I hope it will come in handy, and thank you for reading.

Stay healthy. :)

