Nowadays, cloud and hybrid cloud solutions are in high demand. In this post, we will talk about a technical solution for business logic that is supported by multiple cloud providers and can run on-premises without any kind of issues.
The requirements for the given scenario are the following:
We need to be able to expose an API that can execute tasks that run for 2–5 minutes each, in parallel. The native platform shall be Microsoft Azure, but the solution should also be able to run on dedicated hardware in specific countries like China and Russia. With minimal effort, it should be able to run on the AWS platform.
The team we had available was a .NET Core team with good ASP.NET skills. There are numerous services on Azure that can also run in other environments; the most attractive ones are those built on top of Kubernetes and microservices.
Even so, we decided to do things a little differently. We had to take into consideration that autoscaling functionality would be important. In addition, the team needed to deliver the first version on top of Azure, its Kubernetes capabilities were not so strong, and the delivery timeline was strict.
Taking this into consideration, we decided to go with a two-step approach. The first version of the solution would be built on top of Azure Functions 2.x, fully hosted on Microsoft Azure, with .NET Core 2.x as the programming platform.
To be able to deliver what is required, the following bindings need to be used:
- HTTP, which plays the role of the trigger. External systems use this binding to kick off the execution.
- Blob Storage, which plays the role of data input. The chunk of data is delivered in JSON format by the external system and loaded into blob storage before execution starts.
- Webhooks, which play the role of data output. An external system is notified and the result (scoring information) is provided.
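As a sketch, the trigger and input bindings above could be declared in a `function.json` similar to the following (container, path, and binding names are illustrative, not taken from the real project; in a .NET class library the same bindings are expressed as attributes):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "post" ],
      "authLevel": "function"
    },
    {
      "type": "blob",
      "direction": "in",
      "name": "inputPayload",
      "path": "tasks/{taskId}.json",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Note that Azure Functions 2.x has no generic webhook output binding, so the outgoing notification carrying the scoring information is typically made from the function code itself, for example with `HttpClient`.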
The power of Azure Functions is revealed at this moment: the team can focus on implementation, with minimal effort spent on the infrastructure part. Initially, the solution will be hosted using the Consumption Plan, which enables us to scale automatically and pay per use.
Even if it might be a little more expensive than the App Service Plan, where dedicated resources are allocated, the Consumption Plan is a good starting point. Later, based on the consumption level, the solution might or might not be migrated to the App Service Plan.
When you are using the Consumption Plan, a function can run for a maximum of 10 minutes (the default is 5 minutes, configurable in host.json). The initial estimate of a task's duration is 2–5 minutes, so there is a medium risk that the timeout limit will be reached. To mitigate this risk, during the implementation phase the team will execute stress tests with real data to estimate the task duration in the Azure Functions context.
In addition, a custom report with different KPIs related to execution times will be generated every week. A scatter chart combined with a clustering chart should be more than enough. The reports are generated in Power BI.
The mitigation plan for this is to migrate to the App Service Plan, where you have the ability to control what kind of resources you have available and to allocate dedicated resources only for this workload. On top of this, on the App Service Plan the timeout limitation can be removed entirely, so you can run a function as long as you want.
There are also other mechanisms to mitigate it, like code optimization or splitting the execution into multiple functions, but this will be decided later, only if there are problems with the execution timeout.
Remarks: On the Consumption Plan, both Azure Functions 1.x and 2.x default to a 5-minute execution timeout that can be raised to at most 10 minutes. The 30-minute value is the default on the App Service Plan, where the limit can also be removed completely.
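The timeout itself is controlled through the `functionTimeout` setting in `host.json`. A minimal sketch, with an example value:

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```

On the App Service Plan, setting `functionTimeout` to `-1` removes the limit entirely; on the Consumption Plan, values above 10 minutes are not accepted.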
If you want a stable on-premises system that can run high loads, the currently supported hosting option for Azure Functions is a Kubernetes cluster. The cool thing that Microsoft offers us is the ability to run Azure Functions inside a Kubernetes cluster as a Docker image.
The tooling available on the market at this moment allows us to create, from an Azure Function, a Docker image that is already configured with a Horizontal Pod Autoscaler. This enables us, without doing any custom configuration, to have an Azure Function hosted inside Kubernetes as a container that scales automatically based on the load. Besides this, the Deployment and Service configuration is also generated.
The tool that allows us to do this is called Core Tools, and it is developed by the Azure Functions team. Besides this, because it is a command-line tool, it can be easily integrated with the CI/CD systems that we already have in place.
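A minimal sketch of this workflow with Core Tools (project, function, and registry names are placeholders, and the exact flags may differ between Core Tools versions):

```shell
# Scaffold a new Functions project together with a Dockerfile.
func init TaskRunner --worker-runtime dotnet --docker
cd TaskRunner

# Add an HTTP-triggered function.
func new --template "HTTP trigger" --name RunTask

# Build the Docker image and generate the Kubernetes Deployment,
# Service, and Horizontal Pod Autoscaler configuration.
func deploy --platform kubernetes --name taskrunner --registry <your-docker-registry>
```

Because these are plain command-line calls, the same steps can be scripted inside the CI/CD pipeline.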
The same solution as for on-premises can be used to host our Azure Functions inside AWS EKS or any other service based on Kubernetes.
The official support in Core Tools allows us to create Docker images and deploy them to Kubernetes using/on top of:
- AKS (Azure Kubernetes Services)
- ACR (Azure Container Registry) — Image hosting
- ACS (Azure Container Services)
- AWS EKS
Azure Functions Core Tools is available for download from:
- Github — https://github.com/Azure/azure-functions-core-tools
- npm — azure-functions-core-tools
- choco — azure-functions-core-tools
- Brew — azure-functions-core-tools
- Ubuntu — https://packages.microsoft.com/config/ubuntu/XX.XX/packages-microsoft-prod.deb
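For example, installing the 2.x line through these package managers looks roughly like this (the `@2` version pin and the `--unsafe-perm` flag apply to the npm route):

```shell
# npm (Windows/macOS/Linux)
npm install -g azure-functions-core-tools@2 --unsafe-perm true

# Chocolatey (Windows)
choco install azure-functions-core-tools

# Homebrew (macOS)
brew tap azure/functions
brew install azure-functions-core-tools
```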
As we can see, at this moment Azure Functions does not lock us into running our solution only inside Azure. We have the ability to take our functions and spin them up as a console application or even inside Kubernetes. Azure Functions Core Tools enables us to create Docker images and run them inside Kubernetes in any kind of environment.