Scale Azure Functions App on Kubernetes using KEDA

Arkaprava Sinha
Published in .Net Programming
6 min read · Mar 22, 2022

Today we will see how to scale our Azure Function App instances running inside a Kubernetes cluster.

Scaling: Scaling is a widely used term these days; it refers to how capably an IT resource handles growing and shrinking demand. It is one of the most important features of cloud computing, enabling businesses to meet changing demand driven by seasons, expansion, new projects, and more.

Horizontal Scaling: Horizontal scaling (aka scaling out) refers to adding additional nodes or machines to your infrastructure to cope with new demands. If you are hosting an application on a server and find that it no longer has the capacity or capabilities to handle traffic, adding a server may be your solution.

Vertical Scaling: Vertical scaling (aka scaling up) describes adding additional resources to a system so that it meets demand. How is this different from horizontal scaling? While horizontal scaling refers to adding additional nodes, vertical scaling describes adding more power to your current machines. For instance, if your server requires more processing power, vertical scaling would mean upgrading the CPUs. You can also vertically scale the memory, storage, or network speed.

KEDA: KEDA is a Kubernetes-based Event-Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA you can explicitly choose the apps you want to scale in an event-driven manner, while other apps continue to function as before. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.

[Image: How KEDA works. Credits: https://keda.sh/docs/2.6/concepts/]

Today we will scale out our Azure Function App to multiple pods depending on certain conditions, such as queue depth, CPU percentage, HTTP request count, etc.

So let’s start,

We will be using the items below:

  1. VS 2022 → For Creating our Function App
  2. Docker Desktop → For Local Kubernetes Cluster
  3. Azure Storage Emulator and Storage Explorer→ For Creating Azure Storage Queue locally and adding messages to the queue
  4. GitHub Workflow → For CI pipeline, to build our Image
  5. Argo CD → For Deploying our Application on Kubernetes Cluster
  6. KEDA → For Scaling our Function App

Today I will not show the steps for the GitHub Workflow and Argo CD; you can follow the link below.

So let’s start,

  1. First, we need to install KEDA in our local Kubernetes cluster. To do that, run the command below:
kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.6.1/keda-2.6.1.yaml

It will create a new keda namespace and register the KEDA API resources used to create ScaledObjects. You can verify the installation by running kubectl get pods -n keda.

2. Install the Azure Storage Emulator and run it. After that, open Storage Explorer and create a Storage Queue.

[Image: Create Queue]
[Image: Add Message window — you can add messages to the queue here]
[Image: Message Explorer]

3. Now that we have KEDA running and an Azure Storage Queue, open Visual Studio 2022 and create an Azure Function App project using the Azure Storage Queue Trigger template.

Once done, it will create a Function App. Configure it by providing the connection string in local.settings.json.

Right-click on the project and add Docker Support to generate a Dockerfile. This Dockerfile will build our image.

[Image: TestQueue]
Copy the Primary Connection String for the Storage Account from Storage Explorer and add it to local.settings.json.
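For local development against the emulator, local.settings.json can simply point at development storage; if you are using the copied Primary Connection String instead, paste it as the AzureWebJobsStorage value. A minimal sketch (the queue-name setting is illustrative — use whatever name your queue trigger binding references):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}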

4. Once done, we will create two folders in the root directory for the GitHub workflow and deployment YAMLs.

We will also add them to our Solution. (If you don't know why we are adding these, please follow my earlier blogs mentioned at the start.)

[Image: Our Solution]

Let’s run it,

You can see that our message "test" appears in the logs.

So our function app is working.

5. Now let's create our deployment manifests:

i. deployment-dev.yml — creates our Kubernetes Deployment, whose pod runs our function app container.
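A minimal sketch of what deployment-dev.yml might look like. The names, labels, and image tag here are illustrative assumptions; the actual manifest is in the GitHub repo linked at the end.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queuefunction-dev            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: queuefunction
  template:
    metadata:
      labels:
        app: queuefunction
    spec:
      containers:
      - name: queuefunction
        # image published to GitHub Container Registry by the workflow (tag is illustrative)
        image: ghcr.io/arkapravasinha/kubequeuefunction:latest
        env:
        # the connection string comes from the Secret created by secrets-dev.yml
        - name: AzureWebJobsStorage
          valueFrom:
            secretKeyRef:
              name: queuefunction-secrets
              key: AzureWebJobsStorage
```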

ii. secrets-dev.yml — creates a Secret holding our Azure Storage Account connection string and queue name, used by both the scaled object and the function app. Remember to Base64-encode your secret values before using them.
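A sketch of secrets-dev.yml under the same assumed names. Kubernetes Secret data values must be Base64-encoded (for example, echo -n 'testqueue' | base64 yields dGVzdHF1ZXVl); the connection-string value below is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: queuefunction-secrets        # hypothetical name
type: Opaque
data:
  # Base64-encode the real connection string, e.g.: echo -n '<connection-string>' | base64
  AzureWebJobsStorage: <base64-encoded-connection-string>
  queueName: dGVzdHF1ZXVl            # Base64 of "testqueue"
```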

iii. service-dev.yml — exposes our function app over HTTP, although it is of little use here, since our function app is triggered by the Azure Storage Queue.
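A sketch of service-dev.yml, selecting the same (assumed) pod label as the Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: queuefunction-service        # hypothetical name
spec:
  selector:
    app: queuefunction               # must match the Deployment's pod label
  ports:
  - port: 80
    targetPort: 80
```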

iv. kedascaler-triggerauth-dev.yml — creates the authentication reference for our KEDA scaler.
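A sketch of kedascaler-triggerauth-dev.yml. A TriggerAuthentication maps a Secret key to the parameter the scaler expects; for KEDA's azure-queue scaler that parameter is "connection". Names are the same illustrative ones used above:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: queuefunction-trigger-auth   # hypothetical name
spec:
  secretTargetRef:
  - parameter: connection            # parameter expected by the azure-queue scaler
    name: queuefunction-secrets      # the Secret from secrets-dev.yml
    key: AzureWebJobsStorage
```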

v. kedascaler-scaledobject-dev.yml — creates our scaler, which checks the queue depth at regular intervals and scales our application. If there are no messages in the queue, it scales the deployment down to zero pods; when a new message comes in, it creates a pod again, and if the queue depth grows beyond the target, it creates additional pods.
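A sketch of kedascaler-scaledobject-dev.yml. The intervals and replica bounds here are illustrative choices, not the repo's exact values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queuefunction-scaledobject   # hypothetical name
spec:
  scaleTargetRef:
    name: queuefunction-dev          # the Deployment from deployment-dev.yml
  pollingInterval: 30                # seconds between queue-depth checks
  cooldownPeriod: 300                # wait before scaling back down to zero
  minReplicaCount: 0                 # scale to zero when the queue is empty
  maxReplicaCount: 5
  triggers:
  - type: azure-queue
    metadata:
      queueName: testqueue
      queueLength: "1"               # target messages per replica
    authenticationRef:
      name: queuefunction-trigger-auth
```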

So now our application is ready. Push it to GitHub; the workflow will build an image, upload it to the GitHub Container Registry, and create a release. Please follow the link below to see how the GitHub workflow works.

6. Let's configure Argo CD for our deployment; follow the link below.

Once you configure and sync it, it will look like below.

Currently it is showing OutOfSync because there are no messages in the queue and the scaler has scaled the pod down to zero.

Let’s push some messages and see,

[Image: Pushed multiple messages]

You can see multiple pods have been created to manage the demand.

[Image: Multiple pods have been created]

After a few minutes, when all the messages have been processed, it will scale back down to 0.

[Image: Now 0 pods available]

Let's look at our Kubernetes dashboard to see what happened.

[Image: Our Deployment Events]

As you can see, it scaled to 4 replicas, so our scaler is working.

References:

  1. GitHub Repo: https://github.com/arkapravasinha/KubeQueueFunction
  2. KEDA: https://keda.sh/
  3. Argo CD with GitHub Workflow/Actions:


Senior Software Engineer @Walmart, Cloud, IoT and DevOps Enthusiast