Lifting your serverless app to on-premises with KEDA and K8s.

Or how to move Azure Functions to your client's data center.


Why? A few years ago, the main scenario was lifting applications to the cloud; nowadays, things have changed. Last spring, I was asked to move an existing serverless application built around Azure Functions to on-premises for compliance reasons.

This publication is a part of a #AzureFamily community event, check other publications at https://azurebacktoschool.tech and follow #AzureBacktoSchool.

TL;DR: In this article, I will share the steps to move a serverless application from the Consumption plan to a client data center with the help of Kubernetes and KEDA for event-driven cluster scaling.

This is the first article in a series of three. The following two articles will be dedicated to replacing Azure Storage Queue with a RabbitMQ broker and replacing Azure SQL Database with SQL Server on Linux.

About 90% of customers are fine with Azure and the cost-effective Functions Consumption plan in the cloud, but there are others who need mixed edge scenarios or the entire infrastructure in their private data centers. And sometimes those 10% of customers generate around 30% of annual profits :).

Kubernetes (K8s) is by now unavoidable and something everyone should know at least at a basic level. The problem with K8s is its reactive scaling, which is not a good fit for event-driven apps, but Microsoft together with the open-source community created KEDA to solve this issue.

With KEDA, a K8s cluster is able to scale based on the number of events in a message broker, rather than on consumed memory and CPU. While this is not strictly necessary in general scenarios, it is the recommended approach for Functions based on queue triggers.

In this article I will provide an end-to-end tutorial based on the Azure Functions Core Tools, a new Azure Functions app, an Azure CLI script for the infrastructure, and kubectl.

For an existing application, you need to add a Dockerfile for the function app, install KEDA, and create a Kubernetes YAML manifest for the container deployment.

The cloud architecture:

1. Message producer app - Azure Functions Consumption plan with an HTTP trigger and a Storage Queue output binding.
2. Message bus - Azure Storage queue.
3. Message consumer app - Azure Function with a Storage Queue trigger.

The new on-premises Kubernetes architecture:

1. Kubernetes cluster (AKS to create a prototype in 30 minutes)
2. KEDA
3. Message producer app - HTTP Function in Docker
4. Message bus - RabbitMQ (or the same Storage queue for the moment)
5. Message consumer app - Azure Functions in Docker, same triggers
6. Good mood :)

But let’s proceed with the tutorial.

The action plan is pretty simple.

  • Install Azure Functions Core Tools (.NET Core 3.1) and the Azure CLI.
  • Install Docker.
  • Deploy Kubernetes and infrastructure via Azure CLI script.
  • Scaffold new Azure Function application.
  • Create a docker container and test the application.
  • Push the container to the private container registry (ACR).
  • Deploy container to Azure Kubernetes cluster (AKS).
The Azure CLI script for the infrastructure deployment:
https://gist.github.com/staslebedenko/0a0c7ea0cf21d1f29d74ab1db3190cb5
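If you prefer not to open the gist, the infrastructure script boils down to something like the following sketch. All resource names here (k82-group, k82registry, k82-cluster, k82storagedemo) are placeholders of my choosing; adjust them and the location to your subscription.

```shell
#!/usr/bin/env bash
set -euo pipefail

location=westeurope      # placeholder region
group=k82-group          # placeholder resource group
registry=k82registry     # ACR names must be lowercase alphanumeric
cluster=k82-cluster
storage=k82storagedemo   # storage account names must be globally unique

az group create --name "$group" --location "$location"

# Private container registry with the admin account enabled for simple docker login
az acr create --resource-group "$group" --name "$registry" --sku Basic --admin-enabled true

# Two-node AKS cluster with pull access to the registry
az aks create --resource-group "$group" --name "$cluster" \
  --node-count 2 --generate-ssh-keys --attach-acr "$registry"

# Storage account and the queue used by the functions
az storage account create --name "$storage" --resource-group "$group" --sku Standard_LRS
az storage queue create --name k8queue --account-name "$storage"

# Print the connection string to save for the next steps
az storage account show-connection-string --name "$storage" --resource-group "$group"
```
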

Save the storage account connection string; we will use it below. Let's create a new Azure Functions application (or use my GitHub repository).

func init KedaFunctionsDemo --worker-runtime dotnet --docker
cd KedaFunctionsDemo
func new --name Publisher --template "HTTP trigger"
func new --name Subscriber --template "Queue trigger"
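The --docker switch also scaffolds a Dockerfile. For the .NET Core 3.1 worker it should look roughly like this (the exact image tags may differ depending on your Core Tools version):

```dockerfile
# Build stage: publish the function project
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot

# Runtime stage: the official Azure Functions host image
FROM mcr.microsoft.com/azure-functions/dotnet:3.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
```
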

Navigate to the KedaFunctionsDemo folder and update the AzureWebJobsStorage value with the storage account connection string. Add the output binding and, most importantly, change the authorization level on the Publisher function from AuthorizationLevel.Function to AuthorizationLevel.Anonymous.
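Assuming the default in-process templates, the resulting pair of functions might look roughly like the sketch below; the queue name k8queue matches the scaler metadata shown later in this article, and the namespace is my assumption:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace KedaFunctionsDemo
{
    public static class Publisher
    {
        // Anonymous auth level, so the function works without host keys inside the cluster.
        [FunctionName("Publisher")]
        [return: Queue("k8queue", Connection = "AzureWebJobsStorage")]
        public static string Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
            ILogger log)
        {
            string name = req.Query["name"];
            log.LogInformation($"Publishing message for {name}");
            return $"Hello, {name}"; // the return value goes to the queue output binding
        }
    }

    public static class Subscriber
    {
        [FunctionName("Subscriber")]
        public static void Run(
            [QueueTrigger("k8queue", Connection = "AzureWebJobsStorage")] string myQueueItem,
            ILogger log)
        {
            log.LogInformation($"Processed: {myQueueItem}");
        }
    }
}
```
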

Now you can start and test the application.

func start --build --verbose
curl --get http://localhost:7071/api/Publisher?name=New%20Publisher

Now let's build and run the Docker container locally, but first we need to tag the image with the registry name from the CLI script (k82registry). Be aware that the storage account connection string is needed to start the container.

docker build -t k82registry.azurecr.io/kedafunctionsdemo:v1 .
docker run -p 9090:80 -e AzureWebJobsStorage={storage string without quotes} k82registry.azurecr.io/kedafunctionsdemo:v1

Check the results by navigating to http://localhost:9090/


And check the functions with curl.

curl --get http://localhost:9090/api/Publisher?name=New%20Publisher

Stop all containers before proceeding further by running the following command in CMD.

FOR /f "tokens=*" %i IN ('docker ps -q') DO docker stop %i

Now we will proceed with Kubernetes setup and deployment.

First, we need to get access to the cluster from the command shell and push the container to the container registry with the help of the following commands.

https://gist.github.com/staslebedenko/c6375bb0e999edcc304ff35d5d468ff1
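The gist above amounts to commands along these lines (resource names assumed from the earlier infrastructure sketch):

```shell
# Merge the AKS credentials into the local kubeconfig
az aks get-credentials --resource-group k82-group --name k82-cluster

# Authenticate docker against the private registry and push the image
az acr login --name k82registry
docker push k82registry.azurecr.io/kedafunctionsdemo:v1

# Sanity check: the cluster nodes should be listed
kubectl get nodes
```
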

There are several options to install KEDA: the Functions Core Tools, Helm, or kubectl. To speed things up, we will do it from the project directory with the func command.

func kubernetes install --namespace keda
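If you would rather use Helm, the installation is roughly the following (Helm 3 syntax; check the KEDA documentation for the current chart version):

```shell
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```
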

We will install KEDA, generate the Kubernetes cluster manifest, adjust it, and then deploy the application.

https://gist.github.com/staslebedenko/db42075b68e86196823865b0af6bb53c
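The deployment step in the gist can be sketched as follows: generate the manifest with a dry run, review it, then apply it with kubectl (names assumed from this article):

```shell
# Generate the manifest without applying it
func kubernetes deploy --name k82-cluster \
  --registry k82registry.azurecr.io --dry-run > k8_keda_demo.yml

# Review the file, then deploy
kubectl apply -f k8_keda_demo.yml

# Watch the pods come up (the consumer scales with the queue length)
kubectl get pods --watch
```
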

In a few minutes, the cluster will be up and ready to receive your requests, write them to the storage queue, and process them with the consumer function.

Docker push, KEDA installation and cluster YAML generation.

It's a good idea to double-check the generated k8_keda_demo.yml file and compare the container images with those published in the container registry. In my case the versions were different: the YAML file referenced :v1, while the registry contained a different tag.

spec:
  selector:
    matchLabels:
      app: k82-cluster
  template:
    metadata:
      labels:
        app: k82-cluster
    spec:
      containers:
      - name: k82-cluster
        image: k82registry.azurecr.io/kedafunctionsdemo:v1
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: Subscriber
        envFrom:
        - secretRef:
            name: k82-cluster
      serviceAccountName: k82-cluster-function-keys-identity-svc-act
---
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: k82-cluster
  namespace: default
  labels:
    deploymentName: k82-cluster
spec:
  scaleTargetRef:
    deploymentName: k82-cluster
  triggers:
  - type: azure-queue
    metadata:
      type: queueTrigger
      connection: AzureWebJobsStorage
      queueName: k8queue
      name: myQueueItem
After re-deployment, everything went well.

Pods status after deployment.

The first thing we need is the public IP address of the load balancer to access the application with curl.
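Assuming the HTTP function was exposed with a LoadBalancer service, the address shows up in the EXTERNAL-IP column:

```shell
# The load balancer address appears once Azure provisions the public IP
kubectl get service --namespace default
```
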

Public IP address of the cluster.
curl --get http://cluster-ip/api/Publisher?name=New%20Publisher

Then we can proceed with load testing with the help of loader.io and adjust the cluster scalability settings if requests start to fail :).

Testing Azure Function with loader.io free tier.

There is no need to write a special version of your application for on-premises use; Azure Functions can run on Kubernetes if needed, along with RabbitMQ, Kafka, and Microsoft SQL Server.

Some components of the migration are missing, but I will cover the points below in the following articles, so stay tuned :).

  • How to set up Minikube on Windows 10 and run everything locally.
  • How to replace Azure Storage Queue with RabbitMQ.
[RabbitMQTrigger("queue", ConnectionStringSetting = "RabbitMQConnection")] string inputMessage,
  • How to set up a containerized Microsoft SQL Server.
  • How to secure the cluster and store logs.

That’s it, thanks for reading. Cheers!


Stas(Stanislav) Lebedenko

Written by

Azure MVP | Senior dev @ SigmaSoftware | Odesa MS .NET/Azure group | Cloud solution architect | Azure Serverless fan 🙃| IT2School/AtomSpace volunteer #msugodua

Microsoft Azure

Any language. Any platform. Our team is focused on making the world more amazing for developers and IT operations communities with the best that Microsoft Azure can provide. If you want to contribute in this journey with us, contact us at medium@microsoft.com

