Lifting your serverless app with RabbitMQ, Kubernetes and Azure Functions. Part 2.

Deploying RabbitMQ to Kubernetes, configuring KEDA and switching functions triggers to the new queue.

Stas(Stanislav) Lebedenko
Microsoft Azure
7 min read · Oct 30, 2020


So here is part two; the first article is here, and the GitHub repo is here.

Why RabbitMQ and not Kafka? Because the latter is a distributed streaming platform, while the former is a message broker that is easier to start with, without extra effort.

TL;DR; In this article, I will share the steps to set up RabbitMQ in an existing Kubernetes cluster with deployed Azure Functions and KEDA. Then I will provide steps to set up a new queue, update the load balancer, and update the KEDA configuration to scale functions according to the RabbitMQ message count.

Before we begin: Azure Functions are great with the Consumption and Premium plans in the cloud, there is no need for K8s in 90% of situations, and this article is about that ten percent :).

Introduction

Migrating existing solutions from the cloud is usually not a very pleasant thing, but thankfully Microsoft provides tools and frameworks to make developer life easier. As I mentioned before, this particular guide is based on a client request that requires special compliance and the entire solution hosted in a private data center.

The action plan is pretty simple.

  • Follow the steps from the first article.
  • Install RabbitMQ in the Kubernetes cluster.
  • Set up a reverse proxy (NGINX) or load balancer for RabbitMQ.
  • Set up public access to the RabbitMQ management console.
  • Create a new RabbitMQ queue.
  • Update Azure Functions extension configuration.
  • Switch Azure Function bindings from Storage Queue to RabbitMQ queue.
  • Test the updated solution locally.
  • Create a new container version and deploy it to Kubernetes.
  • Update the KEDA configuration to handle the event-driven scale of RabbitMQ.
  • Deploy Kubernetes and infrastructure via Azure CLI script.

RabbitMQ setup

Let's install Helm.
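Something like this installs Helm v3 via the official script (a sketch; on Windows you can use Chocolatey or Scoop instead):

# Install Helm v3 using the official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash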

Then connect to the cluster and create a namespace for RabbitMQ.
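A sketch of the connection and namespace commands, assuming the AKS cluster from the first article (the resource group and cluster names are placeholders):

# Fetch cluster credentials for kubectl (names are placeholders)
az aks get-credentials --resource-group <resourceGroup> --name <aksCluster>

# Create a dedicated namespace for RabbitMQ
kubectl create namespace rabbitmq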

Deploy RabbitMQ with default credentials.
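A minimal sketch using the Bitnami chart, with the release name my-rabbitmq that the cluster blade refers to later (credentials are auto-generated):

# Add the Bitnami repository and install RabbitMQ with generated credentials
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-rabbitmq bitnami/rabbitmq --namespace rabbitmq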

Or use an alternative command that sets the provided credentials.
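For example (a sketch; the auth.* value names match recent Bitnami chart versions and may differ in older ones):

# Install RabbitMQ with explicit credentials instead of generated ones
helm install my-rabbitmq bitnami/rabbitmq --namespace rabbitmq \
  --set auth.username=user --set auth.password=<yourPassword>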

Observe the results with the command below.
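For example:

# Check that the RabbitMQ pod and services are up
kubectl get pods --namespace rabbitmq
kubectl get svc --namespace rabbitmq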

Now let's get the credentials from our instance. The default user name is user; the password can be obtained via the following command in cmd and converted from Base64 format here, to avoid additional PowerShell scripts :).
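A sketch, assuming the secret name created by the chart defaults (my-rabbitmq):

# Read the auto-generated password; the output is Base64-encoded
kubectl get secret --namespace rabbitmq my-rabbitmq -o jsonpath="{.data.rabbitmq-password}"

On Linux/macOS you can pipe the output through base64 --decode instead of using an online decoder.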

We will use the credentials to access the RabbitMQ management console and create a new queue below.

We can also open the cluster configuration via the Azure portal and observe the results. We now have RabbitMQ along with KEDA, but we still need to create a queue there so we can put messages into it.


Set up access and create a queue

For the local Kubernetes cluster, you should install and configure NGINX.
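For example, the NGINX ingress controller can be installed via its Helm chart (a sketch; this is only needed for local clusters, AKS users can rely on the Azure load balancer as shown below):

# Install the NGINX ingress controller for a local cluster
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx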

Proceed to the cluster configuration blade and open the “my-rabbitmq” section.

Then make the following changes to the YAML file, changing the type from ClusterIP to LoadBalancer; this will create a new public IP address along with new rules in the existing load balancer.
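A sketch of the relevant part of the service manifest after the change (the ports shown are RabbitMQ defaults):

apiVersion: v1
kind: Service
metadata:
  name: my-rabbitmq
  namespace: rabbitmq
spec:
  type: LoadBalancer   # was ClusterIP; Azure now provisions a public IP
  ports:
    - name: amqp
      port: 5672
    - name: stats
      port: 15672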

Change YAML and Save.

The YAML update will result in new RabbitMQ nodePort settings along with several additional settings, like the externalTrafficPolicy parameter and a new public IP address.

This YAML update will generate new load-balancing rules for the selected public IP address and RabbitMQ ports. If you want to disable or restrict access to your queues, just delete the balancing rules or limit access to your IP address only via a Network Security Group.

Be aware that if you try to assign RabbitMQ to an existing/used public IP address, this will result in an update error (visible via the Events tab).

If you need proper separation, there is an option to create a new backend pool with a different VMSS and IP configuration, then associate it with the virtual machines by upgrading them via the VMSS portal configuration blade.

After all the manifest updates, the RabbitMQ service should have a dedicated IP.

Queue configuration

Log in to the management console via the new IP address on port 15672, with the user name and password created earlier.

Create a new queue for Azure Functions.

Now let's form a connection string for the RabbitMQ queue. We can obtain the internal and public IP addresses from the K8s portal Services and ingresses page. The public IP address can be used for development purposes from the local machine.
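The AMQP connection string follows the standard URI format (the values below are placeholders):

# RabbitMQ connection string format
amqp://user:<password>@<public-or-internal-IP>:5672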

RabbitMQ service configuration overview.

Application update

We will start by adding the RabbitMQ extension to KedaFunctionsDemo.csproj; VS Code will ask you to run a package restore.
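One way to add it is via the dotnet CLI (the version below is the beta that was current at the time of writing; adjust as needed):

# Add the RabbitMQ extension package to the project
dotnet add KedaFunctionsDemo.csproj package Microsoft.Azure.WebJobs.Extensions.RabbitMQ --version 0.2.2029-beta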

Then proceed with the output trigger change for the Publisher function.
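A minimal sketch of the Publisher function with a RabbitMQ output binding (the HTTP trigger shape and parameter names are assumptions based on the demo setup; it goes inside the existing functions class with the usual Functions usings):

// Publisher: HTTP request in, RabbitMQ queue out
[FunctionName("Publisher")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
    [RabbitMQ(QueueName = "k8queue", ConnectionStringSetting = "RabbitMQConnection")] IAsyncCollector<string> outputQueue,
    ILogger log)
{
    // read the request body and publish it to the RabbitMQ queue
    string body = await new StreamReader(req.Body).ReadToEndAsync();
    await outputQueue.AddAsync(body);
    return new OkObjectResult("Message published");
}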

Then add the new input trigger for the Subscriber function; we will keep the output to the storage queue as is, to observe that messages are flowing across our pipeline.
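A sketch of the Subscriber function (the storage queue name is a placeholder; keep whatever the first article used):

// Subscriber: RabbitMQ queue in, Azure Storage queue out
[FunctionName("Subscriber")]
public static void Run(
    [RabbitMQTrigger("k8queue", ConnectionStringSetting = "RabbitMQConnection")] string myQueueItem,
    [Queue("<storageQueueName>", Connection = "AzureWebJobsStorage")] out string outputItem,
    ILogger log)
{
    log.LogInformation($"RabbitMQ message received: {myQueueItem}");
    outputItem = myQueueItem; // forward to the storage queue for verification
}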

There is also a configuration for a poison queue in the RabbitMQ trigger:

[RabbitMQTrigger("k8queue", ConnectionStringSetting = "RabbitMQConnection", DeadLetterExchangeName = "k8queue-poison")] string myQueueItem

We also need to update local.settings.json with a public connection string, so we can test the pipeline locally.
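For example (with placeholders for the real values):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage account connection string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "RabbitMQConnection": "amqp://user:<password>@<public-IP>:5672"
  }
}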

Then test the application locally with the func start command in the VS Code terminal and a curl command in CMD.
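For example (the route is an assumption based on the function name):

func start
curl -X POST http://localhost:7071/api/Publisher -d "Hello RabbitMQ"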

And double-check the results in the RabbitMQ management page and the Azure Storage queue.

Finally, we need to build a new container version, then run and test it locally.
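A sketch of the build (the registry and image names are placeholders):

# Build the updated image with a v2 tag
docker build -t <acrName>.azurecr.io/kedafunctionsdemo:v2 .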

Add the connection string to the curl below.
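A sketch of running the v2 container with your connection string and posting a test message (all values are placeholders):

# Run the container locally with the RabbitMQ connection string, then post a message
docker run -p 8080:80 -e RabbitMQConnection="amqp://user:<password>@<public-IP>:5672" <acrName>.azurecr.io/kedafunctionsdemo:v2
curl -X POST http://localhost:8080/api/Publisher -d "test message"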

Kubernetes manifest update

First, we need to create a copy of the k8_keda_demo.yml manifest and name it k8_keda_rabbit.yml.

Alternatively, you can just generate a new one with the functions dry-run option.
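For example (the app and registry names are placeholders):

# Generate a manifest without deploying it
func kubernetes deploy --name kedafunctionsdemo --registry <acrName>.azurecr.io --dry-run > k8_keda_rabbit.yml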

Now we need to update the container version to V2, update the KEDA scaler object to work with RabbitMQ, and encode the connection string as Base64.
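The encoding can be done with a one-liner (the connection string is a placeholder; from inside the cluster, the internal service DNS name is used instead of the public IP):

# Base64-encode the connection string for the Kubernetes secret
echo -n "amqp://user:<password>@my-rabbitmq.rabbitmq.svc.cluster.local:5672" | base64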

Update the part of the manifest with KEDA configured for RabbitMQ.
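A sketch of the ScaledObject with a RabbitMQ trigger, using the KEDA 1.x schema that was current when this was written (in KEDA 1.x the host value names an environment variable on the deployment that holds the AMQP URI):

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kedafunctionsdemo
  labels:
    deploymentName: kedafunctionsdemo
spec:
  scaleTargetRef:
    deploymentName: kedafunctionsdemo
  triggers:
  - type: rabbitmq
    metadata:
      queueName: k8queue
      queueLength: "5"     # target messages per replica
      host: RabbitMQConnection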

Deployment

We need to push the container to the container registry.
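For example (the registry name is a placeholder):

# Log in to the registry and push the v2 image
az acr login --name <acrName>
docker push <acrName>.azurecr.io/kedafunctionsdemo:v2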

Update the cluster manifest and deploy the container with Azure Functions.
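For example:

# Apply the updated manifest to the cluster
kubectl apply -f k8_keda_rabbit.yml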

Then use curl to test, observe the results via the RabbitMQ console, and find the processed messages in the Storage queue.
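For example (the external IP comes from the function service created by the manifest; the route is an assumption):

# Find the external IP of the function service, then post a test message
kubectl get svc
curl -X POST http://<external-IP>/api/Publisher -d "test message"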

Summary

The goal was to create a solution that is easy to reproduce and test; going to production might present additional challenges with configuration, security, and service separation in Kubernetes.

But the main point is that it's super easy to do, thanks to KEDA and Microsoft.

Still, I recommend considering the Kafka message broker, because the RabbitMQ function trigger hasn't reached general availability yet (I hope it will in a few months). The Azure Functions RabbitMQ binding is still in beta, but it is working well.

Lifting Azure SQL to an on-premises Linux container is the next step.
