Lifting your serverless app with RabbitMQ, Kubernetes, and Azure Functions. Part 2.
Deploying RabbitMQ to Kubernetes, configuring KEDA and switching functions triggers to the new queue.
So here is part two; the first article is here, and the GitHub repo is here.
Why RabbitMQ and not Kafka? Because the latter is a distributed streaming platform, while the former is a message broker that is easier to start with and needs no extra effort.
TL;DR: In this article, I share the steps to set up RabbitMQ in the existing Kubernetes cluster with deployed Azure Functions and KEDA. Then I show how to set up a new queue, update the load balancer, and update the KEDA configuration to scale the function according to the RabbitMQ message count.
Before we begin: Azure Functions are great with the Consumption and Premium plans in the cloud, and there is no need for K8s in 90% of situations; this article is about the other ten percent :).
Introduction
Migrating existing solutions from the cloud is usually not a pleasant task, but thankfully Microsoft provides tools and frameworks to make developer life easier. As I mentioned before, this particular guide is based on a client request that requires special compliance and the entire solution hosted in a private data center.
The action plan is pretty simple.
- Follow the steps from the first article.
- Install RabbitMQ in the Kubernetes cluster.
- Set up a reverse proxy (NGINX) or load balancer for RabbitMQ.
- Set up public access to the RabbitMQ management console.
- Create a new RabbitMQ queue.
- Update Azure Functions extension configuration.
- Switch Azure Function bindings from Storage Queue to RabbitMQ queue.
- Test the updated solution locally.
- Create a new container version and deploy it to Kubernetes.
- Update the KEDA configuration to handle the event-driven scale of RabbitMQ.
- Deploy Kubernetes and infrastructure via Azure CLI script.
RabbitMQ setup
Let's install Helm and a base64 utility.
choco install kubernetes-helm
choco install base64
Then connect to the cluster and create a namespace for RabbitMQ.
az login
az account set --subscription 07b49539-95cdbc61df8c
az account show
az acr login --name k82registry
az aks get-credentials --resource-group k82-cluster --name k82-cluster --overwrite-existing
kubectl config use-context k82-cluster
kubectl get pods
kubectl create namespace k8rabbit
Deploy RabbitMQ with default credentials.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo bitnami
helm install my-rabbitmq bitnami/rabbitmq --namespace k8rabbit
Or use an alternative command that also sets the provided credentials (with Helm 3, the release name is passed as the first argument rather than via --name).
helm install my-rabbitmq --set rabbitmq.username=user,rabbitmq.password=PASSWORD,service.type=LoadBalancer bitnami/rabbitmq --namespace k8rabbit
Observe the results with the command below.
kubectl get deployments,pods,services --namespace k8rabbit
Now let's get the credentials for our instance. The default user name is user; the password can be obtained via the following command in cmd and then converted from base64 format here, to avoid additional PowerShell scripts :).
kubectl get secret --namespace k8rabbit my-rabbitmq -o jsonpath="{.data.rabbitmq-password}"
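Since we installed a base64 utility above, the decoding step can also be done right in the terminal. A minimal sketch with a made-up encoded value (not a real password):

```shell
# Decode a base64-encoded secret value; the input here is a made-up example
echo "cGFzc3dvcmQxMjM=" | base64 --decode   # prints: password123
```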
We will use the credentials to access the RabbitMQ management console and create a new queue below.
We can also open the cluster configuration via the Azure portal and observe the results. We now have RabbitMQ along with KEDA, but we still need to create a queue there so we can put messages into it.
Set up access and create a queue
For the local Kubernetes cluster, you should install and configure NGINX.
Proceed to the cluster configuration blade and open the "my-rabbitmq" section.
Then make the following change to the YAML file: change the service type from ClusterIP to LoadBalancer. This will create a new public IP address along with new rules in the existing load balancer.
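A sketch of the edit, assuming the service name and namespace from the Helm release above (the port list is abbreviated; only the type line changes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-rabbitmq
  namespace: k8rabbit
spec:
  type: LoadBalancer   # was: ClusterIP
  ports:
    - name: amqp
      port: 5672
    - name: stats
      port: 15672
```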
The YAML update will result in new RabbitMQ nodePort settings along with several additional settings, such as the externalTrafficPolicy parameter and the new public IP address.
It will also generate new load-balancing rules for the selected public IP address and RabbitMQ ports. If you want to disable or restrict access to your queues, just delete the balancing rules or limit access to your IP address only via a Network Security Group.
Be aware that if you try to assign RabbitMQ to an existing, already used public IP address, this will result in an update error (visible on the Events tab).
If you need proper separation, there is an option to create a new backend pool with a different VMSS and IP configuration, then associate it with the virtual machines by upgrading them via the VMSS portal configuration blade.
After all manifest updates, the RabbitMQ service should have a dedicated IP.
Queue configuration
Log in to the management console via the new IP address on port 15672, with the user and password created earlier.
Create a new queue for Azure Functions.
Now let's form a connection string for the RabbitMQ queue. We can obtain the internal and public IP addresses from the K8s portal's Services and Ingresses page. The public IP address can be used for development purposes from the local machine.
default format - amqp://user:password@url:port
internal format - amqp://user:password@10.240.0.71:5672
public format - amqp://user:password@20.67.128.53:5672
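If you ever need to double-check which part is which, the pieces of the AMQP URL can be pulled apart with plain shell parameter expansion. A throwaway sketch using the internal placeholder address from above:

```shell
# Split an AMQP connection string into its parts (illustration only;
# the URL is the internal placeholder address from this article)
url="amqp://user:password@10.240.0.71:5672"
hostport="${url##*@}"                          # 10.240.0.71:5672
creds="${url#amqp://}"; creds="${creds%%@*}"   # user:password
echo "host=${hostport%%:*} port=${hostport##*:} user=${creds%%:*}"
```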
Application update
We will start by adding the RabbitMQ extension to KedaFunctionsDemo.csproj; VS Code will ask you to run a package restore.
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.RabbitMQ" Version="0.2.2029-beta" />
Then proceed with an output binding change for the Publisher function.
[RabbitMQ(QueueName = "k8queue", ConnectionStringSetting = "RabbitMQConnection")] out string message
And add the new trigger for the Subscriber function; we will keep the output to the storage queue as is, to observe that messages are running across our pipeline.
[RabbitMQTrigger("k8queue", ConnectionStringSetting = "RabbitMQConnection")] string myQueueItem
There is also a configuration for a poison queue in the RabbitMQ trigger:
[RabbitMQTrigger("k8queue", ConnectionStringSetting = "RabbitMQConnection", DeadLetterExchangeName = "k8queue-poison")] string myQueueItem
We also need to update local.settings.json with a public connection string, so we can test the pipeline locally.
"RabbitMQConnection": "amqp://user:password@20.67.128.53:5672",
And test the application locally with the start command in the VS Code terminal and a curl command in CMD.
func start --build --verbose
curl --get http://localhost:7071/api/Publisher?name=New%20Publisher
And double-check results in the RabbitMQ management page and Azure Storage Queue.
Finally, we need to build a new container version, run it, and test it locally.
Add the storage and RabbitMQ connection strings to the docker run command below.
docker build -t k82Registry.azurecr.io/kedafunctionsdemo:v2 .
docker run -p 9090:80 -e AzureWebJobsStorage={storage string without quotes} -e RabbitMQConnection={RabbitMQ connection string without quotes} k82Registry.azurecr.io/kedafunctionsdemo:v2
curl --get http://localhost:9090/api/Publisher?name=New%20Publisher
Kubernetes manifest update
First, we need to create a copy of the k8_keda_demo.yml manifest and name it k8_keda_rabbit.yml.
Alternatively, you can generate a new one with the Functions dry-run option.
func kubernetes deploy --name k82-cluster --image-name "k82Registry.azurecr.io/kedafunctionsdemo:v2" --dry-run > k8_keda_rabbit.yml
Now we need to update the container version to v2, update the KEDA ScaledObject to work with RabbitMQ, and encode the connection string as Base64.
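The Base64 encoding itself is a one-liner; the example below uses the internal placeholder connection string from above, and printf (rather than echo) keeps a trailing newline out of the encoded value:

```shell
# Encode the RabbitMQ connection string for the Kubernetes Secret;
# printf avoids encoding a trailing newline into the secret value
printf '%s' "amqp://user:password@10.240.0.71:5672" | base64
# -> YW1xcDovL3VzZXI6cGFzc3dvcmRAMTAuMjQwLjAuNzE6NTY3Mg==
```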
Update the part of the manifest where KEDA is configured for RabbitMQ.
spec:
  selector:
    matchLabels:
      app: k82-cluster
  template:
    metadata:
      labels:
        app: k82-cluster
    spec:
      containers:
        - name: k82-cluster
          image: k82Registry.azurecr.io/kedafunctionsdemo:v2
          env:
            - name: AzureFunctionsJobHost__functions__0
              value: Subscriber
          envFrom:
            - secretRef:
                name: k82-cluster
      serviceAccountName: k82-cluster-function-keys-identity-svc-act
---
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: k82-cluster
  namespace: default
  labels:
    deploymentName: k82-cluster
spec:
  scaleTargetRef:
    deploymentName: k82-cluster
  pollingInterval: 20
  cooldownPeriod: 60
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        type: rabbitMQTrigger
        host: RabbitMqConnection
        queueName: k8queue
        name: myQueueItem
Deployment
We need to push the new container version to the container registry, update the cluster manifest, and deploy the container with Azure Functions.
Then use curl to test, observe the results via the RabbitMQ console, and find the processed messages in the Storage Queue.
Summary
The goal was to create a solution that is easy to reproduce and test; going to production may present additional challenges with configuration, security, and service separation in Kubernetes.
But the main point is that it's super easy to do, thanks to KEDA and Microsoft.
Still, I recommend considering the Kafka message broker for production, because the RabbitMQ function trigger hasn't reached general availability yet (I hope it will in a few months). The Azure Functions RabbitMQ binding is still in beta, though it works well.
Lifting Azure SQL to an on-premises Linux container is the next step.