Kubernetes pod autoscaling in response to changes in a RabbitMQ queue.
In this article, I will show you how to scale a Kubernetes deployment in response to changes in the load on a RabbitMQ queue. RabbitMQ is an open-source message broker.
When you need to set up autoscaling in a Kubernetes cluster based on CPU or memory utilization, it is straightforward: both are metrics natively supported by the Horizontal Pod Autoscaler (HPA), which automatically scales a deployment up and down based on the observed metric. The problem is that HPA does not scale directly on a custom metric out of the box. So how can we scale on a custom metric such as queue length?
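For comparison, this is what the built-in path looks like: a minimal HPA manifest (API version autoscaling/v2) that scales a deployment on CPU utilization. The deployment name my-app and the numbers are illustrative.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # illustrative target deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale when average CPU exceeds 70%
```

Nothing comparable exists natively for RabbitMQ queue depth, which is what the rest of the article addresses.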
How will we achieve autoscaling for a RabbitMQ metric?
- A deployment (say, k8-rabbit-pod-autoscaler) runs in your Kubernetes cluster and watches the given RabbitMQ queue.
- k8-rabbit-pod-autoscaler is responsible for scaling the target deployment in and out.
- It lets you set the polling interval for the RabbitMQ queue and decide the number of messages per pod.
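The scaling decision behind these steps is simple arithmetic: divide the queue backlog by the messages-per-pod setting and clamp the result to a replica range. A minimal sketch in Python (the actual autoscaler implements this in autoscale.sh; the function name and parameters here are illustrative):

```python
import math

def desired_replicas(queue_length: int, messages_per_pod: int,
                     min_pods: int, max_pods: int) -> int:
    """Compute the replica count for a given queue backlog.

    Each pod is assumed to handle `messages_per_pod` messages; the
    result is clamped to the configured [min_pods, max_pods] range.
    """
    wanted = math.ceil(queue_length / messages_per_pod)
    return max(min_pods, min(max_pods, wanted))

# e.g. 250 queued messages at 100 messages per pod -> 3 replicas
print(desired_replicas(250, 100, min_pods=1, max_pods=10))
```

Clamping matters: an empty queue should still leave the minimum replicas running, and a sudden spike should not scale past what the cluster can schedule.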
I have used https://github.com/onfido/k8s-rabbit-pod-autoscaler as the base configuration and modified the autoscale.sh script to accept a custom virtual host. You can find the forked, updated repository at https://github.com/prasvats/k8s-rabbit-pod-autoscaler
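The custom virtual host matters because the RabbitMQ management API embeds the vhost in the queue URL, and the vhost segment must be percent-encoded — the default vhost "/" is sent as "%2F". A small illustration (the helper name is mine, not from the repo):

```python
from urllib.parse import quote

def queue_api_path(vhost: str, queue: str) -> str:
    """Build the RabbitMQ management API path for a queue.

    The vhost segment must be fully percent-encoded, so the
    default vhost "/" becomes "%2F" in the URL.
    """
    return f"/api/queues/{quote(vhost, safe='')}/{quote(queue, safe='')}"

print(queue_api_path("/", "jobs"))         # /api/queues/%2F/jobs
print(queue_api_path("my-vhost", "jobs"))  # /api/queues/my-vhost/jobs
```

The autoscaler polls this endpoint on the management port (15672 by default) and reads the queue's message count from the JSON response.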
You need to build a Docker image from the given Dockerfile and use deploy.yaml to create the service account and the deployment.
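As a rough sketch, the deployment in deploy.yaml wires the script's settings in through environment variables. The variable names and values below are illustrative placeholders, not necessarily the repository's actual names — check its README for the real configuration keys.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8-rabbit-pod-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8-rabbit-pod-autoscaler
  template:
    metadata:
      labels:
        app: k8-rabbit-pod-autoscaler
    spec:
      serviceAccountName: k8-rabbit-pod-autoscaler
      containers:
        - name: autoscaler
          image: your-registry/k8-rabbit-pod-autoscaler:latest  # image you built from the Dockerfile
          env:
            - name: INTERVAL            # polling interval in seconds (illustrative name)
              value: "30"
            - name: RABBIT_HOST         # RabbitMQ management API host (illustrative name)
              value: rabbitmq.default.svc.cluster.local
```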
Here we create a cluster role with the required access to deployments in your Kubernetes cluster. This cluster role is bound to the service account used by the autoscaler deployment, so that k8-rabbit-pod-autoscaler has the permissions it needs to perform scaling actions on the target deployment.
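A minimal sketch of that RBAC wiring, assuming the service account is named k8-rabbit-pod-autoscaler and lives in the default namespace (adjust names to match your deploy.yaml): a ClusterRole that can read and scale deployments, bound to the autoscaler's service account.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8-rabbit-pod-autoscaler
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8-rabbit-pod-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8-rabbit-pod-autoscaler
subjects:
  - kind: ServiceAccount
    name: k8-rabbit-pod-autoscaler
    namespace: default   # assumed namespace; match your deployment
```

Granting only get/list/watch plus update/patch on deployments keeps the autoscaler from having broader cluster access than it needs.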
Questions and comments are welcome. Please clap and share if this helped you.
A DevOps Guy