Automating Kubernetes Deployments Across Multiple Clusters Using Ansible

Vinisha Kurapati
3 min read · Sep 29, 2024


Managing Kubernetes deployments across multiple clusters can be a tedious task, especially when you have to update the configuration files, change image tags, and apply the new configurations manually. What if you could automate the whole process for multiple clusters in one go? In this article, I’ll walk you through how to use Ansible to automate the process of updating Kubernetes deployments across two clusters.

We’ll create an Ansible playbook that will:

• Update deployment.yaml files.

• Replace image tags and update version labels.

• Apply the updated configurations to each cluster with kubectl.

Why Use Ansible for Kubernetes?

Ansible is a simple yet powerful tool for automating configuration management, application deployment, and task automation. It’s especially helpful when managing multiple systems or clusters as it allows you to automate the same tasks across different environments in a repeatable and scalable manner.

Scenario Overview

Imagine you have two Kubernetes clusters, each with a master node:

• Cluster 1 has its master node at 192.0.0.1

• Cluster 2 has its master node at 192.0.0.2

Both clusters run multiple microservices, each with a deployment.yaml file that defines the image tag, app version, and other configurations. Manually updating each microservice’s configuration in both clusters would be inefficient, especially if you are handling multiple services across environments.

Instead, we will use Ansible to:

• SSH into each cluster’s master node.

• Update the deployment.yaml files for services like sample1 and sample2.

• Change the image tags and update the app.kubernetes.io/version label.

• Apply the changes to the Kubernetes cluster using kubectl.
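For reference, here is a minimal example of the kind of deployment.yaml the playbook will edit. This is a hypothetical manifest for application-1 (your real files will differ); it just shows the two fields we will be rewriting, the image tag and the app.kubernetes.io/version label:

# Hypothetical manifest for application-1; real files will differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-1
  labels:
    app.kubernetes.io/name: application-1
    app.kubernetes.io/version: "1.0.1"
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: application-1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: application-1
    spec:
      containers:
        - name: application-1
          image: registry.example.com/application-1:1.0.1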

Prerequisites

• Ansible Installed: Ensure Ansible is installed on your local machine. You can install it via pip:

pip install ansible

• Passwordless SSH Access: Ensure you can SSH into both master nodes without needing a password. You can set this up by copying your SSH key to the remote hosts using ssh-copy-id:

ssh-copy-id user@192.0.0.1
ssh-copy-id user@192.0.0.2

• Kubernetes and kubectl configured on the master nodes.
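Before moving on, it's worth sanity-checking this setup. The commands below assume the SSH user is ubuntu (matching the inventory we'll define next); adjust them to your environment:

ansible --version
ssh ubuntu@192.0.0.1 kubectl get nodes
ssh ubuntu@192.0.0.2 kubectl get nodes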

Step 1: Define the Inventory File

We first need to create an inventory file that specifies the two clusters. This file tells Ansible where the master nodes are and how to connect to them.

Create a file named inventory.ini:

[cluster1]
master1 ansible_host=192.0.0.1 ansible_user=ubuntu

[cluster2]
master2 ansible_host=192.0.0.2 ansible_user=ubuntu

• master1 and master2 are the aliases for the master nodes.

• ansible_host is the IP address of the master node.

• ansible_user is the SSH user that will be used to connect to the node.
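With the inventory in place, you can confirm that Ansible can reach both masters using the built-in ping module:

ansible -i inventory.ini all -m ping

If both hosts respond with "pong", SSH access and the inventory are configured correctly.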

Step 2: Write the Ansible Playbook

Next, we create the Ansible playbook that will handle the actual updates. This playbook will:

1. Copy the local deployment.yaml files to the appropriate directories on the master nodes.

2. Update the image tags and app.kubernetes.io/version labels inside the deployment.yaml files.

3. Apply the updated deployment files using kubectl.

Here’s the playbook (update_deployments.yml):

---
- name: Update Kubernetes Deployments on Two Clusters
  hosts: all
  become: yes
  vars:
    # Target version for each service, used for both the image tag
    # and the app.kubernetes.io/version label.
    services:
      application-1: "1.0.2"
      application-2: "1.0.2"
  tasks:
    - name: Copy the local deployment files to the master nodes
      copy:
        src: /path/to/local/{{ item }}/deployment.yaml
        dest: /home/{{ ansible_user }}/{{ item }}/deployment.yaml
        mode: '0644'
      with_items:
        - application-1
        - application-2
      tags:
        - copy

    - name: Update the image tag in the deployment.yaml files
      lineinfile:
        path: "/home/{{ ansible_user }}/{{ item }}/deployment.yaml"
        # Match the (indented) image line, capture everything up to the
        # final colon, and replace the tag after it.
        regexp: '^(\s*image: .+):.+$'
        line: '\1:{{ services[item] }}'
        backrefs: yes
      with_items:
        - application-1
        - application-2
      tags:
        - update

    - name: Update the app.kubernetes.io/version label in the deployment.yaml files
      lineinfile:
        path: "/home/{{ ansible_user }}/{{ item }}/deployment.yaml"
        regexp: '^(\s*app\.kubernetes\.io/version: ).+$'
        line: '\1"{{ services[item] }}"'
        backrefs: yes
      with_items:
        - application-1
        - application-2
      tags:
        - update

    - name: Apply Kubernetes deployments
      command: "kubectl apply -f /home/{{ ansible_user }}/{{ item }}/deployment.yaml"
      with_items:
        - application-1
        - application-2
      tags:
        - apply
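A note on the last task: shelling out to kubectl keeps the example simple, but Ansible also provides the kubernetes.core.k8s module, which applies manifests through the Kubernetes API and reports accurate changed/unchanged status. As a sketch (assuming the kubernetes Python package is installed on the master nodes), the apply task could instead look like this:

- name: Apply Kubernetes deployments via the k8s module
  kubernetes.core.k8s:
    state: present
    src: "/home/{{ ansible_user }}/{{ item }}/deployment.yaml"
  with_items:
    - application-1
    - application-2
  tags:
    - apply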

Step 3: Run the Ansible Playbook

After defining the inventory file and the playbook, run the playbook using the following command:

ansible-playbook -i inventory.ini update_deployments.yml
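Since each task in the playbook is tagged, you can also run a subset of the workflow. For example, to re-run only the file edits without copying or applying:

ansible-playbook -i inventory.ini update_deployments.yml --tags update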

How It Works

• SSH into the master nodes: Ansible connects to both master1 and master2 using the SSH user (ubuntu in this case) and the provided IP addresses.

• Update deployment files: The playbook copies the deployment.yaml files to the master nodes, updates the image tags and app.kubernetes.io/version labels to the new versions, and finally applies the updated deployments using kubectl apply.
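To verify that the rollouts succeeded, you can check the deployment status on each master node with kubectl. This assumes the Deployment objects are named after the services, as in the hypothetical manifest shown earlier:

kubectl rollout status deployment/application-1
kubectl rollout status deployment/application-2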

Conclusion

By automating the process with Ansible, you can manage multiple Kubernetes clusters in one go, ensuring consistency and reducing the chance of human error. This approach can be extended to more clusters or services by simply adjusting the inventory and playbook configurations.
