Part 12 — HumanGov Application — Ansible-2: Playbooks, Variables, Conditionals, Loops, Roles, and Dynamic Inventory

Cansu Tekin
17 min read · Jan 30, 2024


The HumanGov Application is a Human Resources Management Cloud SaaS application for the Department of Education across all 50 states in the US. Check Part 11 to catch up on the project series. In the first 10 parts we focused on Terraform. Now we are going to introduce Ansible and keep improving the application architecture with DevOps tools in the following sections.

In the following parts of this project series, we are going to transition the architecture from a traditional virtual machine architecture to a modern container-based architecture using Docker containers and Kubernetes running on AWS. In addition, we will be responsible for automating the complete software delivery process with CI/CD pipelines using AWS services such as AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Finally, we will learn how to monitor and observe the cloud environment in real time using tools such as Prometheus and Grafana, and automate one-off cloud tasks using Python and the AWS SDK.

Ansible: Playbooks

A playbook is a collection of plays, and a play is a set of instructions. Playbooks are written in YAML and contain a series of tasks to be executed sequentially on remote hosts. They can be used for configuration management, application deployment, and various automation tasks. Ansible, Docker Compose, Kubernetes, and many other tools use YAML files. YAML is a human-readable data serialization language, commonly used for configuration files. Its main structures are key-value pairs, lists, and dictionaries. It is case-sensitive and uses 2 spaces for indentation.

# Key-value pair
hosts: webservers

# List (ordered)
loop:            # key
  - package1     # value
  - package2     # value
  - package3     # value

# Dictionary (unordered)
firewall_rules:  # key
  8080: tcp      # key: value
  22: tcp        # key: value
Ansible Playbook example with one play


Step 1: Provisioning host01 & host02 using Terraform

We will keep using AWS Cloud9. Go to AWS Services and open Cloud9.

We will provision infrastructure using Terraform and perform some tasks on it using Ansible. Go to the Cloud9 ansible-tasks folder from the previous project and create a main.tf file. Check the previous parts of the series to learn more about Terraform files; here we will focus on Ansible and not cover Terraform in detail.

main.tf

provider "aws" {
region = "us-east-1"
}

# We will use Red Hat Enterprise Linux AMI for both instances
resource "aws_instance" "host01" {
ami = "ami-023c11a32b0207432"
instance_type = "t2.micro"
key_name = "tcb-ansible-key"
vpc_security_group_ids = [aws_security_group.secgroup.id]

# Each time when we try to connect to the hosts with Ansible, we should confirm connection
# Instead of doing this each time we can go to known_hosts file and add our hosts here.
# We also give 30 second wait time before running the command. The instance need some
# time to be up and running. Otherwise, the private IP address will not be created.
# You can check running "tail ~/.ssh/known_hosts" command
provisioner "local-exec" {
command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}
}

resource "aws_instance" "host02" {
ami = "ami-023c11a32b0207432"
instance_type = "t2.micro"
key_name = "tcb-ansible-key"
vpc_security_group_ids = [aws_security_group.secgroup.id]

provisioner "local-exec" {
command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}
}

resource "aws_security_group" "secgroup" {

# To allow SSH connectivity
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # open connection only to the IP adresses you trust in your production environment
}
# To allow HTTP connectivity
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # open connection only to the IP adresses you trust in your production environment
}

# Allow connection from Cloud9 instance,
# go to Cloud9 security group and grap its ID
ingress {
from_port = 0
to_port = 0
protocol = "-1"
security_groups = ["<YOUR_SEC_GROUP_ID>"]
}

# To allow the trafic from the EC2 instance to Inthernet
# to connect to the Linux Repository to download the software
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

}
# We need private IP address' of the instances to establish Ansible connection
# to these instances
output "host01_private_ip" {
value = aws_instance.host01.private_ip
}

output "host02_private_ip" {
value = aws_instance.host02.private_ip
}

Here is how to find the Cloud9 EC2 instance's security group ID: in the EC2 console, select the Cloud9 instance and copy the security group ID from the Security tab.

Run Terraform:

terraform init
terraform plan
terraform apply

Open our Ansible inventory file from Part 11 and update the hostnames and host private IP addresses with the outputs of terraform apply.

Step 2: Test ping communication and SSH connectivity from the Cloud9 control node to created hosts; host01 and host02
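A minimal sketch of this test, assuming the inventory file is named hosts as in Part 11 (the private IP is a placeholder; use your terraform output):

# Ping both hosts through Ansible
ansible -i hosts all -m ping

# Optionally, test raw SSH connectivity to one of the hosts
ssh -i tcb-ansible-key.cer ec2-user@<HOST01_PRIVATE_IP>

If both hosts answer the ping module with "pong", the control node can reach them.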

Step 3: Installing nginx via ad-hoc commands (RHEL Linux Distribution)

# Update and Install 'nginx':
ansible -i hosts all -m yum -a "update_cache=yes name=nginx state=latest" -b

# Check 'nginx' service status:
ansible -i hosts all -m shell -a "systemctl status nginx"

# Start 'nginx':
ansible -i hosts all -m shell -a "systemctl start nginx" -b

Step 4: Creating a Playbook file install-webserver.yml to install nginx

Let’s repeat the same task using the Ansible playbook. Ad-hoc commands are suitable for simple tasks, but playbooks are better for managing complex scenarios. First, we need to remove the nginx we installed in the previous step.

# Remove 'nginx':
ansible -i hosts all -m yum -a "name=nginx state=absent" -b

Inside the ansible-tasks folder, create a file named install-webserver.yml:

- name: Installing & starting nginx
hosts: all
become: yes
tasks:
- name: Installing nginx
yum:
update_cache: yes
name: nginx
state: latest

- name: Starting nginx
shell: systemctl start nginx

- name: Enable the NGINX service during the boot process
service:
name: nginx
enabled: yes

You can find all this information in the Ansible documentation.

Run the Ansible Playbook to install the webserver.

ansible-playbook install-webserver.yml

Check the public IP addresses of the hosts to see if nginx is installed.
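If you prefer the terminal, a quick check (assuming the security group allows HTTP on port 80 as configured above; the IP is a placeholder):

curl http://<HOST_PUBLIC_IP>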

If you plan to continue the hands-on later, you can stop the instances now and restart them from the AWS dashboard later, or destroy the environment with Terraform:

terraform destroy

Ansible: Variables

Variables are used to store and reference values throughout playbooks, templates, and other configuration files. We can define our variables in two places: the INI inventory file (key=value) and a YAML variables file (key: value). We can use Ansible's predefined variables or create new ones.

Source: The Cloud Bootcamp Platform

The ansible_host, ansible_connection, and ansible_user variables are predefined by Ansible, while http_port and db_port are new ones.
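A sketch of such an inventory line (the values here are placeholders, not from this project's inventory):

host01 ansible_host=172.31.20.236 ansible_connection=ssh ansible_user=ec2-user http_port=80 db_port=5432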

We are already using variables in our host inventory file from the previous section.

The host inventory file:

host01 ansible_host=172.31.20.236 ansible_user=ec2-user
host02 ansible_host=172.31.31.243 ansible_user=ec2-user

# Variables defined for all servers: host01 and host02
[all:vars]
ansible_ssh_private_key_file=/home/ec2-user/environment/ansible-tasks/tcb-ansible-key.cer

[webservers]
host01

Step 1: Create a new Ansible playbook named variables-example.yml

Go to Cloud9 and create a new Ansible playbook named variables-example.yml under the ansible-tasks folder we have been using.

- name: Example Playbook
  hosts: localhost # Instead of connecting to a remote EC2 host, Ansible will run locally on the Cloud9 control node
  vars: # Where we define variables
    http_port: 80
    https_port: 443

    packages: # Defining a variable as a list
      - git
      - mysql-client
      - curl
      - wget

    appserver: # Defining variables as a dictionary
      hostname: webapp01
      ipaddress: 192.168.1.202
      os: Windows Server 2019

  tasks:
    - name: Display the single variable
      debug: # the module to display a variable's value
        var: http_port, https_port

    - name: Display the list variable
      debug:
        var: packages

    - name: Display the dictionary variable
      debug:
        var: appserver

Step 2: Run this Ansible playbook on the Cloud9 console

 ansible-playbook variables-example.yml 

Ansible: Conditionals

Conditionals allow you to control the flow of your playbooks based on certain conditions. As an example, we are going to install software on a couple of EC2 instances at the same time; however, each instance has a different operating system. We should apply specific settings for each operating system, and we will use Ansible conditionals to apply them automatically.

We already have a Terraform infrastructure with two Red Hat EC2 instances. We are going to add one more EC2 instance to this infrastructure with a different operating system, Debian.

Step 1: Update the Terraform main.tf file to create a Debian EC2 instance

Provision the infrastructure using Terraform and update the inventory file with the private IPs accordingly. Add the code below to the main.tf file.

resource "aws_instance" "host03" {
ami = "ami-058bd2d568351da34" # may change, copy yours from AWS Debian Instance AMI
instance_type = "t2.micro"
key_name = "tcb-ansible-key"
vpc_security_group_ids = [aws_security_group.secgroup.id]

provisioner "local-exec" {
command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}

}

output "host03_private_ip" {
value = aws_instance.host03.private_ip
}

Run Terraform

terraform apply -auto-approve

Go to the host inventory file and update the private IP addresses and users with the new ones, as sketched below.
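A sketch of the updated inventory, assuming the official Debian AMI whose default SSH user is admin (replace the placeholder IPs with your terraform outputs):

host01 ansible_host=<HOST01_PRIVATE_IP> ansible_user=ec2-user
host02 ansible_host=<HOST02_PRIVATE_IP> ansible_user=ec2-user
host03 ansible_host=<HOST03_PRIVATE_IP> ansible_user=admin

[all:vars]
ansible_ssh_private_key_file=/home/ec2-user/environment/ansible-tasks/tcb-ansible-key.cer

[webservers]
host01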

Step 2: Check if the connection to the hosts is working

ansible all -m ping 

Useful Ansible Ad hoc commands for exploring Ansible Facts:

ansible host01 -m gather_facts # gather all information for host01
ansible host01 -m setup # gather all information for host01

ansible host01 -m setup -a "filter=ansible_distribution" # get the distribution information host01
ansible host02 -m setup -a "filter=ansible_distribution" # get the distribution information host02
ansible host03 -m setup -a "filter=ansible_distribution" # get the distribution information host03

ansible host01 -m setup -a "filter=ansible_python_version" # get the ansible python version host01
ansible host02 -m setup -a "filter=ansible_python_version" # get the ansible python version host02
ansible host03 -m setup -a "filter=ansible_python_version" # get the ansible python version host03

ansible all -m setup | grep -e 'ansible_os_family\|ansible_python_version\|ansible_pkg_mgr'

Step 3: Install Apache on different EC2 instances using conditionals

Let’s create a new playbook named install-webserver-conditional.yml to install Apache on different EC2 instances using conditionals.

- name: Installing Apache
  hosts: all
  become: yes # We need to be the root user to install software
  tasks:
    - name: Setup Apache - Debian # webserver for Debian
      apt: # We use the apt module to install packages on Debian
        update_cache: yes
        name: apache2
        state: present
      when: ansible_distribution == 'Debian' # Reads the ansible_distribution variable from Ansible Facts; installs Apache only if it equals Debian

    - name: Setup Apache - RHEL
      yum: # We use the yum module to install packages on Red Hat
        name: httpd # Apache webserver for Red Hat
        state: present
      when: ansible_distribution == 'RedHat' # Installs Apache only if ansible_distribution equals RedHat

Step 4: Run the Ansible playbook on Cloud9

ansible-playbook install-webserver-conditional.yml 

As you can see, host01 and host02 are skipped while host03 is changed for the task Setup Apache - Debian. Conversely, host03 is skipped for the task Setup Apache - RHEL while host01 and host02 are changed by installing Apache, based on the conditionals specified in the playbook.

When we run the playbook again, nothing will change because of Ansible's idempotency feature.

Most Ansible modules are idempotent, with notable exceptions such as the shell and command modules.

# We did not add a task to the playbook to start the webserver, so the first command will fail
ansible host01 -m shell -a "systemctl status httpd" -b
# Start the webserver for host01
ansible host01 -m shell -a "systemctl start httpd" -b
ansible host02 -m shell -a "systemctl start httpd" -b

As you can see, when we use the shell module the state is reported as changed. If we run the shell start command again and again, its status will show changed every time, even though Apache is already running; idempotency does not apply to the shell module. Consider using the service module instead, as sketched below.
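A minimal sketch with the service module (state=started only changes anything when the service is not yet running, so repeated runs report ok rather than changed):

# Start Apache idempotently on both RHEL hosts
ansible host01 -m service -a "name=httpd state=started" -b
ansible host02 -m service -a "name=httpd state=started" -b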

Step 5: Check whether one of the hosts is up and running

Go to AWS EC2 instances, pick one of the hosts, copy its public IP address, and paste it into your browser to see if it is up and running after starting the webserver with the shell module as above.

It is working!

Step 6: Create a sample index.html file to create a simple webpage

<!DOCTYPE html>
<html>
<head>
<style>
body {
background-color: #000000;
color: #ffffff;
font-family: Arial, sans-serif;
}

.container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
flex-direction: column;
text-align: center;
}

img {
max-width: 300px;
margin-bottom: 20px;
}
</style>
</head>
<body>
<div class="container">
<img src="https://i.pinimg.com/originals/80/b9/59/80b959e1089aecb9bf3376a777032265.jpg" alt="Your Image">
<h1>Congratulations!</h1>
<p>Keep up the good work!</p>
</div>
</body>
</html>

Copy this file to host02:

ansible host02 -m copy -a "src=index.html dest=/var/www/html" -b

Refresh your browser.

Step 7: Remove the infrastructure if you are done and will continue the hands-on later

terraform destroy -auto-approve

Ansible: Loops

In Ansible, loops are used to iterate over a set of values, such as a list or dictionary, and perform tasks repeatedly.

Step 1: Create a new Ansible Playbook create-files-folders.yml

When we run the create-files-folders.yml playbook, it will connect to each instance we created inside AWS, create a folder, and create a file inside this folder.

First, provision the infrastructure using Terraform again if you destroyed it in the previous section, and update the inventory file with the private IPs accordingly. A starting point for the playbook is sketched below; we will extend it with loops next.
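A minimal sketch of an initial create-files-folders.yml that creates a single folder with a file in it (the folder01/file01 names are an assumption matching the pattern used later in this section):

- hosts: all
  tasks:
    - name: Creating Folder
      file:
        path: /home/{{ ansible_user }}/folder01
        state: directory

    - name: Creating File
      file:
        path: /home/{{ ansible_user }}/folder01/file01
        state: touch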

Run the Ansible Playbook:

ansible-playbook create-files-folders.yml

Step 2: Adjust the playbook to create multiple folders using “with_items” or “loop” directives

- hosts: all
  tasks:
    - name: Creating Folder
      file:
        path: /home/{{ ansible_user }}/{{ item }}
        state: directory
      with_items: # or "loop:"
        - folder01
        - folder02
        - folder03

We can create all folders using the “loop” directive instead of “with_items” as well.

Run the Ansible Playbook:

ansible-playbook create-files-folders.yml

Ansible connected to each host and created only the folders that were not already there. As you remember, we created one folder on each host in the previous step.

Let’s go and check the created folders. Open a new terminal, connect remotely to host01, and list the contents of the directory.

Cloud9 -> Window -> New Terminal

ansible host01 -m shell -a "ls /home/ec2-user/" 

Our folders; folder01, folder02, and folder03 are here.

We can delete folders by simply changing the state from directory to absent.

Make the change and run the Ansible playbook again, then check host01 to confirm the folders are removed.

ansible-playbook create-files-folders.yml

Step 3: Create a file inside each folder

- hosts: all
  tasks:
    - name: Creating Folder
      file:
        path: /home/{{ ansible_user }}/{{ item }}
        state: directory # state: absent
      loop:
        - folder01
        - folder02
        - folder03

    - name: Creating files
      file:
        path: /home/{{ ansible_user }}/{{ item.dir }}/{{ item.file }}
        state: touch # state: absent
      with_items:
        - { dir: "folder01", file: "file01" }
        - { dir: "folder02", file: "file02" }
        - { dir: "folder03", file: "file03" }

Run the Ansible Playbook:

ansible-playbook create-files-folders.yml

From the other terminal, check whether the file was created on host01.

 ansible host01 -m shell -a "ls /home/ec2-user/folder01/" 

Set both states in the playbook to absent and run it again to remove all the files and folders you created.

Step 4: Destroy the environment with Terraform if you are done and will continue the hands-on later

terraform destroy -auto-approve

Ansible: Roles

Ansible roles, similar to modules in Terraform, are a way to organize and structure your playbooks, making them more modular, reusable, and shareable across different projects. Roles are organized pieces of configuration (tasks, variables, and handlers) that we can call from a main playbook. The basic components of an Ansible role are:

Defaults: Includes default values for the variables.

Files: Configuration files, scripts, or any other files needed by your application or system that should be transferred to the host.

Handlers: Tasks that will be triggered by other tasks like restarting a service or reloading configurations.

Meta: Manages role metadata such as dependencies.

Tasks: The main list of tasks to be run by the role.

Templates: Used to generate configuration files dynamically using Jinja2 templating language.

Vars: Contains variables used by the role. Values defined here override the default values set for the same variables in the Defaults directory.

If you do not create a directory for roles, the default location is /etc/ansible/roles. You can change this via the roles_path setting in /etc/ansible/ansible.cfg. You can read more about roles in the Ansible documentation.
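For example, a minimal sketch of that setting (the path is an assumption matching this project's layout):

[defaults]
roles_path = /home/ec2-user/environment/ansible-tasks/roles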

Step 1: Provision the infrastructure using Terraform and update the inventory file with Private IP accordingly

Open AWS Cloud9. If you destroyed the infrastructure, provision it again and update the private IPs as we did before, and also update the webservers group as shown below.

We have a webservers group with host01 and host02. We are going to create and configure a role with all the required files to perform the webserver configuration task; we can then reuse this role whenever we need to perform the same task.
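The webservers group in the inventory should now contain both hosts:

[webservers]
host01
host02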

First, check if everything is working:

ansible webservers -m ping

Step 2: Create a new Ansible Role

We could create the role directory structure manually under our ansible-tasks directory. Alternatively, we can use ansible-galaxy, which will create a roles directory and initialize a role inside it. Inside the roles folder, it will create a webserver folder containing the whole role directory structure we discussed before.


ansible-galaxy role init roles/webserver

As you can see, it created a roles folder with the role structure in it; the files inside the folders are mostly empty. We will update this role based on our preferences.
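The generated layout looks roughly like this (newer Ansible versions also add a tests directory and a README.md):

roles/webserver/
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
└── vars/main.yml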

Step 3: Update the files folder by uploading the image devops.png to the roles/webserver/files folder

We are going to create a webpage and this page will use the image we uploaded to the files directory.

We will use this image; if you want, you can use something else. Go to the files directory and upload the image from your local machine.

Step 4: Update the file roles/webserver/handlers/main.yml and the file roles/webserver/defaults/main.yml

We can execute a handler whenever a task completes with a state change. We will define a handler that restarts Apache whenever the index.html file serving our webpage is updated. We will create the Restarting Apache handler here and call it later from the role's main tasks file.

We will also create a cloud_provider variable and set its default value to AWS in roles/webserver/defaults/main.yml. Both files are sketched below.
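A minimal sketch of both files, assuming the handler name Restarting Apache that the tasks file references later:

# roles/webserver/handlers/main.yml
- name: Restarting Apache
  service:
    name: httpd
    state: restarted

# roles/webserver/defaults/main.yml
cloud_provider: AWS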

Step 5: Create a template file index.html.j2 under roles/webserver/templates

Right-click on the templates folder and create an index.html.j2 file in it. By convention, Jinja2 template files use the .j2 extension.

This role will deploy a webserver. We will push an index.html file to the webserver so it serves a webpage. Some of the content on this webpage will be dynamic: the template uses the role variables we defined in the previous step.

<!DOCTYPE html>
<html>
<head>
<style>
body {
background-color: #000;
color: #fff;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
font-family: Arial, sans-serif;
}

.content {
text-align: center;
}
</style>
</head>
<body>
<div class="content">
<b> Hello!</b>

<h1> This is a webserver running in an EC2 Instance: {{ ansible_hostname }}</h1>
<h4> Powered by {{ cloud_provider }}</h4>

<img src="devops.png" alt="DevOps Image">
</div>
</body>
</html>

{{ ansible_hostname }} grabs the value of the ansible_hostname variable from Ansible facts.

{{ cloud_provider }} grabs the value of the cloud_provider variable from the file roles/webserver/defaults/main.yml we defined before.

Finally, this template uses the devops.png image we uploaded to the files directory.

Step 6: Update the file roles/webserver/tasks/main.yml

These are the tasks to be performed by the role. We will define them as we did in the playbooks before. You already know the Installing Apache and Starting Apache tasks.

- name: Installing Apache
  yum:
    name: httpd
    state: present

- name: Starting Apache
  service:
    name: httpd
    state: started
    enabled: true

- name: Copying files
  copy:
    src: devops.png
    dest: /var/www/html/ # Root directory of the Apache webserver

- name: Generating Template
  template:
    src: index.html.j2
    dest: /var/www/html/index.html # This template will generate an index.html file
  notify:
    - Restarting Apache

In the Copying files task, we copy the image from the role's files directory to the root directory of the Apache webserver.

The Generating Template task generates an index.html file at the defined destination, the Apache root directory, using the index.html.j2 template. While generating this file, it replaces the dynamic variables (ansible_hostname and cloud_provider) in index.html.j2 with their defined values. After that, we need to restart Apache. Instead of defining a new task, we can simply use a handler here, because we want to restart Apache only when changes are applied to the index.html file. Go to the handlers/main.yml file and copy the name of the handler we defined before for this purpose.

Step 7: Create a main playbook setup-webserver-roles.yml to execute the role

We should create the playbook outside the roles folder. Create a file named setup-webserver-roles.yml inside the ansible-tasks directory.

- name: Setting up Apache as Webserver
  hosts: webservers
  become: true
  roles:
    - webserver

Step 8: Run the Ansible playbook

ansible-playbook setup-webserver-roles.yml

Step 9: Test the website access using the host02 public IP address

Well Done!

Step 10: Destroy the infrastructure using Terraform

cd ansible-tasks
terraform destroy -auto-approve

CONGRATULATIONS!
