Configuration and Change Management with Packer and Ansible Deploying to AWS.

Ssemaganda Victor
12 min read · May 18, 2018


In this article we are going to look at Packer and Ansible and how these tools are used to automate the application of configuration states, also known as configuration and change management.
Configuration sits at the very core of DevOps. Through configuration we are able to organise and link all the resources used to do DevOps work. But what is it really?

Configuration Management (CM) can be defined as the process of systematically handling changes to a system in a way that maintains consistency among the assets in that system over time. Its purpose is to identify and track individual configuration items, documenting their functional capabilities as well as their interdependencies, and to figure out how best to exploit them.

Change and configuration management are complementary. Change management’s primary goal is to control changes and ensure that only authorised changes are made to a configuration item, so as to mitigate the risk and impact of new changes on the resources. Using tools, we are able to automate many of these processes, which saves a lot of time and greatly lowers the opportunity for human error, given the reduced direct interaction between your engineers and the systems.

Integrating Change Management into your processes involves the following steps:

  • All changes must be approved through a single avenue. This ensures that before changes are made you are able to take into consideration what other changes have already been made, how they connect with the new changes, and what effects the changes might have on the entire ecosystem.
  • As a team of developers and operations engineers, there needs to be a level of transparency regarding each other’s work. If one arm doesn’t know what the other is doing and yet they are working on the same project, there are bound to be conflicts. Knowing what each team is working on allows them to reach common ground.
  • Categorise all the possible changes that your organisation makes in terms of required resources, level of urgency, effect on the ecosystem and so forth. This allows you to systematically plan how you will implement them.
  • You must come up with policies and strategies on how to handle the changes. This gives structure to the entire process and minimises the room for error.
  • With all the above established, implement the changes using provisioning tools like Terraform, AWS CloudFormation and Packer, and configuration management tools such as Chef, Puppet, SaltStack and Ansible. We shall be looking more into Packer and Ansible as I use them to deploy a Python/Flask API.

When figuring out a configuration management system for your company, a few things have to be taken into consideration.

  • Configuration management planning: Configuration management must be planned in order to be effective, predictable and repeatable. The plan should detail any specific steps and the extent of their application during the life cycle. It should also identify roles and responsibilities for carrying out configuration management.
  • Configuration identification: Break the work down into smaller, manageable tasks (configuration items), create a unique numbering or referencing system and establish configuration baselines.
  • Configuration control: This ensures that all changes to configuration items are documented. This allows you to figure out how the configuration items are connected.
  • Configuration status accounting: Keeping a record allows you to know the current status of a configuration and to keep track of configuration items throughout their development and operation.
  • Configuration verification and audit: A way to confirm that the configurations made meet the requirements they were meant to fulfil.

Benefits of Configuration Management

  • Through configuration management you can greatly improve an organisation’s ability to identify what needs to be modified to accomplish a change and the consequences of changes.
  • It enables faster recovery when outages happen through the use of tools and the established structures.
  • There is improved service delivery and higher customer satisfaction as a result of the faster delivery of solutions, improvements and recovery.
  • There are potentially lower operational costs due to a better understanding of the total cost of the current IT service model.
  • Improved security as a result of the change authorisation process so changes that could add security flaws can easily be identified and unauthorised changes are also easy to detect.
  • Due to the tracking of all changes, it’s easier to observe the results of a change and in effect enabling more informed business decisions.

Picking configuration management tools

Picking the best set of tools for your team and organisation is integral to the success of adopting a configuration management system. There are many tools on the market each with a different set of features and different complexity levels. When choosing a tool, these are some of the things to take into consideration;

  • Your needs
    It is important to have a comprehensive understanding of the problem you want to solve with the tools. That way, your search is for the tool that solves your problem, not just any CM tool.
  • Inclusivity
    The tools should be inclusive and enable full collaboration between your teams as well as enable them to use their own tools should the need arise.
  • Infrastructure complexity
    Depending on the size of the project or organisation, the level of infrastructure complexity will vary, however it is important to take into consideration aspects like scalability and security, which may not be enforced by the tool.
  • Learning Curve
    The speed at which your team gets comfortable using the tools will depend on the infrastructure requirements as well as the tool’s custom syntax. How long it takes to see a return on investment ultimately depends on how long the team takes to get comfortable with the tools.
  • Cost
    You can only acquire what you can afford. Cost here means money for subscriptions as well as time spent training your team, or the cost of hiring people who already possess the necessary knowledge and skills.

Packer and Ansible together

We are going to use Packer as a provisioner and Ansible for configuration management to create an Amazon Machine Image (AMI) on AWS for a Python/Flask API, as mentioned earlier.

Packer and other provisioning software make it easier to ensure that you are running the same software from development to production without worrying about changes sneaking in.

To get started, install Packer and Ansible on your computer. I used Homebrew to install them on a MacBook; depending on the OS you’re working with, the commands might vary.
Run these commands to install:
brew install ansible
brew install packer

Next we will create the packer template.

Packer template — This is a JSON file that defines one or more builds by configuring the various components of Packer. Packer reads the template and uses those settings to create multiple machine images in parallel. In our case we will only be configuring one image, so we will not get to see the parallel creation in action. Perhaps I’ll come back to this article and include it when I’m not pressed for time.

The template

{
  "variables": {
    "access_key": "{{env `aws_access_key`}}",
    "secret_key": "{{env `aws_secret_key`}}",
    "ami_id": "{{env `ami_id`}}",
    "region": "{{env `region`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `region`}}",
      "access_key": "{{user `access_key`}}",
      "secret_key": "{{user `secret_key`}}",
      "source_ami": "{{user `ami_id`}}",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "cp3-17-{{isotime | clean_ami_name}}",
      "ami_description": "CP3 ansible packer AMI with Ubuntu 16.04 instance",
      "tags": {
        "role": "python-api-17-12-17"
      },
      "run_tags": {
        "role": "buildSystem"
      }
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./pacsible-playbook.yml"
    }
  ]
}

variables: This is where you define custom user variables; in this case the ami_id, the region, and the Amazon Web Services (AWS) access key and secret key. The AWS keys can be found on AWS under the IAM user’s Security Credentials. Add them to your environment variables and they will be picked up by Packer. The ami_id is the id of the source_ami and the region is the region in which the instance should be created. These are passed into the template’s user variables when running the build.
NOTE: Do not explicitly add your AWS credentials to the Packer file if you plan to share it anywhere. These credentials can be used to spin up instances on your account, as you will soon see, so sharing them publicly is a recipe for disaster.
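Putting the pieces together, a typical run looks like the sketch below. The filename packer-template.json is hypothetical, and the exported values are placeholders (the access/secret keys are AWS's documented example values), so substitute your own before building.

```shell
# Placeholder credentials and ids -- replace with your own values.
export aws_access_key="AKIAIOSFODNN7EXAMPLE"
export aws_secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export ami_id="ami-0abc1234def567890"   # hypothetical Ubuntu 16.04 base AMI id
export region="us-west-2"

# Validate first: it catches JSON and field errors before any AWS
# resources are touched. Guarded so this snippet is a no-op where
# Packer is not installed.
if command -v packer >/dev/null; then
  packer validate packer-template.json
  packer build packer-template.json
fi
```

Validation is cheap, so it is worth running on every template change before committing to a full build.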

builders: This controls what base system Packer builds on and follows the settings in this section to create the machine image.
In our builder we have:

  • type tells Packer to create Amazon AMIs backed by EBS volumes for use in EC2.
  • region is the region in which to launch the EC2 instance that creates the AMI.
  • access_key and secret_key, previously mentioned in the variables section, are used to communicate with AWS.
  • source_ami is a pre-existing image used as the base for the image we are going to create.
  • instance_type is the EC2 instance type to use while building the AMI.
  • ami_name is the name that identifies the image. We use the isotime function to pick up the time at the point of creating the image and append it to the name to ensure the name is unique, and the clean_ami_name function to remove any characters from the timestamp that are not allowed in the name.
  • tags are applied to the resulting AMI.
  • run_tags are applied to the instances launched to create the image, for easy identification.
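To make the ami_name pattern concrete, here is a rough plain-shell approximation of what cp3-17-{{isotime | clean_ami_name}} produces: an ISO-8601 UTC timestamp with characters AMI names disallow (such as colons) replaced by hyphens. This is my sketch, not Packer's actual implementation.

```shell
# ISO-8601 UTC timestamp, with ':' swapped for '-' so the result is a
# valid AMI name; AMI names only allow a limited character set.
stamp="$(date -u +%Y-%m-%dT%H:%M:%SZ | tr ':' '-')"
ami_name="cp3-17-${stamp}"
echo "$ami_name"
```

Because the timestamp changes every second, each build gets a unique, sortable image name.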

provisioners: In this section you can have an array of all the components that install and configure software within the running machine before it is turned into a static image. They ensure that the image has all the necessary software. In our provisioners section we only have one.
Each provisioner definition is a JSON object whose keys configure the provisioner. The type key is required and specifies the name of the provisioner to use; in our case this is ansible. The playbook_file key tells Packer which playbook Ansible should execute.
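Since provisioners is an array, Packer runs its entries in order. As a hypothetical illustration (not part of the build in this article), a shell provisioner could refresh the apt cache before handing over to Ansible:

```json
"provisioners": [
  {
    "type": "shell",
    "inline": ["sudo apt-get update"]
  },
  {
    "type": "ansible",
    "playbook_file": "./pacsible-playbook.yml"
  }
]
```

Ordering matters: anything the playbook depends on must be provisioned by an earlier entry.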

The Ansible playbook
The Ansible playbook is a YAML file.

---
- hosts: all
  become: yes
  vars:
    ansible_python_interpreter: "/usr/bin/python3"

  roles:
    - setup
    - app
    - webserver
    - service
    - start

The three hyphens at the top are YAML syntax to indicate the start of a document.

hosts: all tells Ansible to run the play, and gather facts, for all hosts in the inventory.
become — This is a flag to enable privilege escalation. It allows you to ‘become’ another user, different from the user that logged into the machine (the remote user). The default privilege escalation user is root.

vars — This refers to variables, which are how differences between systems are handled in Ansible.
ansible_python_interpreter — This points Ansible to the Python executable on the target machine that it should use to run its modules.
roles — Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure.
In our configurations we have five roles, setup, app, webserver, service and start. The roles must be listed in the order in which they should be executed.

Roles

For roles to work you must follow a particular directory structure; roles expect files to live in specific directory names.

roles
|-folder-name-of-role
|--folder-name-of-content
|---main.yml
playbook.yml

In the above structure we have the roles folder. Inside the roles folder is a folder named after the role; the name is up to you, but ideally it should describe what the role implements, and it must match the name you add under the roles section of the playbook YAML file. Inside that folder you have a folder named after the content. This is limited to a set of directory names predefined by Ansible, which includes tasks, handlers, defaults, vars, files, templates and meta. For this implementation we will only be dealing with tasks, the main list of tasks to be executed by the role. You can read about the rest in the Ansible documentation on roles.
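The layout used in this article can be created in one go; a sketch assuming the five roles listed in the playbook, each with only a tasks directory:

```shell
cd "$(mktemp -d)"   # work in a scratch directory for this demo

# Create a tasks directory for each role; -p makes intermediate dirs.
for role in setup app webserver service start; do
  mkdir -p "roles/${role}/tasks"
done

# Each role then gets its task list in roles/<role>/tasks/main.yml.
ls roles
```

Ansible discovers each role's tasks/main.yml automatically from this structure; no extra wiring is needed beyond listing the role in the playbook.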

Roles breakdown

setup

This role adds the app’s prerequisites to the image.

---
- name: Adding python 3.6 repository
  apt_repository:
    repo: ppa:deadsnakes/ppa
    state: present
- name: Installing python 3.6
  apt:
    name: python3.6
    update_cache: yes
- name: Installing python3-pip nginx python3.6-gdbm
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - python3-pip
    - nginx
    - python3.6-gdbm

name — This is the description given to a task and is displayed in the log when that task is run.
apt_repository — This directs Ansible to add or remove a repository from the OS package sources. It is specific to Ubuntu and Debian systems. In this implementation we are adding a Python 3.6 repository.
repo — This is the apt repository specification; here it is the deadsnakes PPA we want to add.
state — Here, present tells Ansible to ensure the repository exists in the package sources.
apt — This is used to manage apt packages. Here it is used to install several packages.
update_cache — This is equivalent to the apt-get update command, which downloads the package lists from the repositories and “updates” them to get information on the newest versions of packages and their dependencies.
with_items — Combining apt and item with with_items is a way to install several packages. It is Ansible’s standard loop for handling repeated tasks as one.
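As an aside, with_items predates the loop keyword introduced in Ansible 2.5; in newer Ansible versions the same task is usually written with loop. A sketch of that equivalent form:

```yaml
- name: Installing python3-pip nginx python3.6-gdbm
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - python3-pip
    - nginx
    - python3.6-gdbm
```

The behaviour is the same; loop is simply the newer, recommended spelling.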

app

This role handles adding the project directory to the image and setting it up.

- name: Create Project directory
  file:
    path: /var/www/project
    state: directory
- name: Clone the application repo
  git:
    repo: https://github.com/Thegaijin/RecipeAPI.git
    dest: /var/www/project/RecipeAPI
    version: api_defence
- name: Install the virtualenv
  command: pip3 install virtualenv
- name: create the virtualenv
  command: virtualenv /var/www/project/env -p python3.6
  args:
    creates: /var/www/project/env
- name: Install the project requirements
  pip:
    requirements: /var/www/project/RecipeAPI/requirements.txt
    virtualenv: /var/www/project/env
- name: create .env file
  shell:
    chdir: /var/www/project/
    creates: .env
    cmd: |
      cat > .env <<EOF
      export SECRET_KEY='wqrtaeysurid6lr7'
      export FLASK_CONFIG=development
      export DATABASE_URL='postgresql://thegaijin:12345678@cp3-db-instance.cjfdylbgjjyu.us-west-2.rds.amazonaws.com:5432/recipe_db'
      EOF

file — This sets attributes of files, symlinks, and directories, or removes them. In our case we are creating a directory, so it sets the attributes of a directory.
path — This tells Ansible the name of the directory and where to create it.
state — This tells Ansible that it should create a directory. When it is directory, as in our case, all intermediate subdirectories will be created if they do not exist.
git — As you might have guessed, this is how Ansible manages git checkouts of repositories to deploy files or software.
dest — This git parameter points to the directory the repository should be cloned into.
version — This git parameter states the branch you want to check out.
command — Runs a command on the remote node. In this implementation I am installing virtualenv using pip3.
creates — This parameter names a file or directory; if it already exists, the command is skipped. It makes the task safe to run repeatedly.
pip — This manages Python dependencies. In this implementation I am using it to install the dependencies in my requirements file.
requirements — This parameter tells pip where to find the list of dependencies, which is the requirements.txt file.
virtualenv — This parameter gives pip the path to the virtualenv directory to install into.
shell — This is similar to command except it runs the commands through a shell on the remote node.
chdir — This parameter changes into the specified directory before running the command.
cmd — What follows this parameter are the commands to be executed in the shell. The | character is YAML’s literal block indicator, which lets the command span several lines. In this implementation it is used to create a .env file containing the environment variables.
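The cat > file <<EOF pattern used in that last task is plain shell; a minimal standalone sketch with placeholder values, writing to a scratch directory, shows how the heredoc lands in the file:

```shell
cd "$(mktemp -d)"   # scratch directory so nothing real is touched

# Everything between <<EOF and the closing EOF is written to .env verbatim.
cat > .env <<EOF
export SECRET_KEY='placeholder-secret'
export FLASK_CONFIG=development
EOF

cat .env
```

The closing EOF must start at the beginning of its line, or the shell keeps reading the heredoc.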

webserver

This role handles the configuration of nginx, which acts as the app’s reverse proxy.

---
- name: Starting nginx on boot
  service:
    name: nginx
    enabled: yes
    state: started
- name: Removing nginx default.conf
  command: rm -rf /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
- name: Adding nginx configuration
  shell:
    chdir: /etc/nginx/sites-available/
    creates: default
    cmd: |
      sudo bash -c 'cat > default <<EOF
      server {
        listen 80;
        location / {
          proxy_pass http://127.0.0.1:8000/;
          proxy_set_header Host \$host;
          proxy_set_header X-Forwarded-Proto \$scheme;
          proxy_set_header X-Real-IP \$remote_addr;
          proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        }
      }
      EOF'
- name: Creating symbolic link
  command: ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/
- name: Reload nginx
  command: systemctl restart nginx

service — This is used to control services on remote hosts. The service in this case is nginx.
enabled — This service parameter sets the service to start on boot.
state — A service parameter with several choices that Ansible enforces as necessary. In this case the choice is started: if nginx is not running, the task will start it.

In this role I start nginx, delete the default configuration files, create a new configuration file through the shell using bash, create a symlink to the new configuration in the nginx sites-enabled folder, and finally restart nginx to load the new configuration.
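Restarting nginx with a raw command works here, but the idiomatic Ansible pattern is a handler that fires only when a task actually changes something. A hedged sketch of that alternative, assuming a hypothetical default.j2 template holding the server block (not the implementation used in this article):

```yaml
# In the role's tasks file:
- name: Adding nginx configuration
  template:
    src: default.j2          # hypothetical Jinja2 template with the server block
    dest: /etc/nginx/sites-available/default
  notify: restart nginx      # handler runs once, only if this task changed

# In the role's handlers/main.yml:
- name: restart nginx
  service:
    name: nginx
    state: restarted
```

With a handler, repeated runs of the playbook leave nginx untouched unless the configuration actually changed.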

service

This role creates a systemd service to start the application and keep it running.

---
- name: Create start script
  shell:
    chdir: /var/www/project/RecipeAPI/
    creates: startenv.sh
    cmd: |
      cat > startenv.sh <<EOF
      #!/bin/bash

      cd /var/www/project/

      source env/bin/activate
      source .env
      cd RecipeAPI

      gunicorn manage:app
      EOF
- name: Create start service
  shell:
    chdir: /etc/systemd/system/
    creates: recipe.service
    cmd: |
      cat > recipe.service <<EOF
      [Unit]
      Description=recipe startup service
      After=network.target
      [Service]
      User=ubuntu
      ExecStart=/bin/bash /var/www/project/RecipeAPI/startenv.sh
      Restart=always
      [Install]
      WantedBy=multi-user.target
      EOF
- name: Change the files permission
  shell: |
    sudo chmod 744 /var/www/project/RecipeAPI/startenv.sh
    sudo chmod 664 /etc/systemd/system/recipe.service

Here I create a start script named startenv.sh using bash through the shell module, as well as a systemd service named recipe.service to which I pass the start script. I then change the file permissions, giving the script read, write and execute permissions for its owner (744), and giving the service file read and write permissions for the owner and group and read permissions for others (664).
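Those octal modes can be verified directly; a small sketch using scratch files and GNU stat (as found on the Ubuntu image being built):

```shell
cd "$(mktemp -d)"   # scratch directory with throwaway stand-in files
touch startenv.sh recipe.service

chmod 744 startenv.sh      # rwxr--r--: owner may read/write/execute, others read
chmod 664 recipe.service   # rw-rw-r--: owner and group read/write, others read

# Print each file's octal mode next to its name.
stat -c '%a %n' startenv.sh recipe.service
```

systemd only needs to read the unit file, which is why 664 suffices for recipe.service, while the script must be executable.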

start

The start role is the last piece of the puzzle. It reloads the systemd daemon to pick up any config changes, enables recipe.service to start on boot and starts the service.

---
- name: Start the service that keeps the app running
  shell: |
    sudo systemctl daemon-reload
    sudo systemctl enable recipe.service
    sudo systemctl start recipe.service

Once this has been set up, head over to this repo and follow the instructions on how to run the configurations.

Link to the running app deployed through the above steps: Yummy Recipes API
