A Comprehensive Beginner’s Guide to Automating with Ansible
A version of this tutorial appears on Purple, Rock, Scissors.
One of the first things we learn as developers is not to repeat ourselves. Environment setup shouldn’t be any different.
I found myself doing the same thing over and over again each time I needed to set up a client’s server. There were tiny differences with each project, but for the most part, I was doing the same thing. Compound this repetition with setting up local development environments using the same base Vagrant box, and it became a bit of a looping nightmare.
We’ve been using Vagrant at PRPL for local development environments for a while, but because every project has its own set of technical requirements, we found ourselves having to tweak each box just a little. Since we were using the same base box for every project, it caused some divergence with the production/staging environments, as well.
In an attempt to solve both the headache of getting local development environments set up and configuring production/staging environments the same way, we knew we needed to automate this process. After looking at a few different solutions, we decided on Ansible.
What made Ansible the easy choice was its simple setup, the fact that all commands run over SSH, and that it was something we could use for deployments, as well.
To show you how easy it is to get started, we’ll begin with getting your local development environment set up. Later in the tutorial we’ll show a more complex example that includes provisioning a remote server and a simple deployment of a project. Here we go!
Prerequisites
You’ll need to have both Vagrant and Ansible installed before you can begin.
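Installation depends on your platform. As one sketch (assuming macOS with Homebrew; substitute your own package manager):

```shell
# Assumes Homebrew on macOS; on Ubuntu, `sudo apt-get install ansible`
# plus the Vagrant package from vagrantup.com works too.
$ brew install ansible
$ brew cask install vagrant
```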
Initialize Vagrant project
Create and move into the directory where you want to build this project.
$ mkdir -p ~/Sites/ansible
$ cd ~/Sites/ansible
Next we are going to initialize our Ubuntu box with Vagrant.
$ vagrant init ubuntu/trusty64
Vagrant should have created a Vagrantfile for you in your current directory. If you take out all the comments, you should have something like this:

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
end
MODIFY VAGRANTFILE FOR YOUR PROVISIONING
Next, you’ll set the network variable for the box you are working with. I typically like to use a private network, but using localhost is also fine.
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.20"
end
By default, Vagrant sets up a shared directory for you, but I like to set it explicitly. Add the following to your Vagrantfile.
config.vm.synced_folder ".", "/var/www"
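Synced folders also accept options if the defaults don't suit you. As one sketch (not required for this tutorial), you can set the ownership of the mounted files inside the box:

```ruby
# Optional: make the synced files owned by www-data inside the box.
config.vm.synced_folder ".", "/var/www",
  owner: "www-data", group: "www-data"
```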
Add Ansible as your provisioner.
...
config.vm.network "private_network", ip: "192.168.33.20"

config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
end
Now your file should look something like this:
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.20"
  config.vm.synced_folder ".", "/var/www"

  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
At a bare minimum, this should be enough to get you started using Ansible and Vagrant together.
Set up your Ansible playbook
We’re going to make this first setup easy-peasy and then build upon it. For this first part, we aren’t going to worry about best practices as much as we are about understanding what’s happening and getting up and running.
Create a playbook.yml file in the root of your current project directory.

$ touch playbook.yml
Since YAML can be very fussy, make sure you pay attention to all spaces and indentation. You can also check Ansible's YAML syntax guide.
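Once you have a playbook, a quick sanity check is Ansible's built-in syntax checker, which parses the YAML without running any tasks:

```shell
$ ansible-playbook playbook.yml --syntax-check
```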
LIST HOSTS
Your playbook needs to start with which hosts to run its commands against. For now, we're going to start with all. The old method of running as sudo (sudo: true) has been deprecated, so we're using the newer become directive instead. We need this so Ansible knows how to perform the tasks we're giving it.

So in your empty playbook.yml, create the following lines:
---
- hosts: all
  become: true
  remote_user: vagrant
ADD OUR FIRST TASKS
To get our local box up and running as a LEMP stack, we’ll need to do the following tasks:
- Update apt
- Install Nginx
- Install PHP
- Install MySQL
All tasks follow a similar syntax.
tasks:
  - name: Put a description here of what the task is doing
    [module]: [options]
So in your playbook, we’ll begin to build the task list.
tasks:
  - name: Update apt cache
    apt: update_cache=yes
Your playbook.yml file should now look like this:

---
- hosts: all
  become: true
  remote_user: vagrant
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
We can run the next set of tasks individually or in a loop. Since a loop is more efficient, we’ll go with that. Loops can be written in a number of different ways, but seeing that we are performing a simple series of installs, we’ll use a simple loop.
- name: Install Nginx, PHP, and MySQL
  apt: name={{item}} state=present
  with_items:
    - nginx
    - php5-fpm
    - php5-mysql
    - mysql-server
    - php5-mcrypt
    - php5-gd
    - php5-curl
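As an aside, newer Ansible releases (2.5+) prefer the loop keyword over with_items. If you're on a newer version, the equivalent task looks like this (same behavior, just newer syntax):

```yaml
- name: Install Nginx, PHP, and MySQL
  apt: name={{item}} state=present
  loop:
    - nginx
    - php5-fpm
    - php5-mysql
    - mysql-server
```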
Your playbook.yml file should now look like this:

---
- hosts: all
  become: true
  remote_user: vagrant
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Install Nginx, PHP, and MySQL
      apt: name={{item}} state=present
      with_items:
        - nginx
        - php5-fpm
        - php5-mysql
        - mysql-server
        - php5-mcrypt
        - php5-gd
        - php5-curl
At this point, you should be able to run vagrant up and have a working web server. Typically, though, project setup doesn't stop there. Next we'll work with templates and modify files on our Vagrant box.
SET UP VHOST
We’re going to work off the assumption that there is only one project being served in this box, but you can make modifications to serve others if you’d like. Next we’re going to create a template file to work from.
$ mkdir templates
$ touch templates/nginx.conf.j2
Save the following to templates/nginx.conf.j2:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root {{document_root}};
    index index.php index.html index.htm;

    server_name {{url}};

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
You'll notice lines in the template file like:

root {{document_root}};

{{document_root}} is a variable we are going to set in our playbook.yml. After remote_user and before tasks, you'll set vars.

vars:
  url: project.dev
  document_root: /var/www
Your playbook.yml should now look like the following:

---
- hosts: all
  become: true
  remote_user: vagrant
  vars:
    url: project.dev
    document_root: /var/www
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Install Nginx, PHP, and MySQL
      apt: name={{item}} state=present
      with_items:
        - nginx
        - php5-fpm
        - php5-mysql
        - mysql-server
        - php5-mcrypt
        - php5-gd
        - php5-curl
At this point, we have everything we need to add our additional task of creating the vhost.
Go back to the tasks list in playbook.yml and add:

- name: Copy across virtual host config
  template:
    src=templates/nginx.conf.j2
    dest=/etc/nginx/sites-available/{{url}}
We also have to enable the config file, so let's add another task. Notice we're using a new module, file, and we're setting the state of that file as link.

- name: Enable site
  file:
    src=/etc/nginx/sites-available/{{url}}
    dest=/etc/nginx/sites-enabled/{{url}}
    state=link
We’ve set this configuration as the default, and therefore need to remove the NGINX default. To do that, we will use the same module to unlink the default site.
- name: Remove default conf link
  file:
    path=/etc/nginx/sites-enabled/default
    state=absent
Last, we need to restart NGINX to get everything working.
- name: restart nginx
  service: name=nginx state=restarted
Your playbook.yml should now look like the following:

---
- hosts: all
  become: true
  remote_user: vagrant
  vars:
    url: project.dev
    document_root: /var/www
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Install Nginx, PHP, and MySQL
      apt: name={{item}} state=present
      with_items:
        - nginx
        - php5-fpm
        - php5-mysql
        - mysql-server
        - php5-mcrypt
        - php5-gd
        - php5-curl
    - name: Copy across virtual host config
      template:
        src=templates/nginx.conf.j2
        dest=/etc/nginx/sites-available/{{url}}
    - name: Enable site
      file:
        src=/etc/nginx/sites-available/{{url}}
        dest=/etc/nginx/sites-enabled/{{url}}
        state=link
    - name: Remove default conf link
      file:
        path=/etc/nginx/sites-enabled/default
        state=absent
    - name: restart nginx
      service: name=nginx state=restarted
At this point you can run vagrant up to get your box created and provisioned with a LEMP stack. If you've already run vagrant up earlier in this tutorial, you can update the provisioning of your Vagrant box by running vagrant provision.
You should see output of Vagrant doing its setup, and then Ansible's output of the task names. If you have output set to verbose, you'll see a lot more information; you can toggle this on or off.
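Verbosity for Vagrant-driven Ansible runs is controlled in the provisioner block. A sketch (the value ranges from "v" to "vvvv", with more v's meaning more detail):

```ruby
config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.verbose  = "v"  # bump to "vvv" when debugging
end
```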
Test your setup
Now you’re at a point where you can see if it all worked.
CREATE AN INDEX FILE
In the root of your project, create an index.html file and add some junk to it.
<p>I'm doing an ansible thing and stuff</p>
If you then navigate to the IP address you set in the Vagrantfile (ours is 192.168.33.20), you should see the output of the index file you just created.
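You can also check from the command line; a quick curl against the box should return the page (assuming the IP above):

```shell
$ curl http://192.168.33.20
# You should see the contents of your index.html in the response.
```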
Now we’re going to make some modifications so Ansible can be run manually from the current directory. We will also get set up so we can provision our production and stage environments.
Restructure project
Right now you have a few files floating around and all of your tasks are listed in the one playbook file. Although this works, it isn’t ideal for reuse. We’re going to change that.
In the root of a basic web project, you typically have all of your code files already organized in the fashion that suits that particular project type. Since Ansible is just a piece of that project, we are going to add an Ansible directory and move the related Ansible files over.
$ mkdir ansible
$ mv playbook.yml ansible/playbook.yml
$ mv templates ansible/templates
We’re also going to change the structure to more closely align with Ansible’s best practices. Below is a sample structure that is used in Ansible’s documentation.
production                # inventory file for production servers
staging                   # inventory file for staging environment

group_vars/
    group1                # here we assign variables to particular groups
    group2                # ""

host_vars/
    hostname1             # if systems need specific variables, put them here
    hostname2             # ""

library/                  # if any custom modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/
            main.yml      # tasks file can include smaller files if warranted
        handlers/
            main.yml      # handlers file
        templates/        # files for use with the template resource
            ntp.conf.j2   # templates end in .j2
        files/
            bar.txt       # files for use with the copy resource
            foo.sh        # script files for use with the script resource
        vars/
            main.yml      # variables associated with this role
        defaults/
            main.yml      # default lower priority variables for this role
        meta/
            main.yml      # role dependencies
    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""
This is a bit much for creating just a small web project, so we’re going to modify the structure for our general purposes.
hosts                     # inventory file
site.yml                  # master playbook
roles/
    nginx/                # this hierarchy represents a "role"
        tasks/
            main.yml      # tasks file can include smaller files if warranted
        handlers/
            main.yml      # handlers file
        templates/        # files for use with the template resource
            nginx.conf.j2 # templates end in .j2
    mysql/                # same kind of structure as "nginx" was above
    php/                  # ""
The above list isn’t completely comprehensive, but it gives us a start. Go ahead and mirror this structure now; we’ll fill in the files as we move along.
SET UP LOCAL ANSIBLE CONFIGURATION
If you simply run the ansible command, you should see a warning that you don't have any target hosts: ERROR! Missing target hosts. That's because Ansible is looking at your current machine for a hosts file to house your inventory. What we want is for Ansible to look at this project specifically for our inventory. To do that, we'll set up a hosts file locally in this project.
If you haven't already done so, create a hosts file in the Ansible directory.
$ touch ansible/hosts
Now you can add your Vagrant box as a host. You can use either the IP address or a local domain that you set up in your machine's /etc/hosts file. Keep in mind we're talking about two different hosts files here. We're going to give our Vagrant box an alias and also assign it to a group called local.
In your Ansible hosts file, add the following lines:
[local]
vagrant ansible_host=192.168.33.20
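If you want to run Ansible against the box outside of Vagrant's provisioner, you can also put connection details on the inventory line. This sketch assumes VirtualBox's default private key location, which may differ on your setup:

```
[local]
vagrant ansible_host=192.168.33.20 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
```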
If you added an alias like project.dev or something similar in your /etc/hosts file and pointed it at the IP, then you could do something like the following:
[local]
project.dev
Next, we're going to tell Ansible to use this hosts file when we run it locally from this project. Create an ansible.cfg file in the Ansible directory.
$ touch ansible/ansible.cfg
Modify the config to point Ansible to our local hosts file.
[defaults]
inventory = ./hosts
You can use this config to set other Ansible defaults for this project.
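For example, two defaults I often find handy in a Vagrant-backed project (both are real ansible.cfg options; whether you want them is a judgment call):

```
[defaults]
inventory = ./hosts
host_key_checking = False   # skip SSH host key prompts for throwaway boxes
retry_files_enabled = False # don't litter the project with .retry files
```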
Now we tell Vagrant to use our local hosts file, as well. Add the following to the config.vm.provision block:
ansible.limit = "local" # limit Vagrant to the local group
ansible.inventory_path = "./ansible/hosts"
Rename the ansible/playbook.yml file to ansible/site.yml.
$ mv ansible/playbook.yml ansible/site.yml
We also want to modify the site.yml file so that the play we're running targets just the local group we set up in our hosts file. Change the hosts value from all to local.
---
- hosts: local
  become: true
  remote_user: vagrant
...
Update the Vagrantfile so that it points to the right playbook.
ansible.playbook = "./ansible/site.yml"
At this point, you can test your setup by running vagrant provision to make sure Vagrant still works properly with the changes you've made. It's also a good idea to test your Ansible command now.
Run the following from the Ansible directory:
$ ansible-playbook site.yml --list-hosts
You should see an output of your hosts that you created. If you want to see a full list of commands available to you, run the following:
$ ansible-playbook site.yml --help
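A quick end-to-end connectivity test is Ansible's ping module, run ad hoc against your inventory (assuming the vagrant user and the group name from above):

```shell
$ ansible local -i hosts -m ping -u vagrant
```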
Now that we have the configuration set up, we can start modifying our structure to make the tasks reusable and implement the ability to configure our staging and production environments.
CREATE ROLES
Roles are just a grouping of tasks and dependencies of those tasks. You can separate roles in various ways, but we’re going to separate our roles based on the different parts of LEMP.
Starting with NGINX, we’re going to move all our tasks and templates.
# ansible/roles/nginx/tasks/main.yml

---
- name: Install Nginx
  apt: name=nginx state=present update_cache=true

- name: Copy across virtual host config
  template:
    src=templates/nginx.conf.j2
    dest=/etc/nginx/sites-available/{{url}}

- name: Enable site
  file:
    src=/etc/nginx/sites-available/{{url}}
    dest=/etc/nginx/sites-enabled/{{url}}
    state=link

- name: Remove default conf link
  file:
    path=/etc/nginx/sites-enabled/default
    state=absent

- name: restart nginx
  service: name=nginx state=restarted
Note: we've added updating the apt cache to the apt task directly. We'll do this for all of our other apt tasks as well.
# ansible/roles/nginx/templates/nginx.conf.j2

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root {{document_root}};
    index index.php index.html index.htm;

    server_name {{url}};

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Next, we'll take a look at our handlers directory. Handlers in Ansible are a way to run a task only if something has changed. We're going to move restarting NGINX into a handler so we aren't restarting it every time we run a play.
# ansible/roles/nginx/handlers/main.yml

---
- name: restart nginx
  service: name=nginx state=restarted

- name: reload nginx
  service: name=nginx state=reloaded
Then we'll modify ansible/roles/nginx/tasks/main.yml to use a handler instead.
---
- name: Install Nginx
  apt: name=nginx state=present update_cache=true

- name: Copy across virtual host config
  template:
    src=templates/nginx.conf.j2
    dest=/etc/nginx/sites-available/{{url}}

- name: Enable site
  file:
    src=/etc/nginx/sites-available/{{url}}
    dest=/etc/nginx/sites-enabled/{{url}}
    state=link
  notify:
    - reload nginx

- name: Remove default conf link
  file:
    path=/etc/nginx/sites-enabled/default
    state=absent
  notify:
    - reload nginx
That's it for NGINX. We'll move the rest of the tasks into the MySQL and PHP roles.
# ansible/roles/mysql/tasks/main.yml

---
- name: Install MySQL
  apt: name=mysql-server state=present update_cache=true

# ansible/roles/php/tasks/main.yml

---
- name: Install PHP
  apt: name={{item}} state=present update_cache=true
  with_items:
    - php5-fpm
    - php5-mysql
    - php5-mcrypt
    - php5-gd
    - php5-curl

- name: ensure php5-fpm cgi.fix_pathinfo=0
  lineinfile: dest=/etc/php5/fpm/php.ini regexp='^(.*)cgi.fix_pathinfo=' line=cgi.fix_pathinfo=0
  notify:
    - restart php5-fpm
    - restart nginx
Create your handler for PHP.
# ansible/roles/php/handlers/main.yml

---
- name: restart php5-fpm
  service: name=php5-fpm state=restarted
Now that we've moved everything over, we can modify our main site.yml file. We'll remove all the tasks we moved and replace them with a list of roles.
roles:
  - nginx
  - php
  - mysql
Your site.yml file should now look like this:

---
- hosts: local
  become: true
  remote_user: vagrant
  vars:
    url: project.dev
    document_root: /var/www
  roles:
    - nginx
    - php
    - mysql
Now we can test to make sure that everything is working correctly.
$ vagrant provision
If you didn’t get any errors, then you are in a place to start adding other servers to your Ansible setup.
ADD A STAGING SITE
In your site.yml file, copy what you have for local and change the name to staging. You'll modify the remote_user and the vars to suit your needs.
remote_user: root
vars:
  url: project.stage
  document_root: /var/www
Your site.yml should look something like this:

---
- hosts: local
  become: true
  remote_user: vagrant
  vars:
    url: project.dev
    document_root: /var/www
  roles:
    - nginx
    - php
    - mysql

- hosts: staging
  become: true
  remote_user: root
  vars:
    url: project.stage
    document_root: /var/www
  roles:
    - nginx
    - php
    - mysql
In your ansible/hosts file, add a staging group.

[local]
vagrant ansible_host=192.168.33.20

[staging]
droplet ansible_host=104.131.68.167 # you'll have to modify this for your needs
A couple of things to check before you try to run Ansible against a remote server:

- Make sure you have your SSH key on that server.
- Make sure the user that you set as your remote_user has sudo permissions.
If all that checks out, you should be able to provision your remote server with one command.
$ ansible-playbook -i hosts site.yml --limit=staging
Just to explain what’s happening here:
$ ansible-playbook -i [inventory to use] [playbook to use] --limit=[limit to a specific group]
Good? You should now have a staging server that’s provisioned just like your local environment.
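For future runs against real servers, it's worth building the habit of a dry run first. The --check flag reports what would change without changing it (not every module supports check mode, so treat the output as advisory):

```shell
$ ansible-playbook -i hosts site.yml --limit=staging --check
```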
Last, but not least, we're going to do a very simple deploy. It's basically an rsync. There are a lot of different ways to deploy with Ansible, but for the sake of time and simplicity we're going to keep it light.
DO A DEPLOY THING
We're going to create a deploy playbook at the same level as our site.yml file.
$ touch ansible/deploy.yml
In deploy.yml, copy over the first part of the play that you have for staging in your site.yml file.
---
- hosts: staging
  become: true
  remote_user: root
  vars:
    url: project.stage
    document_root: /var/www
Then we'll add a simple rsync task for our project.

  tasks:
    - name: Simple rsync of project
      synchronize:
        src: ../
        dest: "{{document_root}}"
        rsync_opts:
          - "--exclude=.vagrant"
          - "--exclude=ansible"
          - "--exclude=Vagrantfile"
Your deploy.yml file should look something like this:

---
- hosts: staging
  become: true
  remote_user: root
  vars:
    url: project.stage
    document_root: /var/www
  tasks:
    - name: Simple rsync of project
      synchronize:
        src: ../
        dest: "{{document_root}}"
        rsync_opts:
          - "--exclude=.vagrant"
          - "--exclude=ansible"
          - "--exclude=Vagrantfile"
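The synchronize module passes through more of rsync's behavior if you need it. For example, delete: yes removes remote files that no longer exist locally; useful for true mirroring, but destructive, so treat this as a sketch to adapt rather than copy:

```yaml
  tasks:
    - name: Rsync project and prune stale remote files
      synchronize:
        src: ../
        dest: "{{document_root}}"
        delete: yes
        rsync_opts:
          - "--exclude=.vagrant"
          - "--exclude=ansible"
          - "--exclude=Vagrantfile"
```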
Now let’s run it.
$ ansible-playbook -i hosts deploy.yml --limit=staging
If all went well, you should be able to see your index.html up on your site.
At this point we should have a working Ansible playbook that gives us a LEMP stack with a simple deploy. It solves our problem of provisioning our local and production/staging environments the same, and makes sure that all the little differences in each tech stack can be tracked. Even better, we can keep a repository of all these different setups for reuse in the future for different projects!
Our ability to automate server setup saves us time and makes sure we’re being consistent.
As with anything, there is a lot more that we can do with the current playbook we’ve just set up. We can add database migration, a more robust code deploy that works with Git — the options are endless.
Got questions?
Feel free to share your thoughts and feedback below. If there is anything in particular that you’d like to see as a follow up, please let us know!