Terraform and Ansible: Deploying a Multi-Region Architecture.
1. What we’ll be building.
2. Prerequisites.
3. Let’s build our infrastructure!
1. What we’ll be building.
All infrastructure will be deployed via the AWS command line, Terraform, and Ansible. We’ll start off by deploying VPCs and public subnets in two different regions, then set up VPC peering between the two regions. To support communication between both VPCs, we’ll alter the route tables, and we’ll attach internet gateways to the VPCs in both regions to allow our EC2 instances to communicate with the internet. Next we’ll deploy our EC2 instances in both regions: a main Jenkins instance and two worker instances. We’ll use Ansible to install the required software for Jenkins and apply the configuration that integrates the worker instances with the main instance. We’ll then add an Application Load Balancer in front of our main EC2 instance. Next we’ll generate an SSL certificate using AWS Certificate Manager and attach it to our Application Load Balancer to allow HTTPS traffic. The ACM certificate will be validated against a domain name in a Route 53 public hosted zone. Lastly we’ll route DNS queries for our Route 53 domain to our Application Load Balancer’s DNS name to allow traffic from the outside.
2. Prerequisites.
A registered domain name via AWS Route 53.
Setting up IAM permissions via the AWS console. (For the demo, I set up my permissions for programmatic access with administrator access.)
3. Let’s build our infrastructure!
For this demo, I’ll try to be as thorough as possible and break it down into chunks while we’re deploying our infrastructure.
3a. Configuring our backend.
We will first create an S3 bucket via the AWS CLI. This will be the bucket that we use to configure our Terraform backend and Store our State file.
Storing a state file remotely offers many benefits such as:
Safer storage: the state file can contain sensitive values, and remote storage like S3 adds a layer of security, such as keeping the bucket private and granting limited access.
Auditing: unauthorized access can be identified by enabling access logging.
Sharing: remote storage makes the state file available to other members of the team.
Let’s make a new directory: mkdir aws_ansible_tf
Change into that folder: cd aws_ansible_tf
In the command line, type aws s3api create-bucket --bucket <your_unique_bucket_name> --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
This will create a bucket via the AWS CLI. You may choose to store your bucket in a region other than us-east-2.
Next, let’s configure our backend via Terraform code. I’ll be using vim as my text editor for this demo.
vim backend.tf
The following code configures our backend remotely, using the S3 bucket that we just created.
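A minimal backend.tf looks something like this (substitute your own bucket name; the key name here is just an example):

```hcl
terraform {
  backend "s3" {
    # Replace with the bucket you created above
    bucket  = "your_unique_bucket_name"
    key     = "terraform.tfstate"
    region  = "us-east-2"
    profile = "default"
  }
}
```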
Notice the profile refers to our default AWS credentials that were configured in the prerequisites.
Save and quit with :wq!
Type: terraform init
This will initialize your backend and configure all the necessary plugins.
Verify in the AWS console that your S3 bucket and backend have been created
Providers Setup
Providers are the building blocks of Terraform. They implement all of the Terraform resources and are responsible for understanding API interactions and exposing those resources.
We’ll be setting up multiple providers for a multi-region deployment.
First let’s set up some variables for our providers to refer to.
In your aws_ansible_tf directory, open up a new file called variables.tf
We’ll be setting up three variables. The first, called profile, refers to our default AWS credentials.
The second, region-master, sets up our main provider in the us-east-2 region.
The third, region-worker, sets up our worker region in us-west-2.
Note: You may use whatever regions you prefer.
Refer to my code below.
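Here is one way to declare them; the defaults are simply the values used in this demo:

```hcl
variable "profile" {
  type    = string
  default = "default"
}

variable "region-master" {
  type    = string
  default = "us-east-2"
}

variable "region-worker" {
  type    = string
  default = "us-west-2"
}
```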
Next let’s set up our providers. Open a new file called providers.tf
Notice how we use the variables from our variables.tf file for our profile and regions. Also take note that we use an alias for each of the multiple provider configurations; without aliases, we would not be able to set up more than one configuration for the same provider.
Refer to my code below to set up our providers.tf file.
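Something along these lines works; the alias names region-master and region-worker are my own choice, and any unique aliases will do:

```hcl
provider "aws" {
  profile = var.profile
  region  = var.region-master
  alias   = "region-master"
}

provider "aws" {
  profile = var.profile
  region  = var.region-worker
  alias   = "region-worker"
}
```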
Next, we can go ahead and run terraform init again to download the providers that we have specified.
Setting Up Our Network.
Next we’ll begin the setup of the VPCs, subnets, and internet gateways for our project. We’ll be deploying two public subnets in us-east-2 and one public subnet in us-west-2. We’ll also create and attach an internet gateway to each of the two VPCs.
Let’s open up a new file called networks.tf and begin to set up our VPCs.
Below is the code for setting up our two VPCs.
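Something like this; the resource names and CIDR blocks are examples, so pick any two non-overlapping ranges you like:

```hcl
resource "aws_vpc" "vpc_master" {
  provider             = aws.region-master
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "master-vpc"
  }
}

resource "aws_vpc" "vpc_worker" {
  provider             = aws.region-worker
  cidr_block           = "192.168.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "worker-vpc"
  }
}
```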
Take note how we use the two different providers, matching the aliases given in the providers.tf file. We also give the VPCs two different CIDR blocks so that one doesn’t overlap the other, and we enable DNS support and DNS hostnames.
Next, we’ll set up our internet gateways so that our VPCs can communicate with the internet.
In the same networks.tf file, under our VPCs, we can write the code for our internet gateways.
Below is the code for our internet gateways.
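For example, one gateway per VPC:

```hcl
resource "aws_internet_gateway" "igw_master" {
  provider = aws.region-master
  vpc_id   = aws_vpc.vpc_master.id
}

resource "aws_internet_gateway" "igw_worker" {
  provider = aws.region-worker
  vpc_id   = aws_vpc.vpc_worker.id
}
```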
Again, make note of the aliases we are using and the IDs of the VPCs we created earlier.
Next, we’ll create a data source to fetch all the availability zones in our master region. The availability zones data source provides the list of AWS Availability Zones that the account can use within the region configured in the provider.
We can place the code below under the code for our internet gateways.
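A sketch of that data source, queried against the master provider:

```hcl
data "aws_availability_zones" "azs" {
  provider = aws.region-master
  state    = "available"
}
```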
Next we can begin to create our public subnets for both regions.
Under our data resource, begin to write out the code below for our public subnets in our master region.
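Roughly like this; the /24 CIDRs are carved out of the example VPC range used earlier:

```hcl
resource "aws_subnet" "subnet_1" {
  provider          = aws.region-master
  vpc_id            = aws_vpc.vpc_master.id
  availability_zone = element(data.aws_availability_zones.azs.names, 0)
  cidr_block        = "10.0.1.0/24"
}

resource "aws_subnet" "subnet_2" {
  provider          = aws.region-master
  vpc_id            = aws_vpc.vpc_master.id
  availability_zone = element(data.aws_availability_zones.azs.names, 1)
  cidr_block        = "10.0.2.0/24"
}
```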
Take note of the element function, which picks from our list of availability zones by index: index 0 gives the first availability zone in our list for the first public subnet, and index 1 gives the second availability zone for the second public subnet.
Next, let’s begin to write out the code below for our public subnet in the us-west-2 region.
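For example:

```hcl
resource "aws_subnet" "subnet_1_worker" {
  provider   = aws.region-worker
  vpc_id     = aws_vpc.vpc_worker.id
  cidr_block = "192.168.1.0/24"
}
```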
Next, let’s set up a peering connection between our two VPC’S. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.
Begin to write out the code below to set up our VPC peering connection.
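Something like the following, requested from the master side; peer_region tells AWS that the other VPC lives in the worker region:

```hcl
resource "aws_vpc_peering_connection" "master_worker" {
  provider    = aws.region-master
  vpc_id      = aws_vpc.vpc_master.id
  peer_vpc_id = aws_vpc.vpc_worker.id
  peer_region = var.region-worker
}
```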
Next, we’ll need to write some code to accept our peering request. Below is the code for our VPC peering connection accepter. You can accept a VPC peering connection requested between one of your VPCs and another VPC to enable communication between them. This action updates the state of the connection from pending-acceptance to active. Only the owner of the accepter VPC can accept a connection request.
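A minimal accepter, run through the worker provider:

```hcl
resource "aws_vpc_peering_connection_accepter" "accept_peering" {
  provider                  = aws.region-worker
  vpc_peering_connection_id = aws_vpc_peering_connection.master_worker.id
  auto_accept               = true
}
```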
Next, we’ll begin to create some route tables. The following code creates a route table in our VPC in us-east-2.
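A sketch, using the example CIDRs from earlier; the second route sends traffic destined for the us-west-2 subnet over the peering connection:

```hcl
resource "aws_route_table" "internet_route" {
  provider = aws.region-master
  vpc_id   = aws_vpc.vpc_master.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw_master.id
  }

  route {
    cidr_block                = "192.168.1.0/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.master_worker.id
  }
}
```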
Take note that the first route points to the internet gateway we created. The second route points to the subnet we created in us-west-2, so that traffic coming from that subnet can reach our VPC in us-east-2.
Next, we’ll modify the main route table in us-east-2. We do this because a default route table is created for us, and we want to replace it with the one we just defined.
Here is the code.
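For example:

```hcl
resource "aws_main_route_table_association" "set_master_default_rt" {
  provider       = aws.region-master
  vpc_id         = aws_vpc.vpc_master.id
  route_table_id = aws_route_table.internet_route.id
}
```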
Next, we’ll create a route table for our us-west-2 region. The first route will be directed towards our internet gateway in our VPC in us-west-2 and the second route will be directed towards a public subnet that we created in our VPC in us-east-2.
We’ll also update the default route table with the changes that we made above. Here is the code.
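Along these lines, again using the example CIDRs:

```hcl
resource "aws_route_table" "internet_route_worker" {
  provider = aws.region-worker
  vpc_id   = aws_vpc.vpc_worker.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw_worker.id
  }

  route {
    cidr_block                = "10.0.1.0/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.master_worker.id
  }
}

resource "aws_main_route_table_association" "set_worker_default_rt" {
  provider       = aws.region-worker
  vpc_id         = aws_vpc.vpc_worker.id
  route_table_id = aws_route_table.internet_route_worker.id
}
```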
Security Groups.
Next we’ll begin to configure our security groups. Open up a new file called security.tf
The first security group that we will create is for the application load balancer that we will create shortly. This will allow traffic from anywhere on the internet on port 443 (HTTPS) and port 80 (HTTP), plus egress (outbound) communication to all IP addresses. Take note that protocol -1 in the egress block means all protocols.
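Something like this, where the name lb_sg is my own label:

```hcl
resource "aws_security_group" "lb_sg" {
  provider    = aws.region-master
  name        = "lb-sg"
  description = "Allow 443 and 80 from anywhere"
  vpc_id      = aws_vpc.vpc_master.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"   # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```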
The second security group is for our EC2 instance named Jenkins master.
Take note that we are allowing SSH access to our EC2 instance from an external variable that we will create. The second ingress rule allows access on port 8080 from our load balancer’s security group ID. The third ingress rule allows traffic on any protocol coming from the subnet range of our VPC in the us-west-2 region.
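A sketch of that security group; 192.168.1.0/24 is the example worker subnet range from earlier, and if your web server listens on a port other than 8080 you would adjust the second rule to match:

```hcl
resource "aws_security_group" "jenkins_sg" {
  provider    = aws.region-master
  name        = "jenkins-sg"
  description = "Allow SSH, traffic from the ALB, and traffic from the worker subnet"
  vpc_id      = aws_vpc.vpc_master.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.external_ip]
  }

  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.lb_sg.id]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["192.168.1.0/24"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```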
Next, let’s create a variable for the SSH (port 22) access to our EC2 instance in the us-east-2 region.
Add the variable external_ip to allow SSH access from anywhere. This is not best practice, but it’s fine for demo purposes.
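In variables.tf:

```hcl
variable "external_ip" {
  type    = string
  default = "0.0.0.0/0"   # SSH from anywhere; fine for a demo, not for production
}
```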
Instances and Ansible
Before we begin, we’ll open up variables.tf and create three more variables that we’ll be using as we build out our instances. The variable worker-count refers to the number of worker instances that will be spun up in us-west-2. The instance-type variable uses the t2.micro size as a default. The third refers to the webserver port that we’ll be using, which is 80.
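For example:

```hcl
variable "worker-count" {
  type    = number
  default = 2
}

variable "instance-type" {
  type    = string
  default = "t2.micro"
}

variable "webserver-port" {
  type    = number
  default = 80
}
```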
We’ll now begin to configure our EC2 instances. Open up a file named instances.tf
We’ll be using the aws_ssm_parameter data source (AWS Systems Manager Parameter Store) to fetch the AMI IDs for our EC2 instances. Notice that name is the public parameter path for the AMI alias, in this case the latest Amazon Linux 2 image.
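A sketch using the public Amazon Linux 2 parameter path, once per region:

```hcl
data "aws_ssm_parameter" "linuxAmi" {
  provider = aws.region-master
  name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

data "aws_ssm_parameter" "linuxAmiWorker" {
  provider = aws.region-worker
  name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}
```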
Next we’ll need to configure our key pairs for our EC2 instances. We’ll do this through the command line.
The command that you will need to run is ssh-keygen -t rsa.
You will be prompted for a file in which to save the key, then for a passphrase and its confirmation. Keep hitting Enter through these prompts.
You now have a public and private key pair. Your private key is stored at ~/.ssh/id_rsa and your public key at ~/.ssh/id_rsa.pub
We can now go back to our instances.tf file and configure the key pairs for our EC2 instances in the us-east-2 and us-west-2 regions.
Here is the code:
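Something like this; pathexpand() resolves the ~ in the public key path:

```hcl
resource "aws_key_pair" "master_key" {
  provider   = aws.region-master
  key_name   = "my_jenkins"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}

resource "aws_key_pair" "worker_key" {
  provider   = aws.region-worker
  key_name   = "my_jenkins"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}
```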
Notice how we name our key pair my_jenkins (you can name yours anything you like) and provide the path to our public key.
Before we spin up our EC2 instances, we will download an Ansible configuration file, which controls the behavior of Ansible commands. We’ll also write Ansible playbooks that our EC2 provisioners will use for configuration management.
Within our aws_ansible_tf folder, make a new folder called ansible_templates.
mkdir ansible_templates
Change into the ansible_templates folder
cd ansible_templates
This folder will house our Ansible templates and configuration file.
First, let’s download the Ansible configuration file. You can type the command:
wget https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg
You should now have your Ansible configuration file downloaded.
Open up your ansible.cfg file and we’ll add a few lines.
We’ll add a few lines: the first points Ansible to the configuration file for the AWS dynamic inventory, and the second enables the aws_ec2 inventory plugin.
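The additions look roughly like this; the inventory path assumes the inventory_aws/tf_aws_ec2.yml file we create in the next step, and host_key_checking is an extra setting I also turn off so that the provisioners can connect without an interactive prompt:

```ini
[defaults]
host_key_checking = False
inventory         = ./ansible_templates/inventory_aws/tf_aws_ec2.yml

[inventory]
enable_plugins = aws_ec2
```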
Next we’ll create the configuration file for the dynamic inventory that we just referenced in the Ansible configuration file.
Let’s make a directory called inventory_aws: mkdir inventory_aws
Change into that directory: cd inventory_aws
Type in the command wget https://raw.githubusercontent.com/linuxacademy/content-deploying-to-aws-ansible-terraform/master/aws_la_cloudplayground_multiple_workers_version/ansible_templates/inventory_aws/tf_aws_ec2.yml
Next we’ll need to install boto3, the AWS SDK for Python, which the Ansible dynamic inventory plugin requires.
Let’s issue the command pip3 install boto3 --user
Once successfully installed, the plugin will start to work.
Next, let’s create a sample file for our master EC2 instance and open it up. Name it jenkins-master-sample.yml
The playbook below uses a hosts argument to take in a variable telling it which hosts to run against. We also tell Ansible to connect as the remote ec2-user and become the root user, since the tasks need elevated privileges. We’re adding two tasks to the playbook: the first installs the Apache web server (state present), and the second starts and enables the Apache service.
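A sketch of jenkins-master-sample.yml; the passed_in_hosts variable name is my own and just needs to match what we pass in from the Terraform provisioner later:

```yaml
---
- hosts: "{{ passed_in_hosts }}"
  become: yes
  remote_user: ec2-user
  tasks:
    - name: Install Apache web server
      yum:
        name: httpd
        state: present

    - name: Start and enable Apache
      service:
        name: httpd
        state: started
        enabled: yes
```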
Next, let’s open a new file called jenkins-worker-sample.yml.
The playbook for this file will be similar, the only difference being that we install a JSON parser.
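For example, installing jq as the JSON parser:

```yaml
---
- hosts: "{{ passed_in_hosts }}"
  become: yes
  remote_user: ec2-user
  tasks:
    - name: Install the jq JSON parser
      yum:
        name: jq
        state: present
```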
We can go back up one directory: cd ..
We can now start working on configuring our EC2 instances. Open instances.tf again
For our first instance, in the master region, notice the alias we use in provider. The AMI comes from the SSM Parameter Store data source we configured, and we reference the key pair we created. We also enable a public IP address so we can test connectivity, and we place the instance in the first public subnet we created. Notice the local-exec provisioner: its first command waits for the instance to reach the ok status, and the Ansible command that follows then connects over SSH and runs a playbook against it.
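A sketch of the master instance; the inventory group name passed to Ansible (tag_Name_…) assumes the dynamic inventory groups hosts by their Name tag, so adjust it to whatever groups your tf_aws_ec2.yml produces:

```hcl
resource "aws_instance" "jenkins-master" {
  provider                    = aws.region-master
  ami                         = data.aws_ssm_parameter.linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.master_key.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_sg.id]
  subnet_id                   = aws_subnet.subnet_1.id

  tags = {
    Name = "jenkins_master_tf"
  }

  provisioner "local-exec" {
    command = <<EOF
aws --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region-master} --instance-ids ${self.id} && \
ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-master-sample.yml
EOF
  }
}
```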
Next we’ll create our worker instances in the worker region. Notice that we put them in the public subnet that we created in us-west-2, and that they have a local-exec provisioner similar to the first instance.
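And the workers, driven by the worker-count variable; no dedicated worker security group is defined in this walkthrough, so the VPC’s default group applies unless you add one:

```hcl
resource "aws_instance" "jenkins-worker" {
  provider                    = aws.region-worker
  count                       = var.worker-count
  ami                         = data.aws_ssm_parameter.linuxAmiWorker.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker_key.key_name
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.subnet_1_worker.id

  tags = {
    Name = join("_", ["jenkins_worker_tf", count.index + 1])
  }

  provisioner "local-exec" {
    command = <<EOF
aws --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region-worker} --instance-ids ${self.id} && \
ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-worker-sample.yml
EOF
  }
}
```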
Load Balancer
Now that we have Ansible and our instances configured, we can create the application load balancer to put in front of the instance in our main region. The load balancer will route traffic to a target group, which in this case contains our master EC2 instance.
Open up alb.tf and let’s start creating the load balancer itself.
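A sketch of the load balancer; the name app-lb is arbitrary:

```hcl
resource "aws_lb" "app-lb" {
  provider           = aws.region-master
  name               = "app-lb"
  load_balancer_type = "application"
  internal           = false
  security_groups    = [aws_security_group.lb_sg.id]
  subnets            = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
}
```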
Notice we put the load balancer in the master region in both subnets for high availability.
Next we’ll configure the target group for our load balancer.
Notice that the target type is instance. We pass in the ID of the VPC in which the load balancer resides, and the protocol we route over is HTTP. We also have a health check block.
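Something like this, using our webserver-port variable:

```hcl
resource "aws_lb_target_group" "app-lb-tg" {
  provider    = aws.region-master
  name        = "app-lb-tg"
  port        = var.webserver-port
  protocol    = "HTTP"
  vpc_id      = aws_vpc.vpc_master.id
  target_type = "instance"

  health_check {
    path     = "/"
    port     = var.webserver-port
    protocol = "HTTP"
    matcher  = "200"
  }
}
```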
Next we’ll configure our listeners. Notice how we configure listeners for HTTP and HTTPS.
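For example; the HTTPS listener references the ACM certificate we create in the DNS and ACM sections below, and I redirect plain HTTP to HTTPS here (you could also forward it directly to the target group):

```hcl
resource "aws_lb_listener" "lb-https-listener" {
  provider          = aws.region-master
  load_balancer_arn = aws_lb.app-lb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = aws_acm_certificate.lb_cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app-lb-tg.arn
  }
}

resource "aws_lb_listener" "lb-http-listener" {
  provider          = aws.region-master
  load_balancer_arn = aws_lb.app-lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```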
Next we’ll add a target group attachment. This is how we attach our target group to our master EC2 instance.
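For example:

```hcl
resource "aws_lb_target_group_attachment" "jenkins-master-attach" {
  provider         = aws.region-master
  target_group_arn = aws_lb_target_group.app-lb-tg.arn
  target_id        = aws_instance.jenkins-master.id
  port             = var.webserver-port
}
```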
DNS and Route 53.
For this next part, we’ll be setting up HTTPS and a Route 53 record. Incoming traffic will first hit our domain, which is hosted in AWS Route 53. You will need a public domain for this part.
Before we begin, let’s open up variables.tf and add a variable with your domain name. Here is mine, for example. Please do not forget the trailing dot at the end.
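For example; the variable name dns-name is my own, so substitute your registered domain:

```hcl
variable "dns-name" {
  type    = string
  default = "yourdomain.com."   # replace with your Route 53 domain, keeping the trailing dot
}
```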
Let’s begin to configure Route 53.
Open up a new file called dns.tf
Let’s add the data resource aws_route53_zone
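Something like:

```hcl
data "aws_route53_zone" "dns" {
  provider = aws.region-master
  name     = var.dns-name
}
```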
Next we’ll begin to set up our certificate management system.
Let’s create a file called acm.tf
The code below creates the ACM certificate resource. The resource that follows validates the ACM certificate via Route 53.
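A sketch of acm.tf; I request the certificate for a jenkins subdomain of the hosted zone, and the aws_route53_record.cert_validation records it depends on are created in dns.tf in the next step:

```hcl
resource "aws_acm_certificate" "lb_cert" {
  provider          = aws.region-master
  domain_name       = join(".", ["jenkins", trimsuffix(data.aws_route53_zone.dns.name, ".")])
  validation_method = "DNS"
}

resource "aws_acm_certificate_validation" "cert" {
  provider                = aws.region-master
  certificate_arn         = aws_acm_certificate.lb_cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}
```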
Next we’ll go back to dns.tf and paste in the next resource. This code creates the Route 53 record used for certificate validation.
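A sketch of the validation record, using the domain_validation_options pattern from AWS provider 3.x:

```hcl
resource "aws_route53_record" "cert_validation" {
  provider = aws.region-master
  for_each = {
    for dvo in aws_acm_certificate.lb_cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = data.aws_route53_zone.dns.zone_id
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}
```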
Next we’ll create an alias record pointing to our ALB from Route 53. Here we’re telling Route 53 to route any request that hits our domain name to the DNS name of our load balancer.
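For example, an alias A record for the jenkins subdomain:

```hcl
resource "aws_route53_record" "jenkins" {
  provider = aws.region-master
  zone_id  = data.aws_route53_zone.dns.zone_id
  name     = join(".", ["jenkins", data.aws_route53_zone.dns.name])
  type     = "A"

  alias {
    name                   = aws_lb.app-lb.dns_name
    zone_id                = aws_lb.app-lb.zone_id
    evaluate_target_health = true
  }
}
```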
Outputs.
Let’s create some outputs.
Open up outputs.tf
Here we want the output of our main and worker IP addresses. We also want the output of our ALB DNS name and the URL of the AWS Route 53 record.
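A sketch of outputs.tf using the example resource names from this walkthrough:

```hcl
output "jenkins-master-public-ip" {
  value = aws_instance.jenkins-master.public_ip
}

output "jenkins-worker-public-ips" {
  value = {
    for instance in aws_instance.jenkins-worker :
    instance.tags.Name => instance.public_ip
  }
}

output "lb-dns-name" {
  value = aws_lb.app-lb.dns_name
}

output "url" {
  value = aws_route53_record.jenkins.fqdn
}
```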
Deploy
It’s time to deploy our infrastructure.
Run a terraform fmt: The terraform fmt command is used to rewrite Terraform configuration files to a canonical format and style.
Run a terraform validate: The terraform validate command validates the configuration files in a directory.
Run a terraform plan: The terraform plan command creates an execution plan with a preview of the changes that Terraform will make to your infrastructure.
Finally, run terraform apply: the terraform apply command performs a plan just like terraform plan does, but then actually carries out the planned changes to each resource using the relevant infrastructure provider's API.
We can verify our Infrastructure and Ansible playbooks deploying through the command line and also through the AWS console.
Finally, once you have finished exploring the demo, run terraform destroy to tear everything down.
Note: don’t panic if it does not run on the first try; you may need to troubleshoot some errors. I tried to be as thorough as possible. Thanks for checking it out.
To view the repo for this demo, check out my GitHub page here.
Connect with me on LinkedIn here.