Deploying to AWS with Ansible and Terraform
Terraform is a powerful provisioning tool that supports immutable infrastructure and a declarative language
By now, most of us have used public cloud services like AWS, Azure, and Google Cloud. If not all three, then at least AWS, since AWS is the biggest player in the public cloud market. Creating your infrastructure on top of these public clouds is straightforward if done manually through their respective web consoles. However, it is not that simple to automate the infrastructure-building process in a reusable fashion. Keep in mind that we are talking about automating the “infrastructure” here, not the applications and services running on your servers.
When I say infrastructure, I am referring to things like the following in the cloud:
- Networks
- Subnetworks
- Firewalls
- LoadBalancers
- Storage
- Public IPs
- DNS entries, and much more
There are already many configuration management tools on the market that can automate the applications and services running inside your instances (VMs). For example, Puppet, Chef, Ansible, and Salt can all be used to automate the applications and services running inside your VMs (in other words, your app running on the infrastructure).
We need a reusable, repeatable method to build infrastructure using code. The idea is to treat the infrastructure components listed above the same way we treat our application: as code. Hence the name “Infrastructure as Code” (IaC). The tool we are going to discuss today falls under IaC. It is called “Terraform”.
This means the principles we generally apply to software development can be applied to infrastructure as well: it can be version controlled, it can be shared (because it's code), and we can go back in time (by reverting to a previous version).
You can declare the required state of your infrastructure using Terraform, and it will take care of the underlying complexities to create it.
Let’s imagine you want to create an AWS instance, attach a public IP (an Elastic IP) to it, and finally add a DNS entry for it. As I mentioned earlier, you simply specify the end state you want using Terraform:
- A Public IP
- A DNS Entry
- An Instance
There are dependencies between these resources. Terraform calculates them and creates each resource in the correct order. Let’s think about this for a moment: the instance must exist before we can attach a public IP to it, and the public IP must exist before we can add the DNS entry. The order matters here, and Terraform takes care of it by building a dependency graph internally.
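To make this concrete, here is a minimal Terraform sketch of those three resources (the AMI ID, zone ID, and names are illustrative placeholders, not values from this tutorial). Terraform infers the creation order from the references between the resources:
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
resource "aws_eip" "web_ip" {
  # References the instance, so the instance is created first
  instance = "${aws_instance.web.id}"
}
resource "aws_route53_record" "web_dns" {
  # References the EIP, so the DNS entry is created last
  zone_id = "ZXXXXXXXXXXXX"
  name    = "web.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.web_ip.public_ip}"]
}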
So Terraform provisions your infra in the cloud of your choice (Terraform also falls under the umbrella of tools called provisioners), which is why it can also be called a cloud provisioner.
Why Can’t We Use Puppet, Chef or Ansible for this?
The primary focus of Puppet, Chef, and Ansible is configuration inside the instances (i.e., your application- and server-specific configs). Although modules exist that let these configuration management tools manage some infrastructure components, the original intent behind their creation was application configuration inside the operating system.
That said, you can still use these configuration management tools alongside Terraform to configure things inside the VMs, as we will do in this tutorial (Terraform can invoke them as provisioners to configure applications inside your infrastructure).
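As a rough sketch of that combination (the AMI ID and playbook name are placeholders, and this is just one of several ways to wire Ansible into Terraform), a local-exec provisioner can hand the instance over to a playbook once it boots:
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  # Terraform provisions the VM; Ansible configures what runs inside it
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.public_ip},' site.yml"
  }
}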
Apart from this, if you are using Docker containers to run your applications, the containers are self-sufficient and ship with all the configuration your application requires. In that case, the need for a configuration management tool like Chef or Puppet is much smaller. But you still need something to manage your infrastructure with code, because the containers ultimately run on top of servers/VMs in a cloud. Terraform can step in and create the infra your containers run on.
Let’s not deny that all these tools (Chef, Puppet, Ansible, etc.) can be used for IaC as well. But Terraform is particularly well suited for this purpose because it maintains the state of the infrastructure.
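For example, after an apply you can ask Terraform what it is tracking and how your configuration differs from the recorded state:
terraform state list   # resources recorded in the state file
terraform plan         # diff between the configuration and the recorded state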
Tutorial focus
- Jenkins will trigger tests, deployment, and provisioning as the CI/CD driver
- I suggest adding test steps via a Gerrit pull request that runs terraform validate/plan before the apply commands (see the sketch just below this list)
- Packer will build our production-grade base OS image and save it as an AWS AMI
- Terraform will provision the AWS VPC and deploy the infrastructure as code
- Ansible will deploy and test the application on the EC2 instance
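For the validate/plan gate suggested above, the review job could run something like the following before any apply (a sketch; the variable names match the pipeline shown later):
terraform init
terraform validate -var access_key=${AWS_KEY} -var secret_key=${AWS_SECRET}
terraform plan -var access_key=${AWS_KEY} -var secret_key=${AWS_SECRET}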
Pull and run the latest Jenkins Blue Ocean Docker image:
sudo docker run -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home --restart unless-stopped jenkinsci/blueocean:latest
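On first start Jenkins asks for a setup password, which you can read from the container (the container ID comes from docker ps):
sudo docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword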
Jenkins Configuration
- Install suggested plugins
- Add awsCred and repoCred credentials for your AWS account and GitHub account.
- The AWS user must have EC2 full access rights (e.g. the AmazonEC2FullAccess managed policy).
- Create a new pipeline and use the https://github.com/repository repo as the Jenkinsfile source.
Required:
yourdockerhub/agent-image for the Docker agent (git, bash, JDK, Packer, and Terraform preinstalled), or build your own image with the Dockerfile below and push it to your registry.
Dockerfile for the Jenkins agent
FROM alpine

ENV LANG=C.UTF-8 \
    JAVA_VERSION=8 \
    JAVA_UPDATE=171 \
    JAVA_BUILD=11 \
    JAVA_PATH=512cd62ec5174c3487ac17c61aaa89e8 \
    JAVA_HOME="/usr/lib/jvm/default-jvm"

# Here we install GNU libc (aka glibc) and set C.UTF-8 locale as default.
RUN ALPINE_GLIBC_BASE_URL="https://github.com/sgerrand/alpine-pkg-glibc/releases/download" && \
    ALPINE_GLIBC_PACKAGE_VERSION="2.27-r0" && \
    ALPINE_GLIBC_BASE_PACKAGE_FILENAME="glibc-$ALPINE_GLIBC_PACKAGE_VERSION.apk" && \
    ALPINE_GLIBC_BIN_PACKAGE_FILENAME="glibc-bin-$ALPINE_GLIBC_PACKAGE_VERSION.apk" && \
    ALPINE_GLIBC_I18N_PACKAGE_FILENAME="glibc-i18n-$ALPINE_GLIBC_PACKAGE_VERSION.apk" && \
    apk add --no-cache --virtual=.build-dependencies wget ca-certificates && \
    wget \
        "https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub" \
        -O "/etc/apk/keys/sgerrand.rsa.pub" && \
    wget \
        "$ALPINE_GLIBC_BASE_URL/$ALPINE_GLIBC_PACKAGE_VERSION/$ALPINE_GLIBC_BASE_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_BASE_URL/$ALPINE_GLIBC_PACKAGE_VERSION/$ALPINE_GLIBC_BIN_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_BASE_URL/$ALPINE_GLIBC_PACKAGE_VERSION/$ALPINE_GLIBC_I18N_PACKAGE_FILENAME" && \
    apk add --no-cache \
        "$ALPINE_GLIBC_BASE_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_BIN_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_I18N_PACKAGE_FILENAME" && \
    rm "/etc/apk/keys/sgerrand.rsa.pub" && \
    /usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 "$LANG" || true && \
    echo "export LANG=$LANG" > /etc/profile.d/locale.sh && \
    apk del glibc-i18n && \
    rm "/root/.wget-hsts" && \
    apk del .build-dependencies && \
    rm \
        "$ALPINE_GLIBC_BASE_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_BIN_PACKAGE_FILENAME" \
        "$ALPINE_GLIBC_I18N_PACKAGE_FILENAME"

# Install the Oracle JDK (accepting the license cookie) and the JCE policy files.
RUN apk add --no-cache --virtual=build-dependencies wget ca-certificates unzip && \
    cd "/tmp" && \
    wget --header "Cookie: oraclelicense=accept-securebackup-cookie;" \
        "http://download.oracle.com/otn-pub/java/jdk/${JAVA_VERSION}u${JAVA_UPDATE}-b${JAVA_BUILD}/${JAVA_PATH}/jdk-${JAVA_VERSION}u${JAVA_UPDATE}-linux-x64.tar.gz" && \
    tar -xzf "jdk-${JAVA_VERSION}u${JAVA_UPDATE}-linux-x64.tar.gz" && \
    mkdir -p "/usr/lib/jvm" && \
    mv "/tmp/jdk1.${JAVA_VERSION}.0_${JAVA_UPDATE}" "/usr/lib/jvm/java-${JAVA_VERSION}-oracle" && \
    ln -s "java-${JAVA_VERSION}-oracle" "$JAVA_HOME" && \
    ln -s "$JAVA_HOME/bin/"* "/usr/bin/" && \
    rm -rf "$JAVA_HOME/"*src.zip && \
    wget --header "Cookie: oraclelicense=accept-securebackup-cookie;" \
        "http://download.oracle.com/otn-pub/java/jce/${JAVA_VERSION}/jce_policy-${JAVA_VERSION}.zip" && \
    unzip -jo -d "${JAVA_HOME}/jre/lib/security" "jce_policy-${JAVA_VERSION}.zip" && \
    rm "${JAVA_HOME}/jre/lib/security/README.txt" && \
    apk del build-dependencies && \
    rm "/tmp/"*

# Install Packer and Terraform, plus git, bash, and openssh for the pipeline.
RUN mkdir /root/packer
WORKDIR /root/packer
RUN wget https://releases.hashicorp.com/packer/1.2.4/packer_1.2.4_linux_amd64.zip
RUN wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
RUN apk update
RUN unzip packer_1.2.4_linux_amd64.zip
RUN unzip terraform_0.11.7_linux_amd64.zip
RUN mv packer /usr/local/bin/packer
RUN mv terraform /usr/local/bin/terraform
RUN rm packer_1.2.4_linux_amd64.zip
RUN rm terraform_0.11.7_linux_amd64.zip
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh
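If you build your own agent image from this Dockerfile, tag and push it under your own registry namespace (reusing the yourdockerhub placeholder from above):
docker build -t yourdockerhub/agent-image:latest .
docker push yourdockerhub/agent-image:latest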
The Jenkins job pulls the Docker agent image and runs its commands inside it. The pipeline covers three stages to create the AWS environment: checking out the SCM repository, building the Packer AMI, and deploying to AWS.
Jenkinsfile
pipeline {
    agent {
        docker {
            image 'yourdockerhub/agent-image:latest'
        }
    }
    stages {
        stage('Create Packer AMI') {
            steps {
                withCredentials([
                    usernamePassword(credentialsId: 'awsCred', passwordVariable: 'AWS_SECRET', usernameVariable: 'AWS_KEY')
                ]) {
                    sh 'packer build -debug -var aws_access_key=${AWS_KEY} -var aws_secret_key=${AWS_SECRET} packer/packer.json'
                }
            }
        }
        stage('AWS Deployment') {
            steps {
                withCredentials([
                    usernamePassword(credentialsId: 'awsCred', passwordVariable: 'AWS_SECRET', usernameVariable: 'AWS_KEY'),
                    usernamePassword(credentialsId: 'repoCred', passwordVariable: 'REPO_PASS', usernameVariable: 'REPO_USER')
                ]) {
                    sh 'rm -rf repository'
                    sh 'git clone https://github.com/suhasulun/repository.git'
                    sh '''
                        cd repository
                        terraform init
                        terraform apply -auto-approve -var access_key=${AWS_KEY} -var secret_key=${AWS_SECRET}
                        git add terraform.tfstate
                        git -c user.name="Suha Sulun" -c user.email="suha.sulun@test.com" commit -m "terraform state update from Jenkins"
                        git push https://${REPO_USER}:${REPO_PASS}@github.com/suhasulun/repository.git master
                    '''
                }
            }
        }
    }
}
Create an EC2 AMI in AWS with Packer; create the ELB, ASG, Launch Configuration, and Security Groups across Availability Zones with Terraform; and commit the AWS environment state to the repository repo.
Packer (awsCred should be set properly in Jenkins Credentials)
Provisions a Debian machine in the eu-west-2 (London) region and performs the tasks below on the AMI:
- Install a suitable Docker version for the OS
- Install Ansible
- Download the Spring Boot artifact
- Build the Docker image and tag it as springboot/app:latest
- Run the latest springboot/app image on the machine
- Run ansible-playbook (install git, nginx) and test that the Docker image is running
- Save the AMI in AWS under a name like prod-image*
Packer.json
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'admin' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
      "inline": [
        "sleep 30",
        "apt-add-repository ppa:ansible/ansible -y",
        "/usr/bin/apt-get update",
        "/usr/bin/apt-get -y install ansible",
        "mkdir /home/debian/app",
        "chown admin:admin /home/debian/app"
      ]
    },
    {
      "_comment": "Install docker",
      "type": "shell",
      "script": "install.sh",
      "pause_before": "5s"
    },
    {
      "_comment": "Build latest docker image for application",
      "type": "shell",
      "script": "packer/app.sh",
      "pause_before": "5s"
    },
    {
      "type": "file",
      "source": ".",
      "destination": "/home/debian/app/"
    },
    {
      "_comment": "Run playbook for deployment steps",
      "type": "ansible-local",
      "playbook_file": "packer/ansible-playbook.yml"
    }
  ],
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "eu-west-2",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "debian/images/*debian-stretch-9.5-*",
        "root-device-type": "ebs"
      },
      "owners": ["379101102735"],
      "most_recent": true
    },
    "tags": {
      "OS_Version": "Debian",
      "Release": "Latest",
      "Runner": "EC2",
      "Name": "Packer Baked AMI"
    },
    "instance_type": "t2.small",
    "ssh_username": "admin",
    "ami_name": "prod-image {{timestamp}}",
    "launch_block_device_mappings": [{
      "device_name": "/dev/sda1",
      "volume_size": 8,
      "volume_type": "gp2",
      "delete_on_termination": true
    }]
  }]
}
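Before handing the template to Jenkins, you can sanity-check it locally with Packer's built-in validator, using the same path the pipeline uses:
packer validate packer/packer.json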
Terraform (awsCred should be set properly in Jenkins Credentials)
Create the AWS resources and deployment:
- Provisions the latest prod-image* AMI
- Applies the Launch Configuration settings with the given Security Group port settings
- Applies the Auto Scaling Group definition (min: 2, max: 5 instances)
- Attaches the instances behind an Elastic Load Balancer
- Spreads instances across all Availability Zones
- Commits the environment state to the https://github.com/suhasulun/repository repo
- Completes the deployment
- Sets the lifecycle hook with create_before_destroy enabled
You may choose to read variables from a local .tfvars file (kept out of version control, of course), or use Vault for variable and credential retrieval:
terraform apply \
-var-file="secret.tfvars" \
-var-file="production.tfvars"
Terraformfile
variable "access_key" {}
variable "secret_key" {}provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "eu-west-2"
}
data "aws_ami" "sp_app_ami" {
most_recent = truefilter {
name = "name"
values = ["prod-image*"]
}filter {
name = "virtualization-type"
values = ["hvm"]
}owners = ["684695514035"]
}resource "aws_launch_configuration" "sp_app_lc" {
image_id = "${data.aws_ami.sp_app_ami.id}"
instance_type = "t2.micro"
security_groups = ["${aws_security_group.sp_app_websg.id}"]
lifecycle {
create_before_destroy = true
}
}resource "aws_autoscaling_group" "sp_app_asg" {
name = "terraform-asg-springboot-app-${aws_launch_configuration.sp_app_lc.name}"
launch_configuration = "${aws_launch_configuration.sp_app_lc.name}"
availability_zones = ["${data.aws_availability_zones.allzones.names}"]
min_size = 2
max_size = 5load_balancers = ["${aws_elb.elb1.id}"]
health_check_type = "ELB"lifecycle {
create_before_destroy = true
}
}resource "aws_security_group" "sp_app_websg" {
name = "security_group_for_sp_app_websg"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}lifecycle {
create_before_destroy = true
}
}resource "aws_security_group" "elbsg" {
name = "security_group_for_elb"
ingress {
from_port = 80
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}lifecycle {
create_before_destroy = true
}
}data "aws_availability_zones" "allzones" {}resource "aws_elb" "elb1" {
name = "terraform-elb-springboot-app"
availability_zones = ["${data.aws_availability_zones.allzones.names}"]
security_groups = ["${aws_security_group.elbsg.id}"]
listener {
instance_port = 8080
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:8080/"
interval = 30
}cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400tags {
Name = "terraform - elb - springboot-app"
}
}
Finally, check that your infra is running and load balanced by hitting the ELB's DNS name (A record).
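For example (the DNS name below is a placeholder; take the real one from the EC2 > Load Balancers page in the AWS console):
curl -I http://terraform-elb-springboot-app-XXXXXXXXXX.eu-west-2.elb.amazonaws.com/
A 200 response through the port 80 listener means the Spring Boot app on port 8080 is healthy behind the ELB.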