Terraform, AWS, and Cloud Automation.🐱‍🏍

Vinuja Khatode
10 min read · Jun 13, 2020


Considering you are here, you probably already have some idea about Terraform and its uses. So, if you want to build something good using it, you’ve landed on the right page.😉

It is always good to begin with the basics.😬

Terraform is an intelligent tool for building, changing, and versioning infrastructure safely and efficiently. The simplest explanation of Terraform can be seen in the image below: we just write the Infrastructure as Code (IaC) in configuration files with a .tf extension, and the whole infrastructure is built in just one click!

For this practical, we will need a Terraform setup, an account on the AWS Console, an AWS CLI setup, and a little curiosity to learn something new.😄 Here we are launching infrastructure to deploy a website.

Automating the Infrastructure setup using Terraform and AWS Cloud.

Here is the detailed problem statement:

  1. Create a key pair and a security group that allows ports 80 and 22.
  2. Launch an EC2 instance. In this EC2 instance, use the key and security group created above.
  3. Launch one EBS volume and mount that volume onto /var/www/html.
  4. Copy the GitHub repo code that is uploaded by the developer into /var/www/html.
  5. Create an S3 bucket, copy the images from the GitHub repo into this S3 bucket, and make them publicly readable.
  6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

  • The first thing to do is to download the AWS CLI and finish the installation. Once that is done, go to the AWS Console, open IAM under Services, and add a new user. After adding the user, we get access credentials; save that CSV file of credentials. Now, in the command prompt, write the commands as below (put “json” as the output format):
configure profile
list profiles
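A minimal sketch of those two commands, assuming AWS CLI v2 (list-profiles is v2-only) and the profile name VinujaKhatode that is used later in the provider block; replace the placeholder values with the credentials from your CSV file:

aws configure --profile VinujaKhatode
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: ap-south-1
Default output format [None]: json

aws configure list-profiles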
  • Done with the profile configuration!!
  • Now, download Terraform and extract it into a folder. Copy the path of the folder where you extracted it and add that path to your system’s environment variables.

That's all for the Terraform setup. For confirmation, run this command in the command prompt.
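The confirmation is simply the version check; if the setup is correct, it prints the installed Terraform version:

terraform -version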

Yesss, your Terraform setup is ready!!!

Time to do some serious task now. 😁

Create a workspace for this practical. For reference see the above picture. There you can see the Task1 folder. That is the folder where we will be saving all the files and doing all the work!

Yeah I know that's not the serious work,😂 It starts now👇

  • cd into your workspace and open the configuration file in Notepad:
notepad Task1.tf
  • Here Task1.tf is the configuration file in which we will write the code.
  • First, we have to tell Terraform that we are using the AWS provider. The way to do this is:
provider "aws"{
region="ap-south-1"
profile = "VinujaKhatode"
}
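Optionally (this is not part of the original code), you can also pin a minimum Terraform version in the same file so the configuration fails early on an incompatible CLI:

terraform {
  required_version = ">= 0.12"
}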
  • So, the next step is to generate a key pair and create a security group using Terraform. The following code can be used for this:
// Creating key
resource "tls_private_key" "tls_key" {
  algorithm = "RSA"
}

// Generating key pair
resource "aws_key_pair" "generated_key" {
  key_name   = "Task1keyvin"
  public_key = "${tls_private_key.tls_key.public_key_openssh}"

  depends_on = [
    tls_private_key.tls_key
  ]
}

// Saving private key PEM file
resource "local_file" "key-file" {
  content  = "${tls_private_key.tls_key.private_key_pem}"
  filename = "Task1keyvin.pem"

  depends_on = [
    tls_private_key.tls_key,
    aws_key_pair.generated_key
  ]
}
  • The above code will generate the key named Task1keyvin.pem in the same folder.
  • In order to execute and validate, write the code up to the instance launch before applying, so that you don’t run into key-generation errors.
  • Let's move forward to the security group. Here we will allow the SSH port (22), the HTTP port (80), and port 8080 (labelled Localhost). We name this security group Task1sec.
// Creating Security Group
resource "aws_security_group" "Task1sec" {
  name        = "Task1sec"
  description = "Security Group for Task1 SSH and HTTPD"

  // Adding Rules to Security Group
  ingress {
    description = "SSH Port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "HTTP Port"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Localhost"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "Task1sec"
  }
}
  • After configuring the security group, the next step is to launch the EC2 instance using the above-created security group and key. We have to give the AMI ID and the instance type to launch the instance.
// Creating instance with the above created key and security group
resource "aws_instance" "Task1instance" {
  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  key_name        = "${aws_key_pair.generated_key.key_name}"
  security_groups = ["${aws_security_group.Task1sec.name}"]

  tags = {
    Name = "Task1instance"
  }
}
  • If we run this tf file now, this code will launch the instance named Task1instance with the private key Task1keyvin and Task1sec as a Security group.
  • So to run this code, Save the Task1.tf file and Open Command Prompt and give commands as follow:
terraform init
  • Run this command whenever you start working in a new directory or add new providers or modules; it downloads and installs all the required plugins. After the successful completion of the above command, the next command is:
terraform apply -auto-approve
  • This command will apply all the code written inside the .tf file. The -auto-approve flag is optional; it skips the interactive approval prompt so the changes are applied without asking for confirmation.
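If you prefer to review the changes before approving them, you can also do a dry run first:

terraform plan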
Instance, key, and security group created
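As an optional sanity check (not required for the automation), you can SSH into the new instance with the generated key; here <instance-public-ip> is a placeholder for the public IP shown in the AWS Console:

ssh -i Task1keyvin.pem ec2-user@<instance-public-ip>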

Hope you are doing good till now! ✨

Okay, now moving towards the main steps.😵

  • In this step, we have to launch one volume and mount it onto the /var/www/html folder. Here we will create a volume of 1 GiB.
// Creating new EBS volume and attaching it to the above created instance
resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.Task1instance.availability_zone
  size              = 1

  tags = {
    Name = "Task1ebs"
  }
}

// Attaching the volume created above
resource "aws_volume_attachment" "ebs_attach" {
  device_name  = "/dev/sdh"
  volume_id    = "${aws_ebs_volume.ebs1.id}"
  instance_id  = "${aws_instance.Task1instance.id}"
  force_detach = true
}

// Outputting the public IP of the instance
output "myos_ip" {
  value = aws_instance.Task1instance.public_ip
}

Okay, so in order to use and mount the volume in the instance, we have to log in to the EC2 instance remotely and then format and mount the volume. Note that although the volume is attached as /dev/sdh, inside the OS it shows up as /dev/xvdh, which is the device name used below.

Also, as we are setting up a web server, we need the httpd server running in the instance, and some HTML files to host/deploy on it. I have a website that I downloaded from the internet and uploaded to GitHub; we will pull those files and copy them into the /var/www/html folder.

My GitHub repo link: https://github.com/vinujakhatode/Webserver-Terraform-AWS.git (the same URL is cloned in the code below).

Add the below code to your configuration file:

// To use the volume, it must be formatted and mounted
resource "null_resource" "nullremote" {
  depends_on = [
    aws_volume_attachment.ebs_attach,
    aws_security_group.Task1sec,
    aws_key_pair.generated_key
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/vinuja khatode/VWorkspace/Terraform/Task1/Task1keyvin.pem")
    host        = aws_instance.Task1instance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo service httpd start",
      "sudo chkconfig httpd on",
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/vinujakhatode/Webserver-Terraform-AWS.git /var/www/html/"
    ]
  }
}
  • We used a connection block to log in remotely and a provisioner block to execute commands inside the instance. The above code will format the volume, mount it onto /var/www/html, and clone the GitHub repository there. It will also install the httpd, PHP, and git packages.
  • A note about the git clone command: it only clones into an empty directory, so to make our operation succeed we forcefully remove the contents of the html folder first.

Till now, you can launch an instance with web hosting environment and volume attached to it in just one click!✨

  • So the next step is to create an S3 Bucket:
// Creating S3 bucket
resource "aws_s3_bucket" "task1bucketvinuja00vinuja00" {
  bucket = "task1bucketvinuja00vinuja00"
  acl    = "private"

  tags = {
    Name = "task1bucketvinuja00vinuja00"
  }
}

// Blocking direct public access (objects will be served through CloudFront)
resource "aws_s3_bucket_public_access_block" "S3PublicAccess" {
  bucket                  = "${aws_s3_bucket.task1bucketvinuja00vinuja00.id}"
  block_public_acls       = true
  block_public_policy     = true
  restrict_public_buckets = true
}

This script will create the S3 bucket named task1bucketvinuja00vinuja00. Make sure whatever name you choose is in lowercase letters and globally unique, i.e. no other bucket may exist with the same name. Here we are also configuring the bucket’s public access settings: direct public access is blocked, and the objects will be served through CloudFront instead.

Now we have to fetch the images from S3 using CloudFront. For that, we first have to put the images into S3.

// Uploading files to the S3 bucket
resource "aws_s3_bucket_object" "bucketObject" {
  for_each     = fileset("C:/Users/vinuja khatode/Desktop/girly/assets", "**/*.jpg")
  bucket       = "${aws_s3_bucket.task1bucketvinuja00vinuja00.bucket}"
  key          = each.value
  source       = "C:/Users/vinuja khatode/Desktop/girly/assets/${each.value}"
  content_type = "image/jpeg"
}

The images have been uploaded to the S3 bucket.
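If you want to double-check the upload from the command line, a quick listing with the profile we configured earlier should show the .jpg files:

aws s3 ls s3://task1bucketvinuja00vinuja00 --recursive --profile VinujaKhatode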

// Creating CloudFront to access images from S3
locals {
  s3_origin_id = "S3Origin"
}

// Creating Origin Access Identity for CloudFront
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "task1bucketvinuja00vinuja00"
}

resource "aws_cloudfront_distribution" "Task1CF" {
  origin {
    domain_name = "${aws_s3_bucket.task1bucketvinuja00vinuja00.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"

    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
    }
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "accessforTask1"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  // Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  // Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

  tags = {
    Name        = "Task1CFDistribution"
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true

  depends_on = [
    aws_s3_bucket.task1bucketvinuja00vinuja00
  ]
}

This script will create a CloudFront distribution using the S3 bucket. In this bucket, we have stored all of the assets of our site, like images, icons, etc. The CloudFront distribution will provide us with a URL; by using this URL, we can access the objects inside the bucket.
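As a small optional addition (not in the original code), a Terraform output can print the distribution's domain name so you can copy it straight into the asset URLs in /var/www/html:

// Printing the CloudFront domain name (e.g. dxxxxxxxxxxxx.cloudfront.net)
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.Task1CF.domain_name
}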

Here we require that whenever the infrastructure is destroyed, the CloudFront distribution should not get destroyed, because if we created a new distribution each time, we would have to change the asset URLs every time. To overcome this problem we set retain_on_delete to true. This disables the distribution instead of deleting it when the resource is destroyed through Terraform.

WOAH! Too much huh? Just a little more to go!🤘

  • Creating Bucket Policy for CloudFront.
// AWS Bucket Policy for CloudFront
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.task1bucketvinuja00vinuja00.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.task1bucketvinuja00vinuja00.arn}"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
}

resource "aws_s3_bucket_policy" "s3BucketPolicy" {
  bucket = "${aws_s3_bucket.task1bucketvinuja00vinuja00.id}"
  policy = "${data.aws_iam_policy_document.s3_policy.json}"
}

This will create a bucket policy that grants the CloudFront origin access identity read access to the bucket.

So, FINALLY, Our code is ready to launch the whole infrastructure in just one click!!!

Why wait then?!🤔

Perform terraform init and terraform apply!!

Let us see the outputs now:

terraform init
terraform apply
S3 bucket
CloudFront

Now, check the instance's public IP to verify whether the website is hosted or not!
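You can check it in a browser or from the command line; here <instance-public-ip> is the IP printed by the myos_ip output:

curl http://<instance-public-ip>/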

This website is hosted properly and working right!
Inside html folder of instance

Congratulations if your output is similar to the above!🎇

This automation is powerful enough to launch the whole infrastructure in a single click.

Also destroying this infrastructure is pretty easy. A single command and our whole infrastructure will be destroyed permanently.

terraform destroy
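As with apply, you can skip the confirmation prompt here too if you want:

terraform destroy -auto-approve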

If you have hands-on experience with Jenkins and Ansible, you can use them to pull the code from GitHub and extend the above code for many more purposes. That would be

DevOps + Cloud = CloudOps

But for now,👇

In this whole process, you may get many errors, but try to solve them, try to debug on your own with patience. In that way, you learn more. Patience is the key. You just have to keep calm and keep doing the work.✨

Okay, Thanks for staying till the end! I hope you learned some new things. Keep Learning! 🤘

I hope this article helped. You can show your appreciation by giving it a clap.

You can connect with me on LinkedIn.
