What is Terraform?
Terraform is an open-source “Infrastructure as Code” tool created by HashiCorp. As a declarative tool, Terraform lets developers use a high-level configuration language called HCL (HashiCorp Configuration Language) to describe the desired “end state” of the cloud or on-premises infrastructure for running an application. Terraform then generates a plan for reaching that end state and executes the plan to provision the infrastructure.
Why use Terraform when there are other ways to do this?
There are a few key reasons developers choose to use Terraform over other Infrastructure as Code tools:
- Open source: Terraform is backed by large communities of contributors who build plugins to the platform. Regardless of which cloud provider you use, it’s easy to find plugins, extensions, and professional support. This also means Terraform evolves quickly, with new benefits and improvements added consistently.
- Platform agnostic: Terraform works with any cloud service provider, whereas most other IaC tools are designed around a single provider.
- Immutable infrastructure: Most Infrastructure as Code tools manage mutable infrastructure, meaning existing servers are modified in place to accommodate changes such as a middleware upgrade or a new storage server. Terraform instead favors immutable infrastructure: changes are applied by replacing resources with new ones that match the desired configuration, which helps avoid configuration drift.
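To make the declarative style concrete, here is a minimal sketch of an HCL configuration (the resource name and AMI ID are illustrative, not part of this task):

```hcl
# Declare the provider: which cloud and region Terraform talks to.
provider "aws" {
  region = "ap-south-1"
}

# Describe the desired end state: one EC2 instance of this type.
# Terraform computes the plan and applies it to make reality
# match this description.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t2.micro"
}
```

Running `terraform init`, then `terraform plan`, then `terraform apply` in the directory containing this file is the standard workflow.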
In this task, we are using EFS for storage instead of EBS. So what is EFS, and why are we using it?
Amazon EFS provides file storage for your Amazon EC2 instances. With Amazon EFS, you can create a file system, mount the file system on your EC2 instances, and then read and write data from your EC2 instances to and from your file system.
Now, how does EFS work? In short: you create a file system, create mount targets inside your VPC subnets, and then mount the file system over NFS from any EC2 instance in those subnets.
Let’s start with the task now:
1. Create a security group that allows port 80 (and port 22 for SSH).
2. Launch an EC2 instance.
3. For this EC2 instance, use an existing or provided key along with the security group created in step 1.
4. Create a volume using the EFS service, attach it in your VPC, and mount it at /var/www/html.
5. The developer has uploaded the code into a GitHub repo; the repo also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
1. Declare the provider
provider "aws" {
  region  = "ap-south-1"
  profile = "ankita127singh"
}
This block gives Terraform access to my AWS account services, using the credentials saved in the local AWS profile named “ankita127singh”.
2. Create a key pair
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAwFFgBW4DqK7RatWO7qX8sALDSnLZ/aGGUNeHcguZVALJHVmyKzKDe4R9aEgQvHleQLGffD3YWtMBVJiBPIdMg/HFze9tiEO5ALoZH4UoCvLGW0zXtTHfJDEYK0pmIxp19XhbzKCOkVUcbDtuIFluDg1Rk6ADR9/j6Gcr8Z3nz6+6DfXxWyl9Igu/Bct3S73ZkjNvVAiN0w6d9M3n4VX7CewaftJzCiYJ/c7mpnpod+6cfNumGV0KTkLoMlnCa98eCyp4m4gAi6BjpUHJJnIJflP1m7M0+YwiZrvUEfjU4qJIEgCvt9HW4ja3BijNqgP5i/ywO4c2c15zz3XWqiyqMw== rsa-key-20200614"
}
In the above code, a key pair named “mykey” is created from an existing public key. This key pair will be used to launch our AWS instance.
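As an alternative to pasting the public key inline, the key material can be read from a file with Terraform’s `file()` function (the path below is an assumption; point it at your own public key):

```hcl
resource "aws_key_pair" "mykey" {
  key_name = "mykey"
  # Read the public key from disk instead of hardcoding it.
  public_key = file("~/.ssh/id_rsa.pub")
}
```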
3. Create a security group
resource "aws_security_group" "sg" {
  name        = "sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-6de2ff05"
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "sg"
  }
}
In the above code, a security group named “sg” is created. It has inbound rules allowing SSH (port 22) for remote connection and HTTP (port 80) for reaching the web server, and an egress rule allowing all outbound traffic.
4. Launch and mount EFS
resource "aws_efs_file_system" "efs1" {
  depends_on = [
    aws_security_group.sg,
  ]
  creation_token = "TEFS_1"
  tags = {
    Name = "TEFS_1"
  }
}

resource "aws_efs_mount_target" "mt1" {
  depends_on = [
    aws_efs_file_system.efs1,
  ]
  file_system_id  = aws_efs_file_system.efs1.id
  subnet_id       = "subnet-0258c7030e68c2273"
  security_groups = [aws_security_group.sg.id]
}
To create an EFS file system we use the “aws_efs_file_system” resource. A depends_on block is included so that the file system is created only after the security group, keeping all the blocks in the proper order of execution. Once the file system exists, it must be exposed through a mount target. For that we use the “aws_efs_mount_target” resource, which depends on the file system and specifies the subnet and security group for the mount target.
5. Create an S3 bucket and update its policies
resource "aws_s3_bucket" "sb4" {
  bucket = "s4bucket918-72"
  acl    = "private"
  tags = {
    Name        = "Terra-bucket"
    Environment = "Dev"
  }
}

resource "aws_s3_bucket_public_access_block" "sbb" {
  depends_on = [
    aws_s3_bucket.sb4,
  ]
  bucket                  = aws_s3_bucket.sb4.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
locals {
  s3_origin_id = aws_s3_bucket.sb4.bucket
}

resource "aws_s3_bucket_object" "sbo" {
  depends_on = [
    aws_s3_bucket_policy.sbps,
  ]
  bucket              = aws_s3_bucket.sb4.id
  key                 = "s3upload2.jpg"
  source              = "/Users/KIIT/Desktop/image.jpg"
  content_type        = "image/jpeg"
  content_disposition = "inline"
}

resource "aws_s3_bucket_policy" "sbps" {
  depends_on = [
    aws_s3_bucket_public_access_block.sbb,
  ]
  bucket = aws_s3_bucket.sb4.id
  policy = <<EOF
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3NRGG0HS1Z25F"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${aws_s3_bucket.sb4.bucket}/*"
    }
  ]
}
EOF
}
The S3 bucket is created and the image is deployed into it by providing its source location on disk. The public access block then makes the bucket private so that only CloudFront can reach it. Finally, the bucket policy is updated to state that the CloudFront origin access identity is allowed to read the bucket’s objects.
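The OAI ID in the policy above is hardcoded from the author’s account. As an alternative sketch, Terraform can create the origin access identity itself so that no ID is copied by hand (the resource name “oai” is illustrative):

```hcl
# Create the origin access identity in Terraform instead of
# reusing a pre-existing one from the AWS console.
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for s4bucket918-72"
}
```

The bucket policy can then use `aws_cloudfront_origin_access_identity.oai.iam_arn` as the principal, and the distribution’s `s3_origin_config` can use `aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path`.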
6. Create a CloudFront distribution
resource "aws_cloudfront_distribution" "sbc" {
  depends_on = [
    aws_s3_bucket_object.sbo,
  ]
  origin {
    domain_name = "${aws_s3_bucket.sb4.bucket}.s3.amazonaws.com"
    origin_id   = aws_s3_bucket.sb4.bucket
    s3_origin_config {
      origin_access_identity = "origin-access-identity/cloudfront/ETBJK5YQVCXVO"
    }
  }
  enabled             = true
  is_ipv6_enabled     = true
  comment             = ""
  default_root_object = ""
  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.sb4.bucket
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
  }
  price_class = "PriceClass_All"
  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }
  tags = {
    Environment = "TerraCloud"
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
To create the CloudFront distribution we use the aws_cloudfront_distribution resource. Inside it, the origin block identifies the bucket the objects are served from, and max_ttl sets the maximum time our objects may be cached at the edge locations.
7. Create an instance
resource "aws_instance" "i2" {
  depends_on = [
    aws_efs_mount_target.mt1,
    aws_cloudfront_distribution.sbc,
  ]
  ami               = "ami-005956c5f0f757d37"
  instance_type     = "t2.micro"
  key_name          = aws_key_pair.mykey.key_name
  security_groups   = [aws_security_group.sg.name]
  availability_zone = "ap-south-1a"
  user_data         = <<-EOF
    #!/bin/bash
    sudo yum install -y amazon-efs-utils
    sudo yum install -y nfs-utils
    file_system_id_1="${aws_efs_file_system.efs1.id}"
    mkdir -p /var/www/html
    mount -t efs $file_system_id_1:/ /var/www/html
    echo "$file_system_id_1:/ /var/www/html efs defaults,_netdev 0 0" >> /etc/fstab
  EOF
  tags = {
    Name = "Terraos_2"
  }
}
In this block we give all the information about the instance: its AMI, instance type, the key pair and security group created in the earlier steps, and its availability zone. The user_data script runs under /bin/bash, the most common default login shell on Linux systems, and executes on the first boot of the instance: it installs the EFS utilities, mounts the EFS file system at /var/www/html, and adds an entry to /etc/fstab so the mount persists across reboots without us repeating the commands by hand.
8. Deploy our code from GitHub
resource "null_resource" "nullresource" {
  depends_on = [
    aws_instance.i2,
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/KIIT/Desktop/mykey.pem")
    host        = aws_instance.i2.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/vishal_jha/lwcloudtask /var/www/html",
      "sudo sed -i 's/old_domain/${aws_cloudfront_distribution.sbc.domain_name}/g' /var/www/html/file.html"
    ]
  }
}
In the above code, Terraform connects to the instance over SSH, installs and enables the Apache web server, and clones the code from the Git repository into /var/www/html (which is backed by the EFS volume mounted earlier). Finally, sed replaces the old domain in file.html with the CloudFront distribution’s domain name, so the images are served through CloudFront.
9. Connect to the web server
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullresource,
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.i2.public_ip}/file.html"
  }
}
Lastly, we add a resource that opens our web server automatically as soon as all the other resources have been created successfully. Here, we use Chrome as the browser and the instance’s public IP to open the page.
THE WEBSITE IS FINALLY LAUNCHED USING THE AUTOMATION THROUGH TERRAFORM
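As a small optional addition (not part of the original task), output blocks can echo the useful endpoints after `terraform apply` finishes, instead of relying on the browser step:

```hcl
# Print the endpoints after "terraform apply" completes.
output "instance_public_ip" {
  value = aws_instance.i2.public_ip
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.sbc.domain_name
}
```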