Creating CloudFront Distribution in AWS using Terraform
CloudFront is a widely used content delivery service provided by AWS. CloudFront uses edge locations to serve static website data from locations close to the user. To explain this further, consider an example. Suppose you launch your website using EC2. Normally you would store the website code as well as the website data (i.e. images, videos, etc.) on the same EC2 machine. But this setup is not well optimized and latency is high, because an EC2 instance lives in a single region and cannot by itself make use of edge locations. What we can do instead is create an S3 bucket, store all the website data in it, and put CloudFront in front of that bucket. CloudFront then caches the data at edge locations, so the website data loads fast with reduced latency.
We will create a setup along these lines, and we will use Terraform to do so. Terraform automates the provisioning of the AWS resources.
A brief overview of what this article achieves:
- Create the key and a security group which allows traffic on port 80.
- Launch an EC2 instance.
- In this EC2 instance, use the key and security group created in step 1.
- Launch one EBS volume and mount it onto /var/www/html.
- The developer has uploaded the code into a GitHub repo; the repo also contains some images.
- Copy the GitHub repo code into /var/www/html.
- Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to publicly readable.
- Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
In the following steps I created my own security group. Then I created an EC2 instance and installed the httpd server on it; this EC2 instance acts as our web server. I also created an EBS volume and mounted the /var/www/html folder onto it. Mounting ensures that even if our server (i.e. the EC2 instance) fails, the website code is safe. We then clone our GitHub code into the /var/www/html folder.
Specifying the provider
The provider in our case is AWS. We also specify the region; ap-south-1 is Mumbai.
provider "aws" {
  region = "ap-south-1"
}
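Depending on your Terraform version, it can also help to pin the AWS provider version so the code keeps working as the provider evolves. A minimal sketch, assuming Terraform 0.13 or later (the version constraint here is an illustrative assumption, not part of the original setup):

```hcl
# Illustrative version pinning (assumed, not from the original setup).
# The syntax used in this article (aws_s3_bucket_object, inline acl and
# versioning arguments) matches the older 2.x-era AWS provider.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"
    }
  }
}
```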
Creating a security group allowing traffic to port number 80
The following code creates a security group that allows inbound HTTP (port 80) and SSH (port 22) traffic.
resource "aws_security_group" "allow_tls" {
  name        = "Security_01"
  description = "Allow SSH & HTTP inbound traffic"

  ingress {
    description = "allowing HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "allowing SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Security_01"
  }
}

output "sec_op" {
  value = aws_security_group.allow_tls.name
}
Launching an EC2 instance using the security group we created
resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name      = "d"

  # referencing the security group resource (instead of the literal name
  # "Security_01") lets Terraform infer the correct creation order
  security_groups = [aws_security_group.allow_tls.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/asus/Desktop/d.pem")
    # inside a resource's own connection block, "self" must be used;
    # referencing aws_instance.web here would create a dependency cycle
    host = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "project_os"
  }
}
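To make it easy to reach the web server after terraform apply, an output for the instance's public IP can be added. A small optional sketch (the output name instance_ip is my own choice, not from the original code):

```hcl
# Optional: print the web server's public IP after `terraform apply`
# (the output name "instance_ip" is an assumption for illustration)
output "instance_ip" {
  value = aws_instance.web.public_ip
}
```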
Creating an EBS volume and attaching it to the EC2 instance
resource "aws_ebs_volume" "example" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "project_vol"
  }
}

output "ebs_vol" {
  value = aws_ebs_volume.example
}

# attaching the volume
# note: a volume attached as /dev/sdh typically shows up inside the
# Amazon Linux OS as /dev/xvdh, which is why later steps format /dev/xvdh
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.example.id
  instance_id  = aws_instance.web.id
  force_detach = true
}
Configuring the EBS volume and cloning from GitHub into /var/www/html
resource "null_resource" "remote_01" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/asus/Desktop/d.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/web-crawler/terra.git /var/www/html",
    ]
  }
}
Now, in the following, I will show how I created the S3 bucket and the CloudFront distribution.
Creating S3 and uploading additional website data into it
In this case the website data is a local image which I upload to S3.
resource "aws_s3_bucket" "bucket1w2w2w2w2w2" {
  acl = "public-read"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "bucket1" {
  bucket = aws_s3_bucket.bucket1w2w2w2w2w2.bucket
  key    = "mypic"
  acl    = "public-read"
  source = "C:/Users/asus/Desktop/web.png"
  etag   = filemd5("C:/Users/asus/Desktop/web.png")

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}
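Since the object is uploaded with a public-read ACL, it can be fetched directly from S3 even before CloudFront exists, which is handy for verifying the upload. A sketch of an output that builds that URL (the output name image_s3_url is my own, not from the original code):

```hcl
# Optional: direct S3 URL of the uploaded image, useful to verify the
# upload in a browser (output name "image_s3_url" is an assumption)
output "image_s3_url" {
  value = "https://${aws_s3_bucket.bucket1w2w2w2w2w2.bucket_regional_domain_name}/${aws_s3_bucket_object.bucket1.key}"
}
```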
Creating CloudFront Distribution for our S3 bucket
resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    null_resource.remote_01,
  ]

  origin {
    domain_name = aws_s3_bucket.bucket1w2w2w2w2w2.bucket_regional_domain_name
    origin_id   = "my_first_origin"
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "mypic"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "my_first_origin"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "my_first_origin"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "allow-all"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "my_first_origin"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "IN"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
Updating the CloudFront URL in our web server
Our web server is hosted on EC2, so I append the CloudFront URL to our main code in /var/www/html. Note that the connection and provisioner blocks below still sit inside the aws_cloudfront_distribution resource above (it is closed only at the end of this snippet), which is why self.domain_name resolves to the distribution's domain name.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/asus/Desktop/d.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<center><img src='http://${self.domain_name}/${aws_s3_bucket_object.bucket1.key}' height='400px' width='400px'></center>\" >> /var/www/html/index.php",
      "EOF",
    ]
  }
}
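If you also want the CloudFront URL on the command line after apply (for example, to test the image directly in a browser), an output can expose the distribution's domain name. A small sketch (the output name cloudfront_domain is my own choice):

```hcl
# Optional: print the CloudFront domain name after `terraform apply`
# (output name "cloudfront_domain" is an assumption for illustration)
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```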
Running the code
Save this file with the .tf extension, as it is a Terraform file. Now we will run this code from the command prompt.
terraform init
terraform apply -auto-approve
Now the whole setup will be created from this single file.
We can also delete the whole setup with a single command.
terraform destroy
Snaps of the AWS services created by Terraform
The task above automates the process of setting up the web server along with the CloudFront distribution. Any additional views and improvements are welcome.