Designing High Availability Architecture with S3 and CloudFront

Sriw World of Coding · Published in Analytics Vidhya · 4 min read · Nov 3, 2020

AWS CloudFront

CloudFront is a CDN (Content Delivery Network). It retrieves content from an Amazon S3 bucket and distributes it through a worldwide network of data centers called edge locations. When a user requests data, the request is routed to the nearest edge location, resulting in low latency, reduced network traffic, and fast access to the data.

How Does AWS CloudFront Deliver Content?

AWS CloudFront delivers content in the following steps.

Step 1 The user accesses a website and requests an object to download, such as an image file.

Step 2 DNS routes the request to the nearest CloudFront edge location.

Step 3 At the edge location, CloudFront checks its cache for the requested file. If it is found, CloudFront returns it to the user; otherwise, CloudFront forwards the request to the origin (here, the S3 bucket), retrieves the object, and returns it to the user.

Step 4 The object is then kept in the edge cache for 24 hours, or for the duration specified in the file headers. This duration is known as the TTL (Time To Live).
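The cache behavior described in these steps can be observed from the command line: CloudFront reports whether an object was served from the edge cache or fetched from the origin in the X-Cache response header. A small sketch, assuming a hypothetical distribution domain (the helper function is just for illustrating how to read the header):

```shell
# Hypothetical CloudFront domain -- replace with your distribution's domain name
DIST="d111111abcdef8.cloudfront.net"

# Inspect the X-Cache header to see whether the edge served a cached copy:
#   curl -sI "https://$DIST/me1.jpeg" | grep -i '^x-cache'

# Helper that classifies the header value (pure text processing):
cache_status() {
  case "$1" in
    *Hit*)  echo "served from edge cache" ;;
    *Miss*) echo "fetched from S3 origin" ;;
    *)      echo "unknown" ;;
  esac
}

cache_status "x-cache: Miss from cloudfront"   # first request
cache_status "x-cache: Hit from cloudfront"    # repeat request within the TTL
```

The first request for an object typically reports a Miss; repeat requests within the TTL report a Hit.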

Prerequisites:

  • An AWS account.
  • AWS CLI installed.
  • AWS CLI configured with an IAM user.
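Configuring the CLI with an IAM user writes two small files under ~/.aws. A sketch of what `aws configure` produces (the key values and region below are placeholders, not real credentials):

```ini
# ~/.aws/credentials -- substitute your IAM user's access keys
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = wJalrEXAMPLESECRET

# ~/.aws/config
[default]
region = ap-south-1
output = json
```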

For more information on launching EC2 instances using the CLI and attaching an EBS volume, you can refer to my blog.

In that blog I used AWS Educate, where IAM and CloudFront are not available.

Problem Statement:

  • A web server configured on an EC2 instance
  • The document root (/var/www/html) made persistent by mounting it on an EBS block device
  • Static objects used in the code, such as pictures, stored in S3
  • A Content Delivery Network set up with CloudFront, using the S3 bucket as the origin domain
  • Finally, the CloudFront URL placed in the web app code for security and low latency

Launched an EC2 Instance

(Refer to my above blog)

Created an EBS Volume and attached to the running EC2 Instance

(Refer to my above blog)

Log into your EC2 Instance

  • Log in to your root account
sudo su - root
  • Install the httpd web server
  • Type fdisk -l to check that the 2 GB volume is attached
fdisk -l
  • Create a partition
  • Format the partition and mount it on the /var/www/html folder
  • Now you can see one more partition of 2 GB mounted on /var/www/html
  • Create an S3 bucket with a unique name
S3 bucket created
  • Copy the content to the S3 bucket with public-read access
me1.jpeg successfully copied
Public URL to access the image
  • I wrote a simple index.html inside the /var/www/html folder, containing a simple header and an image tag
  • Now copy the public URL of the EC2 instance and access the web page
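The steps above can be sketched as a single script. The device name (/dev/xvdf) and image file are assumptions to substitute with your own values; the bucket name matches the one used later with CloudFront. The instance-level commands are shown as comments, since they need root on the EC2 instance itself:

```shell
#!/bin/bash
# Sketch of the web server + EBS + S3 setup described above.

# 1. Install and start the Apache web server (Amazon Linux):
#    yum install -y httpd && systemctl enable --now httpd

# 2. Partition, format, and mount the 2 GB EBS volume on the document root
#    (fdisk answers: n = new partition, p = primary, defaults, w = write):
#    echo -e "n\np\n1\n\n\nw" | fdisk /dev/xvdf
#    mkfs.ext4 /dev/xvdf1
#    mount /dev/xvdf1 /var/www/html

# 3. Create the S3 bucket and copy the image with public-read access:
#    aws s3 mb s3://bucket-webserver-03091999
#    aws s3 cp me1.jpeg s3://bucket-webserver-03091999/ --acl public-read

# 4. Write a minimal index.html that embeds the S3 object
cat > index.html <<'EOF'
<html>
  <body>
    <h1>High Availability Architecture with S3 and CloudFront</h1>
    <img src="https://bucket-webserver-03091999.s3.amazonaws.com/me1.jpeg">
  </body>
</html>
EOF
```

On the instance, the index.html would be written to /var/www/html so it is served by httpd from the EBS-backed document root.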

Now set up the Content Delivery Network using CloudFront, with the S3 bucket as the origin domain.

  • Create a Content Delivery Network
aws cloudfront create-distribution --origin-domain-name <bucket name>.s3.amazonaws.com

For example:
aws cloudfront create-distribution --origin-domain-name bucket-webserver-03091999.s3.amazonaws.com
  • Go to the CloudFront section of the AWS Management Console, where you will see the Domain Name.
  • Copy the Domain Name and use it to replace the S3 bucket URL configured earlier.
  • Now access the image again
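The distribution's domain name can also be captured directly from the CLI and swapped into the page without visiting the console. A sketch, where the domain value is a placeholder standing in for whatever your distribution actually returns:

```shell
# Capture the domain name when creating the distribution:
# DOMAIN=$(aws cloudfront create-distribution \
#   --origin-domain-name bucket-webserver-03091999.s3.amazonaws.com \
#   --query 'Distribution.DomainName' --output text)
DOMAIN="d111111abcdef8.cloudfront.net"   # placeholder for the value returned above

# Point the page at CloudFront instead of the raw S3 URL:
echo '<img src="https://bucket-webserver-03091999.s3.amazonaws.com/me1.jpeg">' > index.html
sed -i "s|bucket-webserver-03091999.s3.amazonaws.com|$DOMAIN|" index.html
cat index.html
```

After the substitution, the image tag references the CloudFront domain, so requests are served from the nearest edge location rather than the bucket directly.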

I am excited to announce the launch of my new Udemy course, “Apache Airflow Bootcamp: Hands-On Workflow Automation.” This comprehensive course is designed to help you master the fundamentals and advanced concepts of Apache Airflow through practical, hands-on exercises.

You can enroll in the course using the following link: [Enroll in Apache Airflow Bootcamp](https://www.udemy.com/course/apache-airflow-bootcamp-hands-on-workflow-automation/?referralCode=F4A9110415714B18E7B5).

I would greatly appreciate it if you could take the time to review the course and share your feedback. Additionally, please consider sharing this course with your colleagues who may benefit from it.
