AWS Infrastructure with EFS using Terraform

Vedant Shrivastava
Published in Mozilla Club Bbsr · Aug 5, 2020

Introduction:

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed Elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications.

AWS EFS

EFS grows and shrinks automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that’s cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA. The EFS IA storage class costs only $0.025/GB-month*.
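On the Terraform side, Lifecycle Management is just a lifecycle_policy block on the file system resource. A minimal sketch (separate from the file system created later in this article), assuming a 30-day policy:

# Example only: an EFS file system with Lifecycle Management enabled,
# moving files into the IA class after 30 days without access.
resource "aws_efs_file_system" "efs_with_ia" {
  creation_token = "efs-with-ia"

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }
}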

While workload patterns vary, customers typically find that 80% of files are infrequently accessed (and suitable for EFS IA), and 20% are actively used (suitable for EFS Standard), resulting in an effective storage cost as low as $0.08/GB-month* (with the Standard class at roughly $0.30/GB-month, 0.8 × $0.025 + 0.2 × $0.30 ≈ $0.08). Amazon EFS transparently serves files from both storage classes in a common file system namespace.

Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
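In Terraform, the relevant knobs for this are the performance and throughput modes of the file system. A small illustrative sketch (not used later in this article):

# Illustrative only: an EFS file system tuned for highly parallel workloads
resource "aws_efs_file_system" "parallel_fs" {
  creation_token   = "parallel-fs"
  performance_mode = "maxIO"     # higher aggregate throughput/IOPS, at slightly higher per-operation latency
  throughput_mode  = "bursting"
}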

Setup:

  • Create the key and security group which allows port 80 (a key-pair sketch is shown right after this list).
  • Launch an EC2 instance.
  • In this EC2 instance, use the key and security group created in step 1.
  • Launch one volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.
  • The developer has uploaded the code into a GitHub repo, which also contains some images.
  • Copy the GitHub repo code into /var/www/html.
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
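The Terraform below assumes a key pair named mykey123 already exists in the account. If you want Terraform to create it as well, a minimal sketch could look like this (the resource names here are just placeholders):

# Hypothetical key-pair creation; the instance resource later simply references "mykey123"
resource "tls_private_key" "deployer_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deployer" {
  key_name   = "mykey123"
  public_key = tls_private_key.deployer_key.public_key_openssh
}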

Let's get started:

This Terraform code will create the full infrastructure, from the instance and VPC to the security groups and so on. EFS will also be created with Terraform and then mounted on the instance, so the data remains persistent: even if the instance fails, the data is never lost or corrupted.

provider "aws" {region = "ap-south-1"profile = "Vedant-S"}
resource "aws_vpc" "my_vpc" { cidr_block = "192.168.0.0/16" instance_tenancy = "default" enable_dns_hostnames = "true" tags = { Name = "myvpc2" }}
resource "aws_subnet" "vpc_subnet" { vpc_id = aws_vpc.my_vpc.id cidr_block = "192.168.0.0/24" availability_zone = "ap-south-1a" map_public_ip_on_launch = "true" tags = { Name = "subnet_1"
}
}
  • Creation of Security Groups.
//Creation of Security Groups
resource "aws_security_group" "security" {
  name        = "firewall-NFS"
  vpc_id      = aws_vpc.my_vpc.id
  description = "allow NFS"

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "firewall_nfs"
  }
}
resource "aws_efs_file_system" "new_efs" {creation_token = "efs"tags = {Name = "efs_vloume"}}resource "aws_efs_mount_target" "mount_EFS" {file_system_id = aws_efs_file_system.new_efs.idsubnet_id = aws_subnet.vpc_subnet.idsecurity_groups = [ aws_security_group.security.id ]}resource "aws_internet_gateway" "inter_gateway" {vpc_id = aws_vpc.my_vpc.idtags = {Name = "my_ig"
}
}
resource "aws_route_table" "rt_tb" {vpc_id = aws_vpc.my_vpc.idroute {

gateway_id = aws_internet_gateway.inter_gateway.id
cidr_block = "0.0.0.0/0"
}
tags = {Name = "myroute-table"
}
}
resource "aws_route_table_association" "rt_associate" {subnet_id = aws_subnet.vpc_subnet.idroute_table_id = aws_route_table.rt_tb.id
}
resource "aws_instance" "e2-instance" {depends_on = [ aws_efs_mount_target.mount_EFS ]ami = "ami-0447a12f28fddb066"instance_type = "t2.micro"key_name = "mykey123"subnet_id = aws_subnet.vpc_subnet.idvpc_security_group_ids = [ aws_security_group.security.id ]user_data = <<-EOF#! /bin/bashsudo su - rootsudo yum install httpd -ysudo service httpd startsudo service httpd enablesudo yum install git -ysudo yum install -y amazon-efs-utilssudo mount -t efs "${aws_efs_file_system.new_efs.id}":/ /var/www/htmlmkfs.ext4 /dev/sdfmount /dev/sdf /var/www/htmlcd /var/www/html

git clone https://github.com/Vedant-S/HybridCloud.git

EOF
}
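Once the instance is up, it helps to know where to point the browser. An optional output (not part of the original code) can print the public IP after the apply finishes:

# Optional: print the web server's public IP after terraform apply completes
output "instance_public_ip" {
  value = aws_instance.e2-instance.public_ip
}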
  • Now we will create an S3 bucket and a CodePipeline that pulls the data from the GitHub repository and deploys it directly into the S3 bucket.
resource "aws_s3_bucket" "b1" {
bucket = "svedant"
acl = "public-read"
tags = {
Name = "My bucket"
}
}
//Block Public Access
resource "aws_s3_bucket_public_access_block" "s3block" {bucket = aws_s3_bucket.b1.id
block_public_policy = true
}
locals {
s3_origin_id = "S3-${aws_s3_bucket.b1.bucket}"
}
//Creation Of CloudFront
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
comment = "bucket_kapil"
}
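The distribution below reads from the bucket through this origin access identity. For CloudFront to actually fetch the objects, the bucket also needs a policy that allows the OAI to read them; a minimal sketch of such a policy (not present in the original configuration) is:

# Hypothetical bucket policy granting the CloudFront OAI read access to the objects
resource "aws_s3_bucket_policy" "oai_read" {
  bucket = aws_s3_bucket.b1.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontOAIRead"
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.b1.arn}/*"
    }]
  })
}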
resource "aws_cloudfront_distribution" "cloudfront" {
origin {
domain_name = aws_s3_bucket.b1.bucket_regional_domain_name
origin_id = local.s3_origin_id

s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
}
}
enabled = true
is_ipv6_enabled = true
comment = "access"
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
# Forward all query strings, cookies and headers
forwarded_values {
query_string = false

cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
# Cache behavior with precedence 0
ordered_cache_behavior {
path_pattern = "/content/immutable/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
headers = ["Origin"]
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
# Cache behavior with precedence 1
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
compress = true
viewer_protocol_policy = "redirect-to-https"
}
# Restricts who is able to access this content
restrictions {
geo_restriction {
# type of restriction, blacklist, whitelist or none
restriction_type = "none"
}
}
# SSL certificate for the service.
viewer_certificate {
cloudfront_default_certificate = true
}
retain_on_delete = true
}resource "aws_codepipeline" "codepipeline" {
name = "Vedant"
role_arn = "arn:aws:iam::947616636647:role/service-role/AWSCodePipelineServiceRole-ap-south-1-taskpipeline"
artifact_store {
location = "${aws_s3_bucket.b1.bucket}"
type = "S3"
}

stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["SourceArtifacts"]
configuration = {
Owner = "Vedant-S"
Repo = "HybridCloud"
Branch = "master"
OAuthToken = "ad83436d8d938ae9fb91536497c039dda39580et"
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "S3"
version = "1"
input_artifacts = ["SourceArtifacts"]
configuration = {
BucketName = "${aws_s3_bucket.b1.bucket}"
Extract = "true"
}

}
}
}
  • Now we will apply our Terraform code with terraform apply.
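The usual workflow is to initialize the working directory first, so the AWS provider plugins get downloaded, and then apply:

terraform init       # downloads the AWS provider plugins
terraform validate   # optional: catches syntax errors before applying
terraform apply      # shows the plan; type "yes" to build the infrastructure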

Conclusion:

We can see that the GitHub repository has been cloned directly into the httpd document root and that EFS is working smoothly. At this point we could launch another instance and mount the same EFS file system to it; any change made on the EFS would then show up in the directory of every instance where it is mounted.
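For example, on a second instance launched in the same subnet (with the NFS security group attached), the same file system could be mounted with commands like these, where fs-12345678 stands for whatever file-system ID Terraform actually created:

# On the second instance (placeholder file-system ID shown)
sudo yum install -y amazon-efs-utils
sudo mkdir -p /var/www/html
sudo mount -t efs fs-12345678:/ /var/www/html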

You can also reach out to me on LinkedIn, Twitter, Instagram, or Facebook in case you need more help; I would be delighted to answer your queries.

If you have made it this far, do drop a 👏 if you liked this article.

Good Luck and Happy Coding.
