Launch an AWS EC2 instance using Terraform Code

End to end Automation with a single command

VermaNikita
8 min read · Jul 6, 2020

AWS:- Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally.

Essentially, it is one method of provisioning operating systems and infrastructure. AWS is a comprehensive, easy-to-use computing platform offered by Amazon. The platform is developed with a combination of infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings.

Millions of customers — including the fastest-growing startups, largest enterprises, and leading government agencies — are using AWS to lower costs, become more agile, and innovate faster.

Features of using a cloud platform:-

(i) Easy to use
(ii) Flexible
(iii) Cost-Effective
(iv) Reliable
(v) Scalable and high-performance
(vi) Secure

Cloud computing, in short, is a term for storing and accessing data over the internet. It doesn’t store any data on the hard disk of your personal computer; in cloud computing, you access data from a remote server.

AWS Services:-

The following AWS services are used in this task.

EC2(Elastic Compute Cloud):-

Amazon EC2 is the most used AWS service. It lets users create virtual machines with their own choice of configuration. The word ‘elastic’ in Elastic Compute Cloud refers to the system’s capability to adapt to varying workloads, provisioning or de-provisioning resources according to demand. An instance is a machine with an operating system and hardware components of your choice, but totally virtualized: you can run multiple virtual computers on a single piece of physical hardware.

EBS( Elastic Block Store):-

Amazon EBS is a block storage system used to store persistent data. Amazon EBS serves EC2 instances by providing highly available block-level storage volumes. EBS volumes behave like raw, unformatted block devices, and you can mount these volumes as devices on your instances. EBS volumes that are attached to an instance are exposed as storage volumes that persist independently from the life of the instance. You can create a file system on top of these volumes, or use them in any way you would use a block device (such as a hard drive), and you can dynamically change the configuration of a volume attached to an instance.

S3(Simple Storage Service):-

AWS S3 is a highly scalable, fast, and durable solution for object-level storage of any data type. Object storage allows users to upload files, videos, and documents just as they would to popular cloud storage products like Dropbox and Google Drive.

CLOUDFRONT:-

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront is a kind of content delivery network (CDN). By using a CDN, companies can accelerate delivery of files to users over the Internet while also reducing the load on their own infrastructure. CloudFront is AWS’s own CDN solution that integrates with other AWS products, so it’s convenient for companies already running on AWS. CloudFront offers a mature set of content delivery products and has a big network of POPs on many continents. Solid documentation and APIs make CloudFront a developer-friendly CDN.

TERRAFORM:-

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision a data center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.

Jenkins:-

Jenkins is a powerful application that allows continuous integration and continuous delivery of projects, regardless of the platform you are working on. It is free and open source and can handle any kind of build or continuous integration, and you can integrate Jenkins with a number of testing and deployment technologies. In this tutorial, we will explain how you can use Jenkins to build and test your software projects continuously.

Jenkins is installed on a server where the central build takes place.

Continuous integration is a development practice that requires developers to integrate code into a shared repository at regular intervals. This practice was meant to remove the problem of discovering issues late in the build lifecycle. Continuous integration requires the developers to have frequent builds; the common practice is that whenever a code commit occurs, a build should be triggered.

TASK:

Here we will be creating the infrastructure with the following specifications/steps by writing the code in Terraform:-

1. Create the key and security group which allows port 80 (for the HTTPD server).

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group which we created in step 1.

4. Launch one volume using EBS and mount that volume to /var/www/html of the instance.

5. Code is present in a GitHub repository which also has some images. We’ll copy the GitHub repository code into /var/www/html.

6. Create an S3 bucket, and deploy the images from the GitHub repository into the S3 bucket with public-read permission.

7. Create a CloudFront distribution using the S3 bucket (as origin) and use the CloudFront URL to update the code in /var/www/html.

Configure the AWS provider with a named profile, which integrates the Terraform code with AWS:-

provider "aws"{
region ="ap-south-1"
profile="nikita"
}
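For this to work, the “nikita” profile must already exist on the machine running Terraform. Setting it up is not part of the Terraform code itself; assuming the AWS CLI is installed, a minimal sketch is:

# Create the "nikita" profile; the CLI prompts for the access key,
# secret key, default region and output format
aws configure --profile nikita

Terraform then reads the credentials for this profile from ~/.aws/credentials.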

STEP 1:- CREATE THE KEY AND SECURITY GROUP

Create the key:-

//Create Key
resource "tls_private_key" "keyt1" {
  algorithm = "RSA"
}

resource "aws_key_pair" "key" {
  key_name   = "keyt1"
  public_key = tls_private_key.keyt1.public_key_openssh
  depends_on = [
    tls_private_key.keyt1
  ]
}
key “keyt1” is created
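The private key generated above lives only in the Terraform state, so you cannot SSH into the instance manually yet. One optional addition (not in the original code; it assumes the hashicorp/local provider) is to write the key to a file:

//(Optional) Save the generated private key locally for manual SSH access
resource "local_file" "key_file" {
  content         = tls_private_key.keyt1.private_key_pem
  filename        = "keyt1.pem"
  file_permission = "0400" //private keys must not be world-readable
}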

Create the security group which allows port 80:-

//Create Security Group
resource "aws_security_group" "task1_sec_groups" {
  name        = "task1_sec_groups"
  description = "Allow SSH and HTTP"
  vpc_id      = "vpc-9ff7eaf7"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task1_sec_groups"
  }
}
security group “task1_sec_groups” created

STEP2 & 3:-LAUNCH AN EC2 INSTANCE AND USING KEY AND SECURITY GROUPS THAT WE CREATED IN THE PREVIOUS STEP

//Create AWS Instance
resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.key.key_name //reference the key pair created in step 1
  security_groups = ["task1_sec_groups"]

  provisioner "remote-exec" {
    connection {
      agent       = false
      type        = "ssh"
      user        = "ec2-user"
      private_key = tls_private_key.keyt1.private_key_pem
      host        = aws_instance.web.public_ip
    }
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "webos1"
  }
}
instance “webos1” is created

Print the availability zone and the public IP:-

//Print Availability Zone
output "az" {
  value = aws_instance.web.availability_zone
}

//Print Public IP
output "pubip" {
  value = aws_instance.web.public_ip
}
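After terraform apply completes, these values can be queried again at any time straight from the state:

# terraform output pubip

This prints the instance’s public IP without touching AWS again.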

STEP 4:- LAUNCH ONE VOLUME USING EBS AND MOUNT THAT VOLUME TO THE DEFAULT WEB PATH OF THE INSTANCE:-

Create one EBS volume:-

//Create EBS Volume
resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "myebs1"
  }
}
ebs “myebs1” is created

Attach that EBS volume to the Instance

//Attach EBS Volume to Instance
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdd"
  volume_id    = aws_ebs_volume.ebs1.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

//Save the instance's public IP to a local file
resource "null_resource" "pubip" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.web.public_ip} > publicip.txt"
  }
}
EBS volume “myebs1” is attached to the instance

STEP 5:- CODE IS PRESENT IN A GITHUB REPOSITORY WHICH ALSO HAS SOME IMAGES. WE’LL COPY THE GITHUB REPOSITORY CODE INTO /VAR/WWW/HTML

resource "null_resource" "mount"{
depends_on=[
aws_volume_attachment.ebs_att,
]
connection{
agent="false"
type="ssh"
user="ec2-user"
private_key="${tls_private_key.keyt1.private_key_pem}"
host="${aws_instance.web.public_ip}"
}
provisioner "remote-exec"{
inline=[
"sudo mkfs.ext4 /dev/xvdd",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/VermaNikita/HybridMultiCloudt1 /var/www/html"
]
}
}

STEP 6:- CREATE AN S3 BUCKET AND DEPLOY THE IMAGES FROM THE GITHUB REPOSITORY INTO THE S3 BUCKET WITH PUBLIC-READ PERMISSION

resource "aws_s3_bucket" "task1bucket" {
bucket = "niktask1bucket"
acl ="public-read"
force_destroy = "true"
versioning{
enabled = true
}
tags = {
Name = "task1bucket"
Environment = "Dev"
}
}
s3 bucket “niktask1bucket” created

Upload the object/image to the S3 bucket

resource "aws_s3_bucket_object" "task1bucket_object" {
depends_on =[aws_s3_bucket.task1bucket ,]
key = "aws.png"
bucket = "${aws_s3_bucket.task1bucket.id}"
source = "C:/Users/Ajit/Desktop/BLOG/aws.png"
acl="public-read"
}
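Since the object is uploaded with a public-read ACL, it is reachable over HTTPS straight away. An optional output (not in the original code) can print its URL for a quick check:

//(Optional) Print the public URL of the uploaded image
output "image_url" {
  value = "https://${aws_s3_bucket.task1bucket.bucket_regional_domain_name}/${aws_s3_bucket_object.task1bucket_object.key}"
}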

STEP 7:- CREATE A CLOUDFRONT DISTRIBUTION USING THE S3 BUCKET (AS ORIGIN) AND USE THE CLOUDFRONT URL TO UPDATE THE CODE IN /VAR/WWW/HTML

resource "aws_cloudfront_distribution" "task1_cloudfront" {
origin {
domain_name = "niktask1bucket.s3.amazonaws.com"
origin_id = "S3-niktask1bucket-id"
custom_origin_config {
http_port = 80
https_port = 80
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
enabled = true
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "S3-niktask1bucket-id"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
resource "null_resource" "remote" {
depends_on = [
null_resource.mount,
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.web.public_ip}"
}
}
cloudfront distribution “task1_cloudfront” is created
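Step 7 asks us to use the CloudFront URL in the code under /var/www/html, but the distribution’s domain name is never printed above. A small addition to expose it (you can then substitute it into the HTML by hand or with another remote-exec provisioner) is:

//(Optional) Print the CloudFront domain name to put into /var/www/html
output "cf_domain" {
  value = aws_cloudfront_distribution.task1_cloudfront.domain_name
}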

# terraform init (installs the necessary plugins):-

# terraform validate (validates the code):-

# terraform apply (this is the single command the title talks about)
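And when you are done, the same code gives you a one-command teardown:

# terraform destroy (removes every resource created above, again in a single command)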

IP:- 13.127.143.76

It’s time for some automation now using JENKINS:-

  • Set up Terraform in your base OS; in my case it’s RedHat 8.
  • Run Jenkins and create a freestyle job, then configure it to:
    - download (or pull) all the necessary code from GitHub,
    - poll the repository every minute so the code stays updated,
    - execute the shell commands (a sketch of this build step follows the list).
  • If Terraform is installed and configured properly in your base OS, your task will run successfully.
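As a rough sketch of that “Execute shell” build step (the workspace layout and flags here are my assumptions, not taken from the original job):

# Jenkins freestyle job -> Build -> Execute shell
cd $WORKSPACE                  # the job's Git checkout of the Terraform code
terraform init -input=false    # install the required provider plugins
terraform validate             # fail fast on syntax errors
terraform apply -auto-approve  # build the whole infrastructure unattended

With SCM polling set to “* * * * *”, Jenkins re-runs this build step every minute whenever new code is pushed.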

Thanks!!

If you spot any correction, please note it down in the comments.

GITHUB LINK:-https://github.com/VermaNikita/HybridMultiCloudt1
