Let’s use Terraform: Part 1

Mazhar Ahmed
Published in The Devs Tech
Feb 10, 2020 · 4 min read

Terraform, developed by HashiCorp, lets us write our infrastructure as code. I recently used Terraform to define the infrastructure for one of my projects. In this article, I will share that experience with you step by step so that you can get a basic idea of Terraform.

Terraform by HashiCorp

Terraform supports multiple cloud providers such as AWS, GCP, and Azure. Today, we will see how to work with Terraform: we want to create an EC2 instance and an S3 bucket.

The first step is to install the Terraform CLI and the AWS CLI, and to configure your AWS credentials. Once these basic requirements are fulfilled, we are ready.
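As a rough sketch of that setup (assuming macOS with Homebrew; other platforms have equivalent packages), it looks like this:

brew install terraform awscli   # install the two CLI tools
aws configure                   # store an access key, secret key and default region under ~/.aws/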

Let’s create a new repository for Terraform, named something like ‘project-name-terraform’. It’s better to use separate repositories for different sections of a project and wire them together as submodules; this way, we can manage permissions in a more organized way. I have created a repository like the one below:

.
├── my-key.pem
├── ec2.tf
├── s3-policy.json
├── s3.tf
├── .gitignore
├── LICENSE
├── CONTRIBUTION
└── README.md

My .gitignore file looks like this:

**/.terraform/*
# .tfstate files
*.tfstate
*.tfstate.*
# .tfvars files
*.tfvars

These are files generated by Terraform, so we can skip them in version control. But keep in mind that Terraform keeps its state in these files: if you want to run Terraform on another PC, you will not have the state (meaning Terraform will not know that you have already created resources with terraform apply).
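A common way around this (not something the rest of this article depends on) is to keep the state in a remote backend instead of on disk. A minimal sketch using Terraform’s built-in S3 backend, with placeholder bucket and key names of my own:

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"      # pre-existing bucket for state (placeholder)
    key    = "project-name/terraform.tfstate" # path of the state object (placeholder)
    region = "us-east-1"                      # assumed region
  }
}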

We actually don’t want to create the VPC, subnet, security groups, PEM key pair, and so on with Terraform, for a few reasons. There are per-region limits on how many of these a user can create, and AWS already provisions defaults for some of them. So we will create those manually in the console and copy their IDs/ARNs for our Terraform configuration to use. There is another reason as well: for most people these resources are fixed parts of the architecture, and we don’t want to automate fixed resources. It also acts as an extra layer of security, since changing them requires access to the AWS console.
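If you prefer not to paste raw IDs all over the configuration, Terraform data sources can look up those manually created resources in one place; a minimal sketch (the IDs below are placeholders):

# look up a manually created subnet and security group by ID
data "aws_subnet" "main" {
  id = "subnet-0123456789abcdef0" # placeholder
}

data "aws_security_group" "ssh" {
  id = "sg-0123456789abcdef0"     # placeholder
}

# elsewhere, reference them as data.aws_subnet.main.id and data.aws_security_group.ssh.id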

At least one *.tf file in the directory should contain a provider. A provider block holds the cloud platform information and credentials. At the beginning of the ec2.tf file, let’s add our AWS provider with our AWS credentials like so:

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "REGION_HERE"
}
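Hard-coding keys like this works for a quick experiment, but it is risky to commit them to a repository. The AWS provider also reads the standard AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables, so a safer sketch is simply:

provider "aws" {
  # credentials and region are picked up from the environment:
  # export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION first
}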

Now let’s fill in the rest of the file (you need to allow the SSH port in your security groups):

resource "aws_instance" "web" {
ami = "ami-0a7f2b5b6b87eaa1b" # this is ami for Ubuntu 18.04 LTS
instance_type = "t2.micro" # this is instance type
associate_public_ip_address=true
key_name = "my-key" # name of pem key in aws console
security_groups = [ # previously made security groups
"security_group_id_1",
"security_group_id_2 if any"
]
subnet_id = "subnet_id_here" tags = { # we want to add as many tags as possible, it's helpful
Name = "project-name"
Mode = "dev"
}
provisioner "remote-exec" { # optional
inline = [ # this gets executed on created ec2 instance
# install docker and compose on the first run
"sudo apt update",
"sudo apt upgrade -y",
"sudo apt install docker.io -y",
"sudo apt install docker-compose -y",
]
# connection to be used by provisioner to perform remote executions
connection {
# use public IP of the instance to connect to it.
host = "${aws_instance.web.public_ip}"
type = "ssh"
user = "ubuntu" # username from the OS image
private_key = "${file("./my-key.pem")}" # file path
timeout = "10m" # let's wait for instance
agent = false
}
}
}

Now we can initialize Terraform and apply the configuration. Let’s execute this from the terminal:

terraform init
terraform apply
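Optionally, you can also declare output values (the names below are my own) so that the attributes you care about are printed at the end of every apply:

# in ec2.tf: print the instance's public IP after every apply
output "web_public_ip" {
  value = aws_instance.web.public_ip
}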

After the process completes, we can see the details of the created resources in the shell: the EC2 instance IP, its ID, and so on. Now let’s create a public AWS S3 bucket. For this we need a bucket policy file in JSON format. Let’s create a file named s3-policy.json with the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}

Now, let’s create another Terraform file named s3.tf in the same directory:

# s3 bucket for file upload
resource "aws_s3_bucket" "b" {
  bucket = "bucket-name-here" # must match the ARN in s3-policy.json
  acl    = "public-read"
  region = "your-region" # should match the provider's region
  policy = "${file("s3-policy.json")}"

  tags = {
    Name = "project-name"
    Mode = "dev"
  }
}
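Before going further, it is worth running Terraform’s built-in formatter and validator to catch layout and syntax mistakes early:

terraform fmt       # rewrites *.tf files into the canonical style
terraform validate  # checks the configuration for errors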

Again, let’s run:

terraform plan

It will show the changes Terraform wants to make on AWS next. You can see that this time Terraform will only create the S3 bucket, because it has recorded in its state file that the EC2 instance already exists. Let’s execute:

terraform apply

Again, we will see the information about the S3 bucket it created on AWS.
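When you are done experimenting, Terraform can also tear down everything recorded in its state:

terraform destroy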

You can find more documentation on the Terraform website.

You can find the source code for this article here: https://github.com/mazhar266/terraform-ec2-s3

Thank you very much.
