Infrastructure as Code: The New Normal

Samuel Nwoye
4 min read · Jan 10, 2019


An acquaintance rightly advised that however well one understands a complex subject, one must always strive to be clear and precise when conveying it to others. For this walk-through, we aim to keep things simple, short, and succinct.

Up until now, operators, system administrators, and site reliability engineers have carried out their tasks by clicking through a graphical user interface or writing scripts on the fly. Why keep doing the same error-prone manual work when the procedure could be automated? Tasks such as creating a repository, adding a new user, or provisioning and configuring a server are processes that can be codified, making them reusable and reproducible.

Sure, automating operations and system administration tasks by way of infrastructure as code may seem daunting, and to many even like a waste of time. However, the gains from making it a must-do in your software development workflow are priceless. It gives you the liberty to decide exactly what you need and puts you in charge, with a well-documented model of what works and the potential to improve on it, just like the applications we code.

In this walk-through, we demo a kind of "hello, world" of infrastructure as code to give a quick view of this domain. I have assumed that you have an AWS account and have retrieved an API access key ID and secret access key. See signing up and creating a user on AWS. I have also assumed that you have downloaded and installed the Terraform binary for your operating system.

The repository for this demo can be found here. In it we have a single folder with four .tf files and one .sh file, which hold the Terraform and bash scripts respectively. These files exemplify the series of tasks an operator would perform to stand up a single web server and provision it with an application. Doing this in production is a little more complex, but not to worry; you will get there.
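Based on the descriptions that follow, the IAAC folder looks roughly like this (the repo may contain an extra file or two):

IAAC/
├── resources.tf         # provider, data source, and the actual resources
├── variables.tf         # variable declarations
├── inputs.auto.tfvars   # values for variables you expect to change
├── outputs.tf           # attributes to print after a successful run
└── user-data.sh         # bash script that bootstraps the web server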

Let us walk through what the lines of code within these files are intended to accomplish:

The resources.tf is where we create the actual resources and components we desire. The "provider" block sets our specific cloud provider and the authentication mechanism for gaining access to it. The "data" block looks up existing resources on our cloud provider; in this case, we fetch an Amazon Machine Image (AMI) of an Ubuntu OS. The "resource" blocks create the actual resources: a key pair, a security group (a firewall) to control inbound and outbound traffic to our server, and finally the web server itself.
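To make that concrete, here is a minimal sketch of what such a resources.tf could look like (Terraform 0.12+ syntax). The region, the AMI filter, and the resource names ubuntu, server_key, web_sg and webserver are illustrative assumptions, not necessarily what the repo uses:

provider "aws" {
  region     = "us-east-1"              # assumed region
  access_key = var.access_key
  secret_key = var.secret_key
}

# Look up an existing Ubuntu AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]        # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
}

# Register the public key generated in Step 3 of the workflow below
resource "aws_key_pair" "server_key" {
  key_name   = "server_key"
  public_key = file("keys/server_key.pub")
}

# Firewall: allow inbound HTTP and SSH, and all outbound traffic
resource "aws_security_group" "web_sg" {
  ingress {
    from_port   = var.http_port
    to_port     = var.http_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = var.ssh_port
    to_port     = var.ssh_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The web server itself, bootstrapped by user-data.sh on first boot
resource "aws_instance" "webserver" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = aws_key_pair.server_key.key_name
  vpc_security_group_ids = [aws_security_group.web_sg.id]
  user_data              = file("user-data.sh")
}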

In variables.tf we declare and initialize variables such as http_port and ssh_port, to avoid hard-coding these values within resources.tf and to keep things modular.
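A minimal sketch of such declarations; the descriptions and the defaults of 80 and 22 are my assumptions, though they match what the rest of the demo implies:

variable "http_port" {
  description = "Port the web server listens on"
  default     = 80
}

variable "ssh_port" {
  description = "Port used for SSH access to the server"
  default     = 22
}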

The inputs.auto.tfvars is where you pass in values for the variables declared in variables.tf, especially values you expect to change. For example, I could choose a different instance type instead of t2.micro, and I would not want that value baked into variables.tf.
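A sketch of what inputs.auto.tfvars could contain, assuming variables.tf also declares a matching instance_type variable (Terraform automatically loads any file whose name ends in .auto.tfvars):

# inputs.auto.tfvars: values that are likely to change between runs
instance_type = "t2.micro"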

The outputs.tf states the attributes we would like returned after the script has run successfully. Here we want the public IP and the domain name of the instance after it has been provisioned.
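Using the output names referenced later in the walk-through (server_ip and server_dns), and assuming the instance resource is named webserver as in the sketch above, outputs.tf could look like this:

output "server_ip" {
  value = aws_instance.webserver.public_ip
}

output "server_dns" {
  value = aws_instance.webserver.public_dns
}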

User-data.sh is a bash script used to bootstrap the instance on launch. It simply prints a sentence into an index.html file created on the fly, then uses busybox to run a small web server that listens on port 80 in the background.
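The script itself is only a few lines. A sketch, assuming a generic greeting (the exact sentence in the repo will differ):

#!/bin/bash
# Write a simple page to serve
echo "Hello, World from infrastructure as code" > index.html
# Serve the current directory on port 80 with busybox, kept running in the background
nohup busybox httpd -f -p 80 &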

Workflow Procedures
Step 1: Clone the repository and open the folder in your editor of choice (make sure to have an integrated terminal with Git installed).

Step 2: Export your AWS credentials as environment variables.
Run:
export TF_VAR_secret_key="insert your secret key here"
export TF_VAR_access_key="insert your access key here"
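The TF_VAR_ prefix is how Terraform picks up values for input variables from the environment: TF_VAR_access_key populates a variable named access_key. For this to work, variables.tf would declare matching variables, roughly like this:

variable "access_key" {}   # value supplied via TF_VAR_access_key
variable "secret_key" {}   # value supplied via TF_VAR_secret_key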

Step 3: Trusting you have OpenSSH available in your terminal, create a directory "keys" within the IAAC folder and run the command below to generate a key pair for your server.
Run:
ssh-keygen -t rsa -b 4096 -C "webserver key"
When prompted for the file in which to save the key, type ./keys/server_key and press Enter.

Step 4: Download the Terraform AWS provider plugin
Run: terraform init

Step 5: Validate that the code is syntactically correct
Run: terraform validate

Step 6: Get an overview of the resources to be deployed and their attributes
Run: terraform plan

Step 7: Deploy these resources on the cloud provider's platform
Run: terraform apply

Step 8: Copy either of the outputs (server_ip or server_dns) into an internet-enabled browser to observe the result. Voilà! You have your "hello, world" of infrastructure as code up and running.
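If you prefer the terminal, you can re-print an output and test the server from there too (using the output names above):
Run: terraform output server_ip
Run: curl http://<the printed IP address>
When you are done experimenting, terraform destroy tears the resources down again so the demo does not keep costing you money.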

Need Help with Infrastructure Orchestration, Automation and Security?
I am always happy to help. Feel free to reach out on LinkedIn or Twitter. If you enjoyed this content, please buy me a coffee. Thanks for reading.


Samuel Nwoye

I am an infrastructure engineer keen on security. I am passionate about reliable and secure software development and delivery.