Infrastructure as Code: The New Normal

An acquaintance once rightly advised that however well one understands a complex subject, one must always strive to convey it clearly and precisely to others. For this walk-through, we aim to keep things simple, short and succinct.

Until now, operators, system administrators and site reliability engineers have carried out their tasks by clicking through a graphical user interface or writing scripts on the fly. Why keep repeating the same error-prone manual procedure when it could be automated? Tasks such as creating a repository, adding a new user, or provisioning and configuring a server are processes that can be codified, making them reusable and reproducible.

Sure, automating operations and system administration tasks through infrastructure as code may seem daunting, and to many like a waste of time. However, the gains of making it a must-do in your software delivery workflow are priceless. It gives you the liberty to decide exactly what you need, and it puts you in charge by providing a well-documented model of what works, with the potential to improve on it just like the applications we code.

In this walk-through we demo a kind of "hello, world" of infrastructure as code to give a quick view of the domain. I have made a few assumptions: that you have an AWS account and have retrieved an API access key ID and secret access key (see signing up and creating a user on AWS), and that you have downloaded and installed the Terraform binary for your operating system.

The repository for this demo can be found here. It contains a single folder with four .tf files and one .sh file, representing the Terraform configuration and a bash script respectively. These files exemplify the series of tasks an operator would perform to stand up a single web server and provision it with an application. Doing this in production is a little more complex than this, but not to worry; you will get there.
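Based on the files described in the sections below, and the folder name IAAC referenced later in the workflow, the layout looks roughly like this:

IAAC/
├── resources.tf        # provider, data source and resources
├── variables.tf        # variable declarations
├── inputs.auto.tfvars  # values for variables that may change
├── outputs.tf          # attributes returned after a successful run
└── user-data.sh        # bash script that bootstraps the instance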

Let us walk through what the lines of code within these files are intended to do.

The resources.tf file is where we create the actual resources and components we desire. The "provider" block sets our specific cloud provider and the authentication mechanism for gaining access to it. The "data" block looks up an existing resource on our cloud provider; in this case we fetch an Amazon Machine Image (AMI) of an Ubuntu OS. The "resource" blocks create the actual resources: a key pair, a security group (firewall) to control inbound and outbound traffic to our server, and finally the web server itself.
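To make that concrete, here is a minimal sketch of what such a resources.tf could contain (Terraform 0.12+ syntax). The region, the AMI filter and the resource names are illustrative assumptions; the repository's version will differ in detail.

provider "aws" {
  region     = "us-east-1"      # region assumed for illustration
  access_key = var.access_key   # populated from TF_VAR_access_key
  secret_key = var.secret_key   # populated from TF_VAR_secret_key
}

# Look up an existing Ubuntu AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

# Upload the public half of the key pair generated in Step 3 below
resource "aws_key_pair" "server_key" {
  key_name   = "webserver-key"
  public_key = file("./keys/server_key.pub")
}

# Firewall: allow inbound HTTP and SSH, allow all outbound traffic
resource "aws_security_group" "web_sg" {
  name = "web-sg"

  ingress {
    from_port   = var.http_port
    to_port     = var.http_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = var.ssh_port
    to_port     = var.ssh_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The web server itself, bootstrapped with user-data.sh at launch
resource "aws_instance" "webserver" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = aws_key_pair.server_key.key_name
  vpc_security_group_ids = [aws_security_group.web_sg.id]
  user_data              = file("user-data.sh")

  tags = {
    Name = "hello-world-webserver"
  }
}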

In variables.tf we declare, and in some cases give default values to, variables such as http_port and ssh_port, to avoid hard-coding these values within resources.tf and to essentially keep things modular.
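A sketch of the matching variables.tf, assuming the variable names used in the sketch above (access_key and secret_key pair with the TF_VAR_ environment variables exported in Step 2):

variable "access_key" {
  description = "AWS access key ID, supplied via the TF_VAR_access_key environment variable"
  type        = string
}

variable "secret_key" {
  description = "AWS secret access key, supplied via the TF_VAR_secret_key environment variable"
  type        = string
}

variable "http_port" {
  description = "Port the web server listens on"
  type        = number
  default     = 80
}

variable "ssh_port" {
  description = "Port used for SSH access to the server"
  type        = number
  default     = 22
}

variable "instance_type" {
  description = "EC2 instance type, set in inputs.auto.tfvars"
  type        = string
}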

The inputs.auto.tfvars file is where you pass values to the variables declared in variables.tf, especially values that are subject to change. For example, I could choose a different instance type instead of t2.micro, and that is a value I would not want baked into variables.tf.
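Such a tfvars file can be as small as a single assignment; Terraform loads any *.auto.tfvars file automatically. For instance:

# inputs.auto.tfvars
# Change this value to resize the server without touching variables.tf
instance_type = "t2.micro"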

In outputs.tf we state the attributes we would like returned after the run has completed successfully. Here we want the public IP address and public DNS name of the instance after it has been provisioned.
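Assuming the output names server_ip and server_dns used in Step 8 below, outputs.tf could look like this:

output "server_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.webserver.public_ip
}

output "server_dns" {
  description = "Public DNS name of the web server"
  value       = aws_instance.webserver.public_dns
}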

user-data.sh is a bash script used to bootstrap the instance at launch. It simply writes a sentence into an index.html file created on the fly, then uses BusyBox to stand up a rudimentary web server listening on port 80 and running in the background.
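The whole script amounts to just a few lines; the exact sentence written to index.html is an assumption here:

#!/bin/bash
# Create the page to serve, then start BusyBox's built-in httpd on port 80 in the background
echo "Hello, World from Infrastructure as Code" > index.html
nohup busybox httpd -f -p 80 &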

Workflow Procedures

Step 1: clone the repository and open the folder in your editor of choice (make sure you have an integrated terminal and git installed).

Step 2: export your AWS credentials as environment variables.
Run:
export TF_VAR_secret_key="insert your secret key here"
export TF_VAR_access_key="insert your access key here"

Step 3: assuming you have OpenSSH available in your terminal, create a directory "keys" within the IAAC folder and run the command below to generate a key pair for your server.
Run:
ssh-keygen -t rsa -b 4096 -C "webserver key"
When prompted for the file in which to save the key, type ./keys/server_key and press Enter.

Step 4: download the Terraform AWS provider plugin
Run: terraform init

Step 5: validate that the code is syntactically correct
Run: terraform validate

Step 6: get an overview of the resources to be deployed and their attributes
Run: terraform plan

Step 7: deploy these resources on the cloud provider's platform
Run: terraform apply

Step 8: copy either of the outputs, server_ip or server_dns, into an internet-connected browser to observe the result. Voilà, you have your "hello, world" of infrastructure as code up and running.
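Should you need those values again later, Terraform can reprint them from its state without another apply; assuming the output names above, a quick check from the terminal could look like:
Run: terraform output server_ip
Run: curl http://<paste the returned server_ip here>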

Need Help with Infrastructure Orchestration and Automation?
I am always happy to lend a hand and help. Please feel free to reach out on LinkedIn, Gmail or Twitter.

I'm a value-oriented professional focused on automating software delivery. I have an affinity for Terraform, Kubernetes, AWS, Prometheus, Elastic and Jenkins.
