The history and future of Infra-As-Code

Dank Tec
4 min read · Nov 10, 2023


Once upon a time, everything was configured manually. *nix servers required physical media for setup, and installing packages and configuration was done at a physical console. Network routing and security were a quagmire of scripts and utilities.

We’ve come a long way since then. With a one-liner like this:

aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name MyKeyPair

We have some magic at our fingertips. The Cloud launches virtual hardware, attaches volumes containing a preconfigured operating system, and configures network and security boundaries. Within a few seconds we have a usable system available on the internet. And not only that: this system can easily be scaled and resized with minimal fuss!

This capability leads us to value the uptime of any single system much less, and to refocus on qualities like ease of configuration, repeatability, extensibility, and modularization.

These principles might sound a lot more like software engineering than system administration. When it comes to cloud configuration and maintenance it’s not one or the other, but a unique blend, which gives rise to DevOps, Platform Engineering, and other fresh disciplines.

The Rise Of Infra-As-Code

This brings us to the topic of IaC. I aim to shed light on where the industry came from, and how its values have changed and adapted over time in step with platform capabilities and the shortcomings of each previous infra-as-code tool.

Puppet & Chef

The first of the infra-management tools aimed to solve an issue faced by many orgs: how do we quickly, easily, safely, and consistently manage our distributed server environments?

Picture massive-scale orgs with thousands of Linux and Windows systems of different classes, all needing to be updated, upgraded, modified, reported on, and generally kept in good working order. It’s a herculean task requiring many teams, and even then it results in information siloing and potentially inconsistent system-wide changes.

Puppet (2005) and Chef (2009) came along to solve these challenges. They utilized an agent-based architecture: every system you want to manage needs a special piece of software that receives requests and performs work.

The challenges of the day led to architecture and design principles to solve them. Many distributed systems were configured to “check in” to a central server, which would feed them instructions on what changes to make. The agent was smart enough to track system state so that changes were not repeated: efficiency through idempotency was achieved.
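The idempotent convergence step can be sketched in plain shell (a hypothetical illustration, not actual Puppet or Chef code): compare the desired state against the actual state, and only perform work when they differ, so repeated runs are harmless.

```shell
#!/bin/sh
# Hypothetical sketch of an agent's idempotent convergence step:
# act only when actual state differs from desired state.

STATE_FILE=/tmp/demo_state    # stand-in for real system state (a package, a config file)
rm -f "$STATE_FILE"           # start from a clean slate for the demo

ensure_state() {
  desired="$1"
  current=$(cat "$STATE_FILE" 2>/dev/null)
  if [ "$current" = "$desired" ]; then
    echo "unchanged"          # already converged: repeated runs do nothing
  else
    printf '%s\n' "$desired" > "$STATE_FILE"
    echo "changed"            # work performed exactly once
  fi
}

ensure_state "nginx=1.24"     # first run converges: prints "changed"
ensure_state "nginx=1.24"     # second run is a no-op: prints "unchanged"
```

Run it twice in a row and the second pass reports nothing to do, which is exactly why a fleet of agents can check in every few minutes without thrashing their hosts.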

Great stuff! Way better than logging into systems or writing custom shell wrappers. This improved teams’ efficiency and outcomes greatly, but it also concentrated more power and demanded a higher degree of discipline and an understanding of the centralized automation in addition to the systems it served!

This works great for servers which are already deployed and configured. The question “how do I get my physical systems to a state where they can run an automation agent?” was about to be answered by the monumental proliferation of the cloud.

Ansible

Enter Ansible in 2012. One of my all-time favourites. Ansible sought to solve the rigidity of previous tools by offering a completely agentless design and a composable, YAML-based domain-specific language.

This revolutionary thinking by Ansible’s designers gave us the ability to write modular, flexible code in one place, while maintaining a great deal of customization and control over how it gets deployed. This was the dawn of the “Playbook”.
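A minimal playbook might look like the following (a hypothetical sketch; the host group and package names are illustrative). It declares desired state rather than imperative steps: ensure nginx is installed and running on every host in the “web” group.

```yaml
# Hypothetical minimal playbook: converge the "web" group to a desired state.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```

Running it again changes nothing if the hosts already match the desired state: the same idempotency the earlier agent-based tools pioneered, now without an agent.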

On Linux, Ansible is an execution framework on top of the SSH protocol, running shell-wrapped, Python-executed commands (WinRM and PowerShell for Windows systems). This allows us to write code which runs locally AND remotely. The state of actions is managed at the module layer, which executes on the target system and hands reporting back to the controller.

Modules capture the STDOUT and STDERR, along with the return codes, of commands run on destination machines in a JSON object, which is used for validation and reporting on the controller.
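To make that concrete, here is a hypothetical shell sketch (not Ansible’s actual module code) of the shape of such a result object: run a command, capture its stdout, stderr, and return code, and emit them as JSON.

```shell
#!/bin/sh
# Hypothetical sketch of a module-style result object: capture a command's
# stdout, stderr, and return code, then report them as JSON.

run_as_module() {
  errfile=$(mktemp)
  stdout=$("$@" 2>"$errfile")   # run the command, capturing both streams
  rc=$?
  stderr=$(cat "$errfile")
  rm -f "$errfile"
  printf '{"rc": %d, "stdout": "%s", "stderr": "%s"}\n' "$rc" "$stdout" "$stderr"
}

run_as_module echo "hello"      # prints: {"rc": 0, "stdout": "hello", "stderr": ""}
```

A real module typically runs as Python on the target and returns richer fields (such as changed and failed), but the JSON handoff back to the controller follows this general shape.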

Ansible offers tight, feature-rich integrations with cloud providers. This means we can use Ansible to launch our systems, configure our cloud, AND log into those systems to configure and maintain them for the rest of their lifecycle.

The only downfall of Ansible, IMO, is its total flexibility, which can lead to a plethora of custom designs that are difficult to grok if you didn’t write them yourself!

Next Up… Pulumi!

… The rest of this article is published on the FREE Dank Tec Substack

Please show your support by checking it out.


Dank Tec

Hi Friends! I dig into the trenches of nuanced topics to deliver concise articles that help you contend with a complex landscape. https://danktec.substack.com/