Jody Lent
Jul 22, 2017 · 3 min read

I took three major points out of this one, Jeff:

# Do Words Have Meaning?

For all that fancy code we write, the tech industry is generally poor at disambiguation in language, and at persuasion in general. Specificity and clarity are both indicators of a problem well understood, but they also both lend force and substance to an argument.

There is quite likely a talk in itself to be had in this point alone.

# What is the problem we (II or CM) are trying to solve?

A storyteller should begin with the beginning, a fairy tale with the words “Once upon a time…,” and an engineer with the question “What is the problem we are solving?” And the answer is: “Trick question. What problemS, plural, are we solving?” There are at least 3 major ones:

  1. Reliable, repeatable deployments
  2. Updates to running production systems
  3. Making tradeoffs between complexity and reliability

Immutable Infrastructure (“II”) and Config Mgmt (as the consultants will sell it to you) take different approaches to different sets of the above. The former is seeking to make #1 happen by reducing dependencies, and to eliminate #2 entirely.

Config Mgmt, in the sense of CASP (Chef, Ansible, Salt, Puppet) technologies, is attempting to do all 3, but generally speaking, it attempts to do them for things with IP addresses, hostnames, and sshd running on them — in a word, for servers. Which brings me to major point 3:

# Distinguishing Application from Server Configuration

Application and server configuration are NOT the same. Not the same ballpark, not the same league, not the same SPORT.

Application configuration attempts to solve problem #2 from above, and gets #1 as a bonus. In consequence, it throws #3 out the door (kinda like an eventually-consistent database :D )

Server configuration, on the other hand, generally does a very poor job of live production application runtime configuration. Tell me if this isn’t an EXACT copy of a scenario you’ve encountered:

“Ok, deploy the artifact, run this Ansible script that copies it to the server and runs Puppet. Oh, we forgot to include that repo in the PR, so it needs another change. RERUN Puppet. Then find out it didn’t define a service, so let’s bounce myappd by hand. Now it works!”

Immutable Infrastructure doesn’t *eliminate* the need for configuring whatever servers you DO have — it eliminates the problems of trying to use the wrong tool for the job. It lets you say “docker pull, docker run, docker’s great, now we’re done”. It means that you make and ship a new container EVERY time you make a config change, and yes, it adds VOLUMES of complexity in orchestration and deployment.
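That “pull, run, done” flow can be sketched in a few lines of shell. The image name, tag, and registry here are hypothetical placeholders, not any particular setup:

```shell
# Bake the config change INTO a new image -- under immutable
# infrastructure, a config change means a new artifact, full stop.
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# On the production host, the only dependency is the artifact itself:
docker pull registry.example.com/myapp:1.4.2
docker run -d --name myapp registry.example.com/myapp:1.4.2
```

No in-place mutation, no Puppet run to converge. Rolling back is just running the previous tag.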

But the point is that you only depend on one thing in production: artifact deployed means we’re up.

Personally, I **love** the setup at GrubHub, in which we are one or two steps away from it: we deploy Docker containers, but they can pull startup or runtime application config over the network. But the base image, packaging, copying it into the right directory, ensuring correct permissions, discovering 2 copies of a package, Puppet manifests that DON’T have service resources, or aren’t REALLY idempotent? I don’t miss those at ALL.
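The “pull config over the network at startup” pattern can look something like the entrypoint sketch below. This is NOT GrubHub’s actual setup; the config-service URL, env var, file paths, and `myappd` binary are all hypothetical:

```shell
#!/bin/sh
# Hypothetical container entrypoint: the image is immutable, but
# startup config is fetched from a config service at boot.
set -e

# CONFIG_URL is an assumed injection point; the endpoint is made up.
CONFIG_URL="${CONFIG_URL:-http://config.internal/myapp/prod.env}"

# Fetch the runtime config, fail fast if the service is unreachable.
curl -fsS "$CONFIG_URL" -o /etc/myapp/runtime.env
. /etc/myapp/runtime.env

# Hand off to the app as PID 1 -- no Puppet, no service resource to forget.
exec /usr/local/bin/myappd
```

The point of the pattern: the artifact never changes after build, and the one mutable input (config) arrives through a single, explicit channel.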

So “immutable infrastructure” may not REALLY be where we are yet, but the appeal of it is pretty high. Who wants to spend hours and months and years on the plumbing when what you wanted was the water? Immutable infrastructure is like buying bottled water instead of having running water.
