What Docker, McNuggets and Francis Bacon have in common: a tale of consistency.

Sam Ryan
Skiller Whale
9 min read · Dec 6, 2022

“Consistency is the foundation of virtue.” — Francis Bacon

Let’s start with something we can all (probably?) agree on: when it comes to code, consistency is a good thing.

One of the key advantages of Docker is that it improves consistency. From the Docker documentation:

What can I use Docker for? Fast, consistent delivery of your applications

As we already (surely!) agreed, this is a huge positive — if your code runs in an environment that is consistent with where it is developed and tested, you can rely more on your tests, reduce unexpected behaviour and keep users happy.

Docker does a good job of this. If it didn’t, it would not be so successful. But nothing is perfect. There will still be times when you yell “but it worked on my machine!”

In this article, I’ll explain some of the reasons consistency breaks down, and provide actionable steps you can take to avoid that happening.

The McNugget analogy

Here are some key Docker concepts, defined without any kind of fun analogy.

  • A container — a relatively isolated execution environment for an application or a part of an application.
  • An image — a fixed, consistent template for creating containers.
  • The Dockerfile — source-controlled instructions that define how new images are built.

And now I invite you to think about them in terms of the creation of the iconic McNugget.

Let’s imagine a huge chain of restaurants — call it McDocker’s.

This restaurant wants to provide a consistent experience of chicken nuggets across the country.

How might they do this?

They could send all the restaurants the same recipe.

This is a good start, but the restaurants might use chicken of different quality, different cooking oils, etc.

So they send all the restaurants the same ingredients.

This is better, but the recipe is still complicated; someone might make a mistake. And every restaurant needs the equipment to make chicken nuggets from scratch.

So they make all the chicken nuggets in one factory, inspect them to make sure they’re good quality, freeze them, and distribute the frozen nuggets to the restaurants. At the restaurant, they just need to be warmed up, grouped in the necessary sizes, and served.

These nuggets may not be the best, freshest nuggets, but they are very consistent. For the purposes of this article, confidence that the nugget will always be served as expected is the apex of nugget achievements.

In the exact same way (or in a similar enough way that I can get away with this analogy) you want your application to behave the same across developer machines and deployment environments.

Specifically: what you want to be consistent, in the end, is the containers that run your applications — the warmed up chicken nuggets that make your customers happy.

What Docker gives you is a framework for making, freezing and warming up your applications.

And now we can use a more fun way of defining the different elements of Docker:

The ingredients are your application source code.

The recipe is your Dockerfile.

The frozen nuggets are your images.

The warmed up nuggets are your containers.
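In Docker terms, the freezing and warming-up steps are just two commands. A quick sketch (the image name mcdockers/nuggets is made up for this analogy):

```shell
# Freeze: bake the recipe (Dockerfile) and ingredients (source code)
# into a fixed image.
docker build -t mcdockers/nuggets:1.0 .

# Warm up: start a container from that frozen image.
docker run --rm mcdockers/nuggets:1.0
```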

Great analogy, I hear you say. But I’m still not getting consistent nuggets (sorry, application behaviour).

Let’s look at the analogy again:

If we use the same ingredients and the same recipe in the same automated factory, our nuggets should be nearly identical. So where are the weak links? Where is the potential for inconsistency?

  • After the nugget has been warmed up. You might drop it on the floor, or add the wrong sauce. In other words, once your container is running, it can change.
  • In the warming process. You might overcook it. AKA: running the container.
  • Factory changes with unexpected consequences. The factory might change the recipe without realising that now the time for warming needs to be different. This equates to changing the Dockerfile or source code.

Let’s look at a few potential consistency disasters, and how you might avoid them, both in the making of a McNugget (for fun), and in the writing of a program (for useful):

Inconsistency after starting the container: Someone dropped the nugget on the floor

Examples of this could be an attack or hidden bug that causes a crash or slowdown. Or perhaps someone changed the configuration of the running container, and your app is now behaving oddly.

What do you do?

Wash it and serve it anyway? No, please and thank you.

The easiest and safest thing to do is throw it away and get a new one out of the freezer. This is the advantage you get by making containers disposable.

Disposability is essentially trying to make sure the answer to the question “Can I just replace this container?” is “Yes”, as often as possible.

If you spend 4 hours crafting a culinary masterpiece, then drop it, you’re going to be a lot more tempted to serve your customers floor chicken than if you’d only warmed up a chicken nugget.

Don’t serve your customers floor chicken. Make your containers disposable.
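In practice, disposability means you don’t climb inside a misbehaving container and patch it; you replace it. A sketch of that habit (container and image names hypothetical):

```shell
# A container is misbehaving? Don't repair it in place.
# Throw it away and warm up a fresh one from the same image.
docker stop nugget-app && docker rm nugget-app
docker run -d --name nugget-app mcdockers/nuggets:1.0

# For one-off containers, --rm cleans up automatically on exit.
docker run --rm mcdockers/nuggets:1.0
```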

Inconsistency when running the container: so many things can go wrong

Images (or, frozen nuggets) version a lot of configuration in one place, but you can still do things when creating and running a container that can introduce inconsistency.

You should be aware of these, and consider carefully if using them is definitely worth the potential loss of consistency.

Disaster 1: Not enough nuggets 😱

That is, resource (nugget size) and load (nugget demand) differences:

Your nugget tester is only focused on taste, so he just has one nugget at a time, and you make him special, tiny nuggets, so he doesn’t get full.

You know that most people don’t want tiny nuggets, so you ship out bigger ones to the restaurants. But you forgot that they’ll take longer to cook than the tiny nuggets. Now your restaurants can’t keep up with demand, and you have long queues and hangry customers.

If you run your containers with different resources (CPU and memory) in different environments you risk inconsistent behaviour under load.

Say you test that 1 CPU and 2GB of memory supports 1000 users. Can you assume 10 CPUs and 20GB memory will support 10,000?

They might, if you’re lucky, but there is a high risk your application doesn’t scale in this simple (linear) way, and of bottlenecks appearing that you didn’t spot at lower loads.

Make sure you know how many nuggets you’ll need, and what size. Load test your containers with the same configuration as production.

Side note: ideally, to scale your containers, you want to scale the number, not the size of them.

You want to serve boxes of 20 nuggets, not a nugget 20× the size (although I would quite like to see that).
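One way to keep nugget sizes honest is to run containers with the same resource limits in every environment. With plain docker run, that might look like this (limits chosen purely for illustration):

```shell
# Load test with the same CPU/memory limits production uses,
# so bottlenecks show up before customers do.
docker run --cpus="1" --memory="2g" mcdockers/nuggets:1.0
```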

Disaster 2: Too 👏 Much 👏 Ketchup 👏

That is, environment variables. By necessity, you will need some difference in configuration between your environments, and sometimes this will cause problems.

The trick here is to minimise that configuration as much as possible.

Think of environment variables as sauce. They add some flavour to your nuggets, but you don’t want to add too much, or you can’t taste nugget. You may as well drink ketchup out of a glass (if that’s your thing though, you do you, I’ll have your nuggets).

Don’t add too much sauce. Keep your environment variables to the ones you really need.
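As a sketch, a light touch of sauce is a short, explicit list of variables passed at run time (variable names invented here):

```shell
# Only the configuration that genuinely differs per environment.
docker run \
  -e DATABASE_URL="postgres://prod-db:5432/nuggets" \
  -e LOG_LEVEL="info" \
  mcdockers/nuggets:1.0

# Or keep them in one reviewable file per environment:
docker run --env-file ./prod.env mcdockers/nuggets:1.0
```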

Disaster 3: Bad chips 😱

Docker can virtualise away many differences between environments, but not chip architecture.

If your containers need to run on both Intel/AMD x86 chips and ARM-based ones (the Raspberry Pi, Apple’s M1 chip and AWS’s Graviton processors are all ARM-based), you will need to test your container on both, and potentially provide a different image.

Docker does try and smooth this process with multi-arch builds, but if your containers need to support multiple architectures, that’s an inconsistency you have to accept, and plan for.

Mind your chips, not just your nuggets. Take care if you need to support both ARM and x86.
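If you do need both chip families, Docker’s buildx can build and publish a single multi-arch image in one command (tag hypothetical), though you should still test on both architectures:

```shell
# Build for x86 and ARM in one go and push a multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t mcdockers/nuggets:1.0 \
  --push .
```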

Disaster 4: Different boxes

The quality control department at McDocker’s has just realised that the restaurants are all sourcing their own nugget boxes, and they are different from each other, so 6 nuggets don’t always fit in a box. Similarly, sometimes the wider environment affects a container’s behaviour, e.g. network restrictions such as firewalls.

If production has stricter firewalls than development (or other network restrictions, e.g. AWS security groups), your container may not be able to access resources it needs to.

Docker by itself won’t help you with this. For consistency in networking and the broader environment, you’ll need to turn to Kubernetes or Infrastructure-as-Code tools such as Terraform to help you achieve the next level of environmental consistency.

Docker can’t control things outside your container — for extra consistency, use IaC.

Inconsistency when building: mystery meat 🤢

To some extent these are inevitable, as the purpose of building is to update the image, often with the aim of changing the application behaviour.

However, there are some avoidable examples of inconsistency at this stage:

Unpinned base image versions in a Dockerfile

For example, specifying a base image with FROM node or FROM node:latest means that the Node.js base image will change when you rebuild after a new version is released. That can include a new major version, which could introduce massive inconsistencies in your container.

Most official images will have useful tags (often using semantic versioning) that allow you to improve consistency by specifying a version of the image.

How specific you get ranges from completely unpinned (FROM node) to pinning the exact digest (FROM node@sha256:c47a2c61e635eb4938fcd56a1139b552300624b53e3eca06b5554a577f1842cf).

A common middle ground to start with would be FROM node:18.12. This pins the minor version, so rebuilding should give you any important security updates and bug fixes, but you won’t get any new Node.js features without specifically updating the Dockerfile.

For a bit more detail on version pinning, this blog post by Nick Janetakis explains it nicely.

Make sure you know what chicken you’re getting. “I’ll have whatever you’ve got” risks you getting bad chicken. Pin your base images.
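Putting those pinning options side by side, from least to most consistent (only one FROM line would appear in a real Dockerfile):

```dockerfile
# Least consistent: whatever Node.js is latest at build time.
FROM node

# Better: pinned to a minor version; rebuilds pick up patch fixes.
FROM node:18.12

# Most consistent: pinned to an exact image digest; never changes.
FROM node@sha256:c47a2c61e635eb4938fcd56a1139b552300624b53e3eca06b5554a577f1842cf
```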

The same goes for unpinned package versions.

When installing packages in your Dockerfile using apt-get, you can specify no version (as with the nuggets-fryer package below), or you can pin to a specific version (as with the sauce-adder package).

RUN apt-get update && apt-get install -y \
    nuggets-fryer \
    sauce-adder=1.3.*

An extra reason to pin package versions is Docker’s layer cache, which stores images in layers based on each instruction in the Dockerfile.

For RUN instructions, this cache invalidates only if the command itself changes. If you specified no version for the package, the command would always be the same, and you’d keep using whatever package version was installed when the cached layer was built.

This leads to inconsistencies if you perform the build in different places, with different caches (A dev’s box, or a CI pipeline, for example).

For more detail on this, you can see Docker’s documentation on the build cache.

To avoid bad or unexpected chicken, pin your package versions.
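When you suspect the cache is serving you stale chicken, you can also force Docker to rebuild from scratch, at the cost of a slower build:

```shell
# Re-run every instruction instead of reusing cached layers.
docker build --no-cache -t mcdockers/nuggets:1.0 .

# Also re-pull the base image in case its tag has moved.
docker build --pull --no-cache -t mcdockers/nuggets:1.0 .
```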

At this point, you would be forgiven for thinking: “so, if you can still get inconsistent results, what’s the point of freezing nuggets at all?”

Answer: if we wrote a list of potential inconsistencies without Docker, it would be a much, much longer list.

For a constantly-improving chicken nugget, we still have to manage the distribution of this new version to all the restaurants, but we don’t have to:

  • manage the new batches of these ingredients separately.
  • keep track of which is the latest recipe, at every restaurant.
  • retrain all the restaurant staff in every new recipe — the nuggets still get warmed the same way.
  • maintain all the nugget-making equipment, at every restaurant.

Similarly, Docker helps reduce how much you need to:

  • keep track of every version of every library and framework in every environment separately.
  • ensure you’re using the latest deployment process and tools, on each environment.
  • remember the crucial new step someone secretly added to the 50-step deployment process that you’ve been doing for the last year.
  • test your application in multiple versions of multiple operating systems.

The lesson here is:

You can’t have complete consistency. But that doesn’t mean you shouldn’t try to have more.

Francis Bacon Says: Knowledge is Power — Docker is Nuggets


Curriculum Lead @ Skiller Whale, Parent, Tenor & Analogy specialist.