Do you master Configuration Logic?

Osher El-Netanany · Published in Israeli Tech Radar · Nov 19 · 10 min read

How many ways can you name to configure an application? What happens when an application uses several sources? What happens when they conflict? Must it all end in configuration hell? Let’s have a look at a few real-world examples, and some architecture best practices.

How justified is the horror in Andy’s eyes? (image from here)

A given application may consume runtime settings from the following categories:

  1. Baked-in defaults — these are defaults that come with the distribution. In the real world, not every setting can have a default. But when they do — these defaults are baked into the distribution. Either in the form of programmed hardcoded values or as a basic configuration file.
    These defaults provide the most vanilla settings — the application’s best guess about the target environment. They may be overridden later.
  2. Environment variables — values that are concrete to a specific environment. Traditionally, they are set at the level of the OS and are the same for all the processes it runs. But POSIX-based shells support ad-hoc environment variables as part of the invocation command, which allows a deviation from their initial design.
    Process isolation and containerization made environment variables more powerful: there, the distribution is the sole process in its context/container.
    ⚠️ A notable gotcha is when an environment variable contains a JSON document that is merged into the configuration model or a section of it. This acts like a deployment configuration file, but is still operated like an env-variable 😉.
  3. Deployment configuration files — files that let ops provide several settings for the application in a single place. That, instead of expressing them all in environment variables.
    The application may find these files in a few places: in an agreed path (like /etc/my-app/…), up the directory tree from the current work directory (like .rc files), or in a path passed to the application via any of the other configuration methods. Some applications attempt to support all of them (for good or for evil).
  4. CLI-switches — these are passed to the process upon invocation and may differ from run to run.
  5. Configuration server — in this case, the application starts with a minimal configuration seed. This seed only helps it get to the config service, and authenticate with it. Then it can pull any runtime values from there.
    Some implementations take one more step. They notify the application of values that changed after it was started. The application, in turn, decides if it should restart any of its internal mechanisms or the entire process.
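However an application mixes these categories, they eventually collapse into one effective model. Here is a sketch of such a cascade — the names, values, and the chosen precedence order are illustrative assumptions, not a standard:

```typescript
// A sketch of the five source categories merged into one effective model.
// The precedence shown (defaults < env < file < CLI < config server)
// and all names here are illustrative assumptions, not a prescribed API.
type Config = Record<string, any>;

// naive shallow merge: later (stronger) sources override earlier ones
function mergeAll(...sources: Config[]): Config {
  return Object.assign({}, ...sources);
}

const bakedInDefaults: Config = { port: 8080, logLevel: "info" }; // 1. baked-in
const envVars: Config = { logLevel: "debug" };                    // 2. environment
const deployFile: Config = { port: 3000 };                        // 3. deployment file
const cliSwitches: Config = {};                                   // 4. CLI switches
const configServer: Config = {};                                  // 5. config server

const effective = mergeAll(bakedInDefaults, envVars, deployFile, cliSwitches, configServer);
// effective: { port: 3000, logLevel: "debug" }
```

The whole game is in that one ordered list of arguments — which is exactly where real-world applications go astray.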

The logical configuration Challenges

Figure this configuration out. (img from here)

All configuration challenges divide into two:

  1. Protocol
    The bits and bytes — HOW to provide the values in the right way. This deals with format: quote signs, commas, and brackets of JSON; indentations of YAML; shell-quoted or evaluated expressions, multi-line values, and base64 encoding when necessary.
  2. Logic
    Let’s assume that all values are provided correctly in all the different sources. What is the precedence order between the different configuration sources? What are the effective values in runtime?

While both are problems, I daresay that the bigger problem of the two is the latter.

A real-world application pulls values from different sources that cascade over each other. It can get very puzzling to predict the effective set of values in runtime.

The worst case is when every configuration parameter has its own configuration sources and a custom hierarchy between them. I see it a lot, and in most cases, it’s completely unjustified.

When it happens, it results in tons of internal lore. At best — this lore is documented at length. But even then — who reads lengthy documents? So it’s most commonly passed orally, and too often forgotten.

A good real-world example of a bad practice

I’ll give the two worst examples of custom logic I have met in the real world. Both are from the same place. Excuse me, but I’m obliged to strip the actual names...

Confused… (img from here)

The LRU cache size of a service was configured as follows:

When environment variable X is found — use its value.
Otherwise — when environment variable Y is found — default to hardcoded V1.
Otherwise — default to V2, unless CLI switch -z is provided — in which case use its value, or default to V3 if the switch is provided without a value.
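Written out as code (the names and values are hypothetical stand-ins; only the rules come from the description above), it reads no better than the prose:

```typescript
// Anti-pattern sketch: a per-parameter custom precedence chain.
// V1, V2, V3 and the variable names are hypothetical stand-ins.
const V1 = 100, V2 = 200, V3 = 300;

function resolveCacheSize(env: Record<string, string>, cliZ?: string | boolean): number {
  if (env.X !== undefined) return Number(env.X); // env X wins outright
  if (env.Y !== undefined) return V1;            // Y's *presence* picks V1
  if (cliZ === undefined) return V2;             // no -z switch: V2
  return cliZ === true ? V3 : Number(cliZ);      // -z without a value: V3
}
```

Every branch here is a fact someone has to memorize, and none of it is guessable from the parameter's name.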

In the same application, a database connectivity was configured as follows:

Expect the connection string in a .dbrc file.
When it’s not found — if all of the variables DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASS are found — use them to construct a connection descriptor object. Otherwise, expect DB_HOST alone to be a full connection string.
Otherwise — default to localhost:$defaultPort with user root and password root and use database public.
In any case, when the CLI switch --localdb is provided — ignore any environment variables or .dbrc file.
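Sketched as code (the function shape and the default port are my guesses at an implementation; only the rules come from the description above), the tangle is easier to see:

```typescript
// Anti-pattern sketch of the DB precedence described above.
// The function shape and defaultPort are assumed for illustration.
interface DbConfig { host: string; port: string; name: string; user: string; pass: string }

function resolveDb(
  dbrcConnString: string | undefined,           // contents of .dbrc, if found
  env: Record<string, string | undefined>,
  localdb: boolean,                             // the --localdb CLI switch
  defaultPort = "5432",
): string | DbConfig {
  const local: DbConfig = { host: "localhost", port: defaultPort, name: "public", user: "root", pass: "root" };
  if (localdb) return local;                    // --localdb ignores everything else
  if (dbrcConnString) return dbrcConnString;    // .dbrc wins next
  const { DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASS } = env;
  if (DB_HOST && DB_PORT && DB_NAME && DB_USER && DB_PASS)
    return { host: DB_HOST, port: DB_PORT, name: DB_NAME, user: DB_USER, pass: DB_PASS };
  if (DB_HOST) return DB_HOST;                  // DB_HOST alone: a full connection string
  return local;                                 // final fallback
}
```

Note that this parameter's precedence chain shares nothing with the cache-size one — which is the whole problem.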

How did they get to that? Each module had hardcoded baked-in defaults in its class/module code. Each module implemented different logic and pulled its configuration from different sources.

The implications: Ops Mess

These practices make Ops a mess, even when it’s well documented.

Somebody has to clean that (img from here)

When each parameter has its own logic, people have to double-check the knowledge base for every parameter. This makes them careful and reluctant. When the knowledge base is oral — they introduce bureaucracy. All this slows things down.

But there’s worse: if most parameters follow the same logic but a few are exceptions — those exceptional parameters are more likely to catch someone off-guard.

It takes experience and foresight to avoid this hassle. Architect guidance in the form of a set of best practices can funnel developers toward the right way.

My configuration Best Practices

In what Environment do you work? (img from here)

My answer to the challenge is the following principles:

Single cascading order for all values in the application

One logic for all parameters. No tricks. No glitches, no gotchas.
(and a single place to document it).

Simplicity builds trust

Limit the number of configuration sources

Fewer sources — less complexity. Fewer sources to check — fewer decisions to make. Fewer assumptions to keep in mind — less headache.

Less is more 😄

Encapsulate the config logic away from the application logic in a reusable module

You are more likely to end up with different logic affecting different values when your application modules do any of the following:

  • check for environment variables directly
  • parse command line switches
  • pull config values from remote sources by themselves

encapsulation (img from here)

The application logic should be indifferent to where the configuration values come from. You can do that using IoC or DI — providing modules with their configuration as a part of their dependencies, or with a configuration provider.

The configuration provider provides the application with values from a single consolidated configuration model. The application should not be aware of how this model was composed.

This prevents modules from diverging in their config source logic.

But also, this facilitates testing. You can now inject your modules with whatever configuration you want to test their behavior against. And — you can do that without setting up files, environment variables, or configuration servers.
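A minimal sketch of such injection — the `Mailer` module and its config shape are invented for illustration:

```typescript
// The module receives its configuration section as a dependency.
// It never reads env-vars, files, or remote sources by itself.
// (`Mailer` and `MailerConfig` are invented names for illustration.)
interface MailerConfig { smtpHost: string; retries: number }

class Mailer {
  constructor(private readonly config: MailerConfig) {}
  describe(): string {
    return `smtp://${this.config.smtpHost} (retries: ${this.config.retries})`;
  }
}

// In tests, inject any values directly -- no files, env-vars, or config servers:
const mailer = new Mailer({ smtpHost: "smtp.test.local", retries: 0 });
```

The module compiles and runs with no knowledge of where its section came from, which is exactly the indifference the text calls for.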

Configuration is a data-model.

We know to isolate database access from logic using dedicated services. Why should it not be so for configuration?

Forbid code files from including any hardcoded config values

If only HMR were our problem... (img from here)

No default values in code whatsoever.

No defaults, no catch-alls, no or-fallbacks (|| ‘val’), no ternary ifs (?:), no if-elses — not when it comes to configuration!
If it is configurable — it must always come as a final value from the outside. ⚠️ If your consts and enums are constructed dynamically from configuration — then yes, them too 🤨

What code files should include is validation assertions. These assertions should make the process fail fast — as it starts, and before it can join the workload. Should a module require a value in runtime — it must validate it during setup: validate that the value exists, validate that it’s legal, and validate that it makes sense with other values.
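For example, a module could validate its section during setup and throw before the process joins the workload — a sketch, with an invented `cache` section:

```typescript
// Fail-fast validation sketch: a module validates its config section during
// setup, so a bad value kills the process before it joins the workload.
// (The section shape and the rules here are invented for illustration.)
interface CacheConfig { maxEntries: number; ttlSeconds: number }

function validateCacheConfig(section: Partial<CacheConfig>): CacheConfig {
  const { maxEntries, ttlSeconds } = section;
  if (typeof maxEntries !== "number") throw new Error("cache.maxEntries is required and must be a number");
  if (maxEntries <= 0) throw new Error("cache.maxEntries must be positive");
  if (typeof ttlSeconds !== "number" || ttlSeconds <= 0) throw new Error("cache.ttlSeconds must be a positive number");
  return { maxEntries, ttlSeconds }; // from here on, the module can trust the values
}
```

Note there is no `?? defaultValue` anywhere — a missing value is an error, not a silent fallback.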

Any configuration issue that can be detected in start-up — should.

Avoid inferring values from other values.
An example of inferring values: when NODE_ENV is anything but production, assume a configuration that disables template caching.
The correct way to do that is to have an explicit config value that controls it, with a concrete name that implies the connection.
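Side by side, with invented names, the difference looks like this:

```typescript
// Inferred (avoid): the behavior is silently tied to an unrelated variable.
function shouldCacheTemplatesInferred(env: Record<string, string | undefined>): boolean {
  return env.NODE_ENV === "production";
}

// Explicit (prefer): a dedicated, self-describing value in the config model.
// (`templates.cacheEnabled` is an invented key for illustration.)
interface TemplatesConfig { cacheEnabled: boolean }
function shouldCacheTemplates(templates: TemplatesConfig): boolean {
  return templates.cacheEnabled;
}
```

The explicit version can still default to the same behavior — but the knob has a name, and ops can turn it independently of NODE_ENV.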

My opinionated hierarchy

Simplicity rules. (img from here)

In the context of cloud services — here are the 3 sources I choose to keep, from weakest to strongest:

1. Use human-readable baked-in files for baked-in defaults

Ship with your distribution a base config file that is NOT meant to be edited by the user, but can be used as a reference. If you ship it as an embedded resource — have a --help CLI command that spits it out.

This file should represent the entire configuration model together with any base defaults. It is a single source-of-truth for all the values in the configuration model and their defaults.
I try to include any value that is pulled from config anywhere in the application — including values that do not have a default (in that case, they have a placeholder, and the validations make sure these placeholders are overridden).
This rule is hard to follow when the configuration is made of lists of arbitrary polymorphic items. So, avoid them as much as you can.
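Such a base file could look like the following sketch — all keys are invented for illustration, and `<required>` is a placeholder convention that the startup validations must catch:

```yaml
# config/defaults.yml -- the full configuration model with base defaults (illustrative)
logger:
  defaultLevel: info        # a safe, vanilla default
cache:
  maxEntries: 1000
  ttlSeconds: 60
db:
  host: "<required>"        # no sane default -- placeholder, must be overridden
  port: 5432                # overridable, but a reasonable guess
```

The file doubles as reference documentation: every configurable value appears here, defaulted or not.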

I prefer YAML/JSON because they do not allow logic. Since every JSON file is also a YAML file — I prefer to use a YAML parser and support both.

(Y)aml (A)in’t (M)arkup (L)anguage (img from here)

I specifically prefer YAML because it addresses humans first (where JSON addresses machines first). As such, YAML allows comments. It supports cross/circular referencing, and more.

2. Facilitate Environment Variables

Any value that is meant to be “exported” from developer land to ops land — should be mutable via an environment variable that controls it.
This is my sole standard way to provide environment-dependent configuration to the application.

3. CLI Switches

Environment variables are the same for all processes that run on the same host. The next step up is a CLI switch — when provided, it should be stronger than any environment variable that facilitates the same value.

True, you can pass ad-hoc env variables to a process; however, the use of a CLI switch also communicates that it’s an ad-hoc solution and calls attention to it. It also acts as a “final word” about a value — no questions asked (as long as it’s valid).


All this does mean that:

  1. All application parameters from Ops level come as environment variables. No files involved.
    The world of docker and Kubernetes makes it particularly easy.
  2. Developers mostly use CLI switches to push a single run this way or that way.
  3. I do not support a --config <path> option.
    No files = no --config, plus — CLI switches come last, remember?

It also means that you’d want a clear correlation between CLI switches and the resulting configuration model. If such correlation can be accomplished with environment variables — then even better.

Let me show what I came up with

All the above in 100 lines.

The slingshot channel (img from here)

This is a real-world implementation of a config module for a containerized architecture. It was written as a reusable module in a shared package for a set of services written over Nest.js.

It is also the first TypeScript module I wrote from scratch.

And no. I still do not like TypeScript.

This config-loader starts with loading the baked-in configuration: it extracts the name and version from the package.json in the current work directory. It then reads ./config using require-yml.

It then moves to process env-vars. It expects to find in the baked-in defaults file a flat section called fromEnv. The keys in this section are names of env-vars. Their values are the paths in the configuration model their value should land in.

Since env-vars hold only strings — the values pass through a YAML-parser. YAML is type-aware, and defaults to string for any value it does not recognize.
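A sketch of that step — the `fromEnv` shape comes from the description above, while `parseScalar` is a tiny stand-in for a real YAML parser's scalar typing:

```typescript
// Map env-vars into config-model paths, as declared in a `fromEnv` section.
// `parseScalar` is a simplified stand-in for YAML scalar typing: it recognizes
// numbers and booleans and defaults to string, as the text describes.
type Model = Record<string, any>;

function parseScalar(raw: string): unknown {
  if (raw === "true") return true;
  if (raw === "false") return false;
  const n = Number(raw);
  return raw.trim() !== "" && !Number.isNaN(n) ? n : raw;
}

// Write a value at a dotted path, creating intermediate nodes as needed.
function setPath(model: Model, path: string, value: unknown): void {
  const keys = path.split(".");
  const last = keys.pop()!;
  let node = model;
  for (const key of keys) node = node[key] ??= {};
  node[last] = value;
}

function applyFromEnv(fromEnv: Record<string, string>, env: Record<string, string | undefined>): Model {
  const overrides: Model = {};
  for (const [envName, modelPath] of Object.entries(fromEnv)) {
    const raw = env[envName];
    if (raw !== undefined) setPath(overrides, modelPath, parseScalar(raw));
  }
  return overrides;
}
```

Only the env-vars declared in `fromEnv` are consulted — anything else in the environment is ignored, which keeps the mapping in one documented place.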

Then, it parses the CLI switches. The name of the CLI switch is the path on the config model (e.g. --logger.defaultLevel info).
Since CLI switches are always strings — it uses the YAML parser here as well to identify numeric and boolean values. An alternative implementation could use minimist — but careful! It presents the temptation to introduce custom logic, so I went without.
I did choose to be forgiving and let the user use a single or double hyphen.
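That parsing step can be sketched like so — a simplified stand-in, not the actual module, with the YAML-ish coercion reduced to numbers and booleans:

```typescript
// Parse `--path.in.model value` switches into a nested override object.
// Sketch only: single or double hyphens are accepted, a switch without a
// value becomes `true`, and `coerce` stands in for YAML scalar typing.
function coerce(raw: string): unknown {
  if (raw === "true") return true;
  if (raw === "false") return false;
  const n = Number(raw);
  return raw.trim() !== "" && !Number.isNaN(n) ? n : raw;
}

function parseCliSwitches(argv: string[]): Record<string, any> {
  const model: Record<string, any> = {};
  for (let i = 0; i < argv.length; i++) {
    if (!argv[i].startsWith("-")) continue;
    const path = argv[i].replace(/^--?/, "").split(".");
    const hasValue = i + 1 < argv.length && !argv[i + 1].startsWith("-");
    const value = hasValue ? coerce(argv[++i]) : true;
    let node = model;
    for (const key of path.slice(0, -1)) node = node[key] ??= {};
    node[path[path.length - 1]] = value;
  }
  return model;
}
```

Because the switch name *is* the model path, there is no per-parameter mapping table to maintain — the correlation between switches and the configuration model stays self-evident.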

Last — it returns a deep-merge of all sources in the right order.
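The merge itself can be sketched as a plain recursive deep-merge, where nested objects merge key-by-key while scalars and lists are simply replaced (the sources and values are illustrative):

```typescript
// Recursive deep-merge: later sources win; nested objects merge key-by-key,
// while scalars and arrays are simply replaced ("naive" list handling).
type Obj = Record<string, any>;

function isPlainObject(v: unknown): v is Obj {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function deepMerge(base: Obj, override: Obj): Obj {
  const out: Obj = { ...base };
  for (const [key, value] of Object.entries(override)) {
    out[key] = isPlainObject(out[key]) && isPlainObject(value)
      ? deepMerge(out[key], value)
      : value;
  }
  return out;
}

// weakest to strongest: baked-in defaults <- env-vars <- CLI switches
const sources: Obj[] = [
  { logger: { level: "info", pretty: false }, tags: ["a"] }, // baked-in defaults
  { logger: { level: "debug" } },                            // env-vars
  { tags: ["b"] },                                           // CLI switches
];
const effective = sources.reduce(deepMerge);
// effective: { logger: { level: "debug", pretty: false }, tags: ["b"] }
```

Note how `tags` is replaced wholesale rather than concatenated — the "naive" list behavior called out below.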


  • modules that use configuration sections are expected to validate them during setup


  • lists are merged “naively”.
    If your configuration relies a lot on lists — you’ll probably need to provide logic for reconciling lists.
  • the configRootPath is a let for test purposes only.


Make it work, and WORKABLE. (img from here)

There are many ways to do configuration. Different applications need different trade-offs between simplicity, flexibility, and focus. Figuring out this trade-off is mastering the challenge of configuration logic.




Coding since 99, LARPing since 94, loving since 76. I write fast but read slow, so I learnt to make things simple for me to read later. You’re invited too.