Why Configurability Always Goes Too Far

Eugene Marin
Published in The Startup · Sep 29, 2020

Well, sure, not always. Fine, this might be a misleading title. Still, it can be useful to keep the mindset of “always,” so that we always ask the right questions at the right time.

The point is this: these days, when we, sometimes experienced engineers, design a system or a feature, we’re often so consumed with abiding by all the principles in the book, be it scalability, reuse, configurability and whatnot, that it’s easy to forget that all these amazing things come at a cost. Of course, a lot of the time this cost is worth paying; that’s why we’re so trained to think in these terms. But blind adoption of these principles does, very often, go too far and become counterproductive.

Let’s explore how configurability, an amazing word indeed, can come back to bite us; in my experience, it does almost every time. I’ll attempt to explain why, and what to do about it.

TL;DR: It often comes down to an unwillingness to make a decision and take responsibility for it, which sometimes we must do. Why should you decide on a certain value or piece of logic if you could leave it to someone else? Because even when it reaches someone else, they’re often in no better position than you to make that decision. So, if you’re in that position, just make it yourself.

Of course we always need lots of configuration. However, we often lose the sense of when we can just make a decision and go with it, even if it’s much more “limiting” on paper. By my estimation, the configurability nightmare usually starts with one of two things: constantphobia, or the “here, you do it” approach.

Constantphobia

Constantphobia — a condition widespread among software developers, constituting an uncontrollable fear of hard coded values (constants). — Nonexistent dictionary.

Consider the following example: you’re working on optimizing a background migration process that takes a bit too long. The process compresses a list of data files into an archive and sends it elsewhere. You’ve come to realize that the compression is what takes most of the time, so you decide to parallelize it: write 8 archives in round robin, then unify them. Testing shows a great improvement. But now you’re wondering, of course: why 8? Won’t we want to change it some time later, or in some environments? Maybe it should be, like, configurable? This is also something your reviewer might ask…
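To make the example concrete, here’s a minimal Python sketch of that round-robin parallelization. All names here are hypothetical, and a real migration process would involve much more than this:

```python
import tarfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

NUM_ARCHIVES = 8  # the hard coded value in question

def compress_chunk(index: int, files: list, out_dir: Path) -> Path:
    """Compress one round-robin slice of the file list into its own archive."""
    archive = out_dir / f"part_{index}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for f in files:
            tar.add(f, arcname=f.name)
    return archive

def compress_parallel(files: list, out_dir: Path) -> list:
    # Deal the files into NUM_ARCHIVES buckets, round robin,
    # and compress each bucket in its own worker.
    buckets = [files[i::NUM_ARCHIVES] for i in range(NUM_ARCHIVES)]
    with ThreadPoolExecutor(max_workers=NUM_ARCHIVES) as pool:
        return list(pool.map(compress_chunk, range(NUM_ARCHIVES),
                             buckets, [out_dir] * NUM_ARCHIVES))
```

Threads versus processes, error handling and the final “unify” step are all real questions a production version would have to answer; the point here is only where the 8 lives.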

Now wait. The question you need to ask yourself at that moment, and a lot of engineers won’t, is this: who is going to change that value, in what scenario, to what value(s), and why? Well, four questions. If you can’t answer them and determine it’s worth it, then generally speaking, keep it constant. If you agree, you can skip ahead to “Here, you do it.”

Here’s where it gets a bit controversial: a lot of engineers will say that even if you can’t answer these questions right now, you should put the value in the configuration, just in case. “You can’t regret this, but you might regret the opposite,” they’ll add, meaning that without the key you’d have to recompile to change the value, should you ever need to. In fact, no one knows whether you’ll regret that configuration key; predicting it takes lots of experience, attention to specific factors and some intuition. But you might indeed regret it very much. Allow me to demonstrate:

  1. “Who is going to change that value?” In this case and many others, the answer is most likely “me, maybe.” What that really means: nobody. See all those configuration values nobody understands and nobody ever touches? They have very similar stories: unnecessary, poorly documented, abstract values added by people who are no longer around, complicating the code and configuration files and slowing down upgrades and initialization.
  2. “In what scenario?” Assuming you do want to change it, it’s probably because it doesn’t work, meaning your tests were not great (and you don’t have any test automation covering it). It’s often a matter of lacking confidence in the code, which good enough automation could provide. Fine, we don’t have that now, but then,
  3. “To what value, and why?” In this case there seem to be two logical options:
    a. Change it to 1 to disable the parallel compression. In other words, we treat it as a feature toggle. Great idea, except we could’ve made it a toggle in the first place. Are we really defining that we’ll only ever use either the default value or the magic value 1, like some Easter egg? Why?
    b. Increase the value, because it’s not fast enough. But are we going to do that? With a background process most people don’t even know exists? And how fast is fast enough? And, more importantly, is it going to work, and work well? Can it fail or hang if for example we put 10000 in there? Oh, right, we should have a minimum and maximum. Let me guess, they’ll both be constant? Congrats, now we’re adding both constants and the complexity of configuration!
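The irony in point b can be shown in code. Here’s a hypothetical sketch of validating the now-configurable worker count; note how the “configurable” value drags three constants along with it:

```python
# All names hypothetical; this is the validation a configurable knob demands.
MIN_WORKERS = 1       # a constant
MAX_WORKERS = 64      # another constant
DEFAULT_WORKERS = 8   # and the original constant is still here

def resolve_worker_count(config: dict) -> int:
    """Read the worker count from configuration, clamped to a sane range."""
    raw = config.get("compression.workers", DEFAULT_WORKERS)
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return DEFAULT_WORKERS  # fall back silently? log? yet another decision
    return max(MIN_WORKERS, min(MAX_WORKERS, value))
```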

But wait, there’s more! What about upgrades? Do we override the value on upgrade if both we and the customer changed it? What about backward compatibility if we move or remove it? Does it stay there as garbage (as many configuration keys do), or do we “migrate” it? And so on.

Sometimes “good enough” is the best. Constants are simple and internal, so if possible, save yourself and others the headache of excessive configurability. If for any reason you’re not confident enough with your new functionality, consider a feature toggle. But spare others from decisions they’re not necessarily capable of making.
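If a toggle is what you actually need, it can be this small. A hypothetical sketch: one boolean with one documented meaning, while the parallelism level stays a plain constant:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical toggle: easy to flip, easy to find, easy to delete later.
PARALLEL_COMPRESSION = True

def compress_all(chunks: list) -> list:
    """Compress each chunk; the toggle only switches the execution strategy."""
    if PARALLEL_COMPRESSION:
        with ThreadPoolExecutor() as pool:
            return list(pool.map(zlib.compress, chunks))
    return [zlib.compress(c) for c in chunks]
```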

Here, you do it

This section is about “code” configurations, such as plugins and integration hooks. Usually, the more complex the system, the more popular this “advanced” type of configurability becomes.

The “here, you do it” approach comes from a good place: the realization that the customer, who exists in the real world and not in a presentation, knows their needs better. So what do we do? We let them build their own solution. Surely they’ll construct the system of their dreams! We’ll call it a “highly flexible system” or an “entirely customizable flow,” and provide them with a bunch of scripts and plugin APIs to modify or enhance their flow’s logic, build integration layers and gateways, execute commands, tweak fields, and move data in mysterious ways. Customize. Adapt. Overcomplicate.

Needless to say, usually it doesn’t turn out exactly like the presentation, but rather like the real world:

  1. Learning that field engineers will do things our engineers wouldn’t even imagine anyone would think of doing, we’ll forbid them from touching anything without our “supervision” (read: ever).
  2. We’ll provide them with ready-made implementations that we prepared for the common uses of the framework, tweaked for individual cases. And we’ll do it much faster anyway.
  3. We’ll learn to avoid certain “highly sensitive” configurations altogether — a good sign nobody needed them in the first place. One day we’ll get rid of them. But not me. And not this year. At least not while this one important customer relies on it in some strange way.
  4. Some customers will continue doing “forbidden” modifications and very creative misuses, so we’ll have to essentially debug their problems, and stabilize the system until the next time they break it (not remembering how, obviously).

In a way we’re dealing with the same belief, that there’s no such thing as too much configurability. This assumption makes us overlook the fact that unlike adding a configuration, which is easy, maintaining it can be hard, and getting rid of it might be impossible.

A common problem with plugin frameworks and configurable integration layers is that they must be very strict about what is possible, what’s legal and what’s potentially dangerous. They must be simple, yet support enormous flexibility. Their errors must be clear; they must provide hints, validations, and debugging or at least monitoring options. The system must withstand almost any mistake imaginable that the user might make inside the framework, and at the very least accurately report it. And what about performance? Security? And again, upgrades? Compatibility? It’s very difficult to get all of this right.

In an ideal world, this kind of SDK, let’s call it, would come with an IDE, let’s call it, especially when delivered to a customer you’ll have to support later on. Unfortunately, that’s often not an option in any real form. So, the general principles for this type of configurability would be:

  1. Avoid them if possible. If you really have common use cases, they should be regular, sealed and ready features!
  2. If absolutely required, use a third-party tool.
  3. If in-house development is absolutely required, define the APIs carefully. They should expose exactly what’s needed. Not more!
  4. Support one way to do things, as much as possible.
  5. Do the best you can to answer the requirements mentioned above (restrictions, validations, monitoring…), and if possible, provide some form of an IDE.
  6. Prepare a strategy for gradual deployment and support. Believe me, no misuse is off the table.
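To illustrate points 3 and 5, here’s a hypothetical sketch of a deliberately narrow plugin API: one extension point, strict validation at the boundary, and misuse that is accurately reported:

```python
from abc import ABC, abstractmethod

class RecordFilter(ABC):
    """The only thing a plugin may do: accept or reject a record."""

    @abstractmethod
    def accept(self, record: dict) -> bool:
        ...

class HostPipeline:
    """The host owns the flow; the plugin gets exactly one decision point."""

    def __init__(self, plugin: RecordFilter):
        if not isinstance(plugin, RecordFilter):
            raise TypeError("plugin must implement RecordFilter")
        self._plugin = plugin

    def run(self, records: list) -> list:
        kept = []
        for record in records:
            verdict = self._plugin.accept(record)
            if not isinstance(verdict, bool):  # report misuse, precisely
                raise ValueError(
                    f"accept() must return bool, got {type(verdict).__name__}")
            if verdict:
                kept.append(record)
        return kept
```

The plugin can’t reorder the flow, touch other records or skip the pipeline; everything it returns is checked at the boundary, which is what makes supporting it tractable.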

Conclusion

You’ve made it! Let me summarize by saying that I do realize there’s the other extreme. I’m not in favor of putting IPs in constants, or of abolishing all configurable integration layers and plugin SDKs. I’m just pointing out that getting carried away with these things, or getting them wrong, will cost you and others a lot in the long run. It’s worth keeping in mind.


Eugene Marin

Software developer, in a relentless search for better ways.