You can handle complexity. So what?

Do not confuse bragging rights with success

Hajime Yamasaki Vukelic
9 min read · Dec 8, 2022
Photo by JESHOOTS.COM on Unsplash

The ability to handle complexity is praised and viewed as a key performance indicator. In many cases, this ability is evaluated as an important element of a programmer’s performance — be it formally, as part of the performance review, or informally, in conversations among peers.

As programmers, our skills — including the ability to handle complexity — are mostly powered by abstract thought. The more of it you can do, the better. The ability to reason about highly abstract concepts like algorithms and data structures, to solve puzzles on various puzzle-solving sites, and to understand and apply complex concepts like OOP, DDD, TDD, design patterns, and monads is commonly seen as a sign of superior intellect. Some may not even think those are complex — that’s how intelligent they are. For the most part, this perception is entirely valid. It does take superior intellect to understand and correctly apply these concepts.

In this article, I discuss the role of this ability in the grand scheme of things, as well as another ability that is equally important, if not more so, but nowhere near as popular.

The two I’s

There are two kinds of complexity that programmers regularly deal with. I like to call them the “two I’s”.

First, there’s the inherent complexity of the problem we are solving. Problems range from “change the background of this button to red when the cursor is over it” to something like “predict the next word the user is going to type based on their chat history”. The solution cannot have less complexity than the problem requires. Another name for inherent complexity is “not my fault”. There normally isn’t much we can do about it, and it’s part of the work we do. It is what it is.

Second, there’s the incidental complexity resulting from the tools we employ in order to implement the solution. A suboptimal choice of tools and methods, insistence on processes and methodologies that are not a good fit for the project, insistence on the appearance of the source code (rather than the final product of its execution), bad habits, incorrect perception of complexity, succumbing to fear of undesirable outcomes or of appearing inadequate to peers, managers, or community members (a.k.a. FDD, fear-driven development)… these are some of the many ways in which we increase incidental complexity. Another name for this is “it’s my fault”. This is the area where the way we work and think has the most impact.

There’s a grey area between the two I’s which I like to call intermediate complexity. It’s not really a different type of complexity. I’m just making things more complex without a good reason, you see. Joking aside, the grey area is the incorrectly specified problem that turns into incidental complexity. This commonly occurs when the problem is a proxy problem — not really a problem but a solution formulated as a problem. For example, the “problem” may be presented as “User wants the file to download within 2s” even though the real problem is “User does not know what is going on so they repeatedly click the Download button causing unnecessary load on the backend”. The requested solution might be 10x more expensive than, say, adding a spinner to indicate busy status and/or disabling the Download button. This type of process error introduces incidental complexity that is still “not my fault”. Depending on your role on the team, you may or may not be able to get this sorted out with the responsible team members (product managers, designers) and get rid of the extra complexity.
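To make this concrete, here’s a minimal sketch of the simpler fix in TypeScript. It assumes a browser app where the download is triggered with fetch; the endpoint, file name, and element names are all made up for illustration.

    // Hypothetical sketch: block repeat clicks and show a busy state
    // while the download is in flight. All names are illustrative.
    const button = document.querySelector("#download") as HTMLButtonElement;

    button.addEventListener("click", async () => {
      button.disabled = true; // no more unnecessary load on the backend
      button.textContent = "Downloading…"; // the user knows what is going on
      try {
        const response = await fetch("/api/report"); // made-up endpoint
        const blob = await response.blob();
        const url = URL.createObjectURL(blob);
        const link = document.createElement("a");
        link.href = url;
        link.download = "report.pdf"; // made-up file name
        link.click();
        URL.revokeObjectURL(url);
      } finally {
        button.disabled = false;
        button.textContent = "Download";
      }
    });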

The complexity challenge

The ability to deal with the inherent complexity — understand requirements and formulate appropriate solutions — is essential. Since solutions cannot be made simpler than the problem demands, there’s a fixed minimum amount of complexity that a programmer needs to be able to handle before they can get to work. This is non-negotiable.

While the inherent complexity is a fixed characteristic of the problem at hand, there’s no limit to how much we can increase incidental complexity. We can throw more tools at the solution, we can introduce more complex methodologies, and so on. We can even make things complex by accident, or simply by following conventions and (best) practices prescribed by our peers. The question is not how complex we can make it, but how well we can cope with the result — in other words, our intelligence.

In an ideal world, the inherent complexity would be the final complexity of the solution. This is only possible if we implement a solution with zero incidental complexity. While zero is not quite achievable, it’s a worthwhile goal to set. Reduction in complexity has tangible benefits. It reduces the amount of work, the total amount of code involved in the implementation (including libraries, not just our code), the number of production bugs (simply because less, and more predictable, code means fewer opportunities for bugs), the time required to become familiar with the solution, and so on.

It may sound like a good idea, and I think it is, but generally speaking, it’s not what most programmers aim for. The amount of complexity we can deal with appears to be a good proxy measure of our intelligence. It is also part of reward schemes at many companies. Therefore, there’s plenty of incentive to increase incidental complexity. For the most part, this is what I see programmers do.

Simpler solutions are boring, and they do not result in extra bragging rights. “I came up with an algorithm to capitalize each word in a sentence using React and TypeScript with full test coverage” sounds “cooler” than “I used text-transform: capitalize”. Although it is obvious that the latter solution has far less incidental complexity, thereby keeping the overall complexity of the solution down, programmers are lured by the opportunity to test their mettle against the more complex approach.
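To make the contrast concrete, here is a hedged sketch of both approaches in TypeScript (the hand-rolled version is deliberately minimal; a “full test coverage” version would be longer still):

    // The "cooler" approach: a hand-rolled capitalization algorithm.
    function capitalizeWords(sentence: string): string {
      return sentence
        .split(" ")
        .map((word) => (word ? word[0].toUpperCase() + word.slice(1) : word))
        .join(" ");
    }

    // The boring approach: one CSS declaration, here applied from code.
    // Equivalent to `text-transform: capitalize` in a stylesheet.
    declare const heading: HTMLElement; // assume we already have the element
    heading.style.textTransform = "capitalize";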

The other challenge

Challenging oneself with complexity is a good mental exercise. When it comes to actual solutions, however, there’s a far more worthwhile challenge — simplifying the solution. I consider the ability to simplify to be the ultimate skill and the ultimate test of a programmer’s ability.

This challenge has a name. It’s called KISS, or “Keep it simple, stupid”.

The basic idea behind KISS is to default to the simplest solution that actually works. It may sound easy, but it’s a lot harder than one would initially think, and I’ve seen it misapplied more often than not.

Let’s first address the “default” part. The idea is that starting from the simplest possible solution and then expanding it later is a lot more efficient than coming up with an arbitrarily complex solution and then simplifying it when it no longer works.

We may also learn things as we work on a solution and find ourselves needing to go back to something simpler. Over the course of our career — as long as we are paying attention along the way — we develop a sense for when there’s too much code for a given problem. Do 152 lines of code sound like too much for input validation on the only text field in the application, whose only constraint is that it must be filled in? How many lines would sound more appropriate? 10? 1?
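For comparison, a minimal sketch (names made up) of what a required-only check can look like:

    // The field's only constraint is "must be filled in".
    function validateRequired(value: string): string | null {
      return value.trim() ? null : "This field is required."; // made-up message
    }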

There are two additional constraints: “simplest” and “works”.

“Simplest” means that we are dealing with a range of solutions of which we need to pick the one that has the least incidental complexity. By extension, the effectiveness of our choice hinges on our experience — we have to be familiar with the complexity characteristics of various solutions in order to make that choice.

The “works” part is also frequently overlooked. I’ve seen programmers who dismiss KISS because they believe the “simplest” solution will result in code that does not satisfy the requirements. That’s not KISS, that’s just S (I’ll let you decide which of the two S’s we’re talking about). In order to qualify as KISS, the solution needs to work — solve the problem completely and satisfactorily. The KISS part is that it solves the problem completely and satisfactorily, and nothing more than that. We do not attach additional constraints that are not integral to the solution, like adherence to arbitrary coding standards, enforcement of some methodology, or personal taste — I know we do that all the time, but believe you me, it’s only to our detriment.

Under the rug

Consider the following chart.

A chart showing two solutions with identical overall complexity but different ratios of the programmer’s own code to third-party code

Both solutions, A and B, have the same total complexity. Solution A contains more library code, while solution B has more programmer-written code.

One way to look at this is: “Yay, I had to write less code!” That’s a valid, and obvious, conclusion.

Another way to look at it is that writing less of our own code did not reduce the overall complexity of the solution. This is an important takeaway, and one that is critically overlooked today. Using more library code merely means that we are writing less code ourselves; it does not mean that we arrive at the solution faster, nor that we are making anything simpler. In other words, a lot of the complexity got swept under the proverbial rug.

From the chart, we can also make another, non-obvious, observation. The portion of the code that is solution-specific is likely higher in solution B. Libraries are typically generic solutions to a class of problems, not fits-like-a-glove solutions to the specific problem we are solving. This means that libraries usually come with lots of execution paths that are unused in our programs, along with any number of mechanisms for selecting the paths that do run, and those mechanisms get executed every time.
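As a hypothetical illustration (all names invented, assuming an ES2020 runtime), compare a generic, library-style number formatter with the solution-specific function an application actually needs:

    // Generic, library-style: options and branching for a whole class of
    // problems, most of which this application never exercises.
    interface FormatOptions {
      locale?: string;
      currency?: string;
      compact?: boolean;
    }

    function formatAmount(value: number, opts: FormatOptions = {}): string {
      // The option-selection machinery runs on every call, used or not.
      return new Intl.NumberFormat(opts.locale ?? "en-US", {
        style: opts.currency ? "currency" : "decimal",
        currency: opts.currency,
        notation: opts.compact ? "compact" : "standard",
      }).format(value);
    }

    // Solution-specific: does exactly the one thing the application needs.
    function formatPrice(value: number): string {
      return new Intl.NumberFormat("en-US", {
        style: "currency",
        currency: "USD",
      }).format(value);
    }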

Based on the chart, we could argue that solution B probably does more than solution A with about the same amount of complexity. That could mean that solution B is overengineered, or that solution A is incomplete. Either way, there’s very likely a functional difference between the two solutions, and solution B is the more efficient one (it gives you more bang per unit of complexity).

The libraries’ I’s

We also need to be aware that using any library implies at least some incidental complexity. These include (and are by no means limited to) the extra complexity associated with the need to:

  • Fetch and install the library
  • Write the extra code that imports the library and makes its functionality available to the application
  • Adapt our code to the library (especially if the library is a framework)
  • Perform regular security audits
  • Work around cases where the library does not quite do what we need

In theory, the more library code we involve, the less of our own code has to deal with incidental complexity. In practice, it’s often quite the opposite.

This also applies to library- or framework-like code that we write as part of the application code. Our code ceases to be solution-specific for various reasons. We may be trying to add more options and configuration to “future-proof” it. We may be prematurely eliminating repetition. We may be enforcing some abstract idea about how we should code, or defaulting to some complex pattern as a matter of habit (e.g., inversion of control) rather than solving a real problem. We may go as far as to make every little bit of code a shared component of the system (e.g., in-house libraries, monorepos), incurring tremendous amounts of incidental complexity. These all fall under the umbrella of YAGNI and can be minimized with a more conservative, need-based approach.
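A hypothetical before-and-after (names invented) of what this looks like in practice: an over-configured helper that has been “future-proofed”, versus the need-based version.

    // "Future-proofed": options added just in case someone needs them later.
    interface TruncateOptions {
      max?: number;
      ellipsis?: string;
      keepWholeWords?: boolean;
    }

    function truncateConfigurable(text: string, opts: TruncateOptions = {}): string {
      const max = opts.max ?? 80;
      const ellipsis = opts.ellipsis ?? "…";
      if (text.length <= max) return text;
      let cut = text.slice(0, max);
      if (opts.keepWholeWords) {
        const lastSpace = cut.lastIndexOf(" ");
        if (lastSpace > 0) cut = cut.slice(0, lastSpace);
      }
      return cut + ellipsis;
    }

    // Need-based: the application only ever truncates titles to 80 characters.
    function truncateTitle(title: string): string {
      return title.length <= 80 ? title : title.slice(0, 80) + "…";
    }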

Does this mean we should never use libraries? Of course not. It just means that we need to be aware of the additional complexity we are dragging into the solution and think about the tradeoffs. There are libraries that solve problems which are trivially solved without them, as long as we are prepared to learn the language features and mechanisms involved (e.g., manipulating strings and dates), and there are libraries that would realistically take years of work to replicate (e.g., displaying and playing back musical notation).
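For instance (a hedged sketch, assuming a modern JavaScript runtime), dates can often be formatted with the platform’s built-in Intl API instead of a dedicated date library:

    // Built-in date formatting; no library involved.
    const formatter = new Intl.DateTimeFormat("en-GB", {
      day: "numeric",
      month: "long",
      year: "numeric",
    });

    console.log(formatter.format(new Date(2022, 11, 8))); // "8 December 2022"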

The level of complexity introduced by libraries depends on the inherent and incidental complexity of the solutions they implement, just like our own solutions. Everything we discuss in this article applies recursively to libraries, including the downsides of any choices made during their design.

At the end of the day, our real goal is to keep the overall complexity of the solution in check and keep our software manageable, not merely write less code.
