
Complexity costs productivity

A technical analysis of complexity: maximising developer productivity by maximising the “reasonability” of the application.

Andrew Howden · Jan 27, 2019

As software developers, our job is to take some arbitrary human process and express it in a way that computers can execute, producing results that are reasonable for humans to understand.

There is an almost infinite number of such processes, and the desire to express them in terms of computing is ever growing:

  • The purchasing and shipping of goods (my speciality)
  • Expressing logistics information
  • Allowing people to leave text messages for each other
  • Writing medium articles

And so on. Software processes are a reflection of real world processes; a sort of ephemeral shadow that follows real world actions around, dipping in and out of the real world through users.

Software is a Russian doll

Complexity is defined as:

Complexity characterises the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions.[1]
https://en.wikipedia.org/wiki/Complexity

Given this is a discussion about the costs of software complexity, it’s worth unpacking what complexity means in terms of software.

It’s perhaps best to reason about software as a sort of “Russian doll”: a nested set of layers, each of which contains another layer beneath it.

The difference between Russian dolls and software is that each Russian doll is approximately the same as those before it; it may differ slightly, but it bears the same shape and proportions. Software, by comparison, is an uneven set of dolls, each of which demands a different shape from the dolls that sit inside it.

Let’s unpack this with a real world example: An eCommerce store. It’s simple enough to think of a store as a magic black box that does ${STUFF}, and then later ${THINGS} turn up at your house:

A magic black box that does things

As we start to think about it a little more, we can work out there are at least two parts:

A rough network diagram of the shop

However, it gets complicated on both sides of the above equation. On the left, the browser might be any one of a number of popular browsers such as Chrome, Firefox, Safari, Internet Explorer or Microsoft Edge. Additionally, the “shop” is actually 4 or 5 different applications all working together:

A network diagram including the various shop components

Each of those components has one of ${N} possible implementations. The webserver, for example, could be “Apache”, “NGINX”, “SimpleHTTPServer” and so on. Additionally, the operating systems themselves could differ: Linux, Windows or macOS. And so it goes for each component of the above diagram.

Each of the components is a composite of various “libraries” that countless developers have written over the years, wired together to produce some sort of application. The “interpreter” component serves as a good example, as it’s the part that people generally think of as “the software”:

Interpreter → PHP Application

Lastly, an application does not operate in isolation, but rather “on top” of a whole swathe of other applications:

Approximate path through the computing stack for a request

To steal inspiration from Douglas Adams’ famous The Hitchhiker’s Guide to the Galaxy:

Software is complex. Really complex. You just won’t believe how vastly, hugely, mind-bogglingly complex it is. I mean, you may think it’s hard to work out your breakfast order, but that’s just peanuts to software.

Addressing Complexity

As much as we’d like to delude ourselves, software developers are not magical beings somehow able to make sense of this absurd complexity. Instead, we work in “abstractions”, or “layers”.

An example: The OSI Model

A famous model is the OSI network model. When one computer talks to another, it speaks in electrical signals; specifically, by varying the voltage between two defined levels representing binary digits. However, we as developers never think about how to encode this data on the wire; indeed, I don’t really know how it works.

Instead, we use a model someone else has built for us and only think about that model. In the OSI model there are 7 layers:

  • Physical: The aforementioned voltage magic. WiFi, Ethernet
  • Data Link: The reliable transmission of data frames between machines. IEEE 802.11, IEEE 802.3
  • Network: Structuring and addressing a multi-node network. IPv4, IPv6
  • Transport: Reliable transmission of data segments between points on a network. TCP, UDP
  • Session: Managing a continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes. AppleTalk
  • Presentation: Translation of data between a networking service and an application. TLS, SSL
  • Application: Resource sharing, remote file access. HTTP, HTTP/2, gRPC

At Sitewards, we spend the vast majority of our time in the application layer. It’s all we care about; anything below that is essentially magic.

This allows us to write reliable software that works over WiFi, Ethernet, satellite, pigeon, etc.
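
To make that concrete, here is what purely application-layer code looks like. A minimal TypeScript sketch, assuming a runtime with a built-in fetch (Node.js 18+ or a browser) and a hypothetical shop.example.com endpoint; everything below HTTP is handled for us:

    // Application layer: we speak HTTP and nothing else. TCP handshakes,
    // IP routing and the physical medium (WiFi, Ethernet, satellite...)
    // are all handled by the layers below.
    async function fetchOrderStatus(orderId: string): Promise<string> {
      const response = await fetch(`https://shop.example.com/orders/${orderId}`);
      if (!response.ok) {
        throw new Error(`Unexpected status: ${response.status}`);
      }
      const body = (await response.json()) as { status: string };
      return body.status;
    }

Whether this request travels over fibre or carrier pigeon is invisible to the code.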

All things are abstractions

This breaking of things down into abstractions is an extremely common process in software development:

  • Within the application, we define object-oriented APIs that allow interchangeable software objects
  • Within the operating system, we adhere to standard operating system interfaces
  • Within the network, we adhere to defined network standards
  • Within distributed systems, we adopt specific behaviours such as exponential backoff and idempotent operations

This allows us to make certain assumptions about our systems, and to swap out components as desired. Practically, this looks like:

  • Adding a new payment provider by implementing it against a payment interface (sketched after this list)
  • Adding a new shipping method by implementing it against a shipping interface
  • Shipping our new software on a new operating system by staying within the software APIs, and not calling out to system APIs
  • Adding an entirely new client, such as a progressive web app or mobile app to our web service
  • Adding a new feature without adding any risk to existing features, nor any additional risk of downtime to the business
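
To make the first of those items concrete, here is a minimal TypeScript sketch with hypothetical names; any provider that satisfies the interface can be swapped in without touching the checkout code:

    // A hypothetical payment abstraction: checkout code depends only on
    // this interface, never on a concrete provider.
    interface PaymentProvider {
      // Returns a transaction id on success.
      authorize(amountCents: number, currency: string): Promise<string>;
      capture(transactionId: string): Promise<void>;
    }

    // One possible implementation; a second provider would simply be
    // another class implementing the same interface.
    class InvoiceProvider implements PaymentProvider {
      async authorize(amountCents: number, currency: string): Promise<string> {
        // ... call the provider's API here ...
        return `invoice-${Date.now()}`;
      }
      async capture(transactionId: string): Promise<void> {
        // ... mark the invoice as due here ...
      }
    }

    // Checkout logic does not change when a new provider is added.
    async function checkout(provider: PaymentProvider, amountCents: number): Promise<void> {
      const transactionId = await provider.authorize(amountCents, "EUR");
      await provider.capture(transactionId);
    }

The same shape works for the shipping interface: the abstraction fixes the boundary, and implementations vary behind it.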

If we stick to a set of defined patterns while building our software, we can dramatically reduce how much we have to consider as we implement each change.

The less we have to consider, the faster we can implement our code and the less we have to plan for interactions with other code.
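
One such defined pattern is the exponential backoff mentioned earlier: because it is well known, a reader recognises it at a glance rather than re-deriving it. A minimal TypeScript sketch:

    // Retry an idempotent operation, doubling the wait between attempts:
    // 100ms, 200ms, 400ms, ... up to maxAttempts tries in total.
    async function withBackoff<T>(operation: () => Promise<T>, maxAttempts = 5): Promise<T> {
      let delayMs = 100;
      for (let attempt = 1; ; attempt++) {
        try {
          return await operation();
        } catch (error) {
          if (attempt >= maxAttempts) {
            throw error; // give up after the final attempt
          }
          await new Promise((resolve) => setTimeout(resolve, delayMs));
          delayMs *= 2;
        }
      }
    }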

Identifying abstractions

So, given that there are so many patterns out there to solve our respective software problems, it should be trivial to ensure we stick to those patterns and make sure our software is always cheap, resilient to change and easy to reason about! However, in practice it’s not that simple.

Generally speaking, all software we build is new: it solves a different problem than has been solved previously, or it solves it in a new way. That means the problem isn’t really clear yet, and nor will it be until we’ve made several attempts to “solve” it with software.

When writing code, there’s a choice we have to make: how much to “abstract” the code into predictable chunks. Too much abstraction and the abstraction provides little value; the application hasn’t actually reduced complexity, and it’s still too expensive to reason about. Too little abstraction and it’s impossible to reason about how our changes will impact other parts of the codebase; changes will propagate through the system in unexpected ways, breaking things.

There are some good techniques to help identify what abstractions should exist, such as Domain Driven Design, but for me at least, only hindsight can point out where the application is likely to change.

Accordingly, I do a limited amount of design up front and instead ship a minimal version of the application to users to see how they use it (a “minimum viable product”). Users rapidly indicate where the fault lines in the software are, and the software can be fairly easily rewritten.

This sucks, as it makes it hard to predict what a given piece of software will cost. However, there’s little to be done about it: such guarantees would mean good models had already been found, in which case the software would not need to be written; and if there are no good models, no amount of planning will identify them at the rate customers will.

Technical Debt

Technical debt can be defined as:

Technical debt (also known as design debt[1] or code debt) is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy solution now instead of using a better approach that would take longer.[2]
https://en.wikipedia.org/wiki/Technical_debt

In my experience “technical debt” is broadly the violation of patterns otherwise established in the codebase to make the application simpler, or easier to “reason” about.

Technical debt can be deliberately introduced, for example by modifying the code in a way that connects systems traditionally not connected, in order to make a feature work on a short deadline. More concretely: synchronizing two systems’ databases nightly so that each sees the “same” data, even though in principle there should be only one authoritative store of data.
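
A deliberately crude TypeScript sketch of what that shortcut might look like, with hypothetical store interfaces; the “proper” fix would be a single authoritative store, not a better copy loop:

    // Technical debt in code form: two systems each keep their own copy
    // of customer data, reconciled by a nightly job. Anyone reasoning
    // about either system must now know this job exists.
    interface CustomerStore {
      all(): Promise<Array<{ id: string; email: string }>>;
      upsert(customer: { id: string; email: string }): Promise<void>;
    }

    async function nightlySync(source: CustomerStore, target: CustomerStore): Promise<void> {
      for (const customer of await source.all()) {
        // Last write wins; conflicting edits in the target are silently lost.
        await target.upsert(customer);
      }
    }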

However, technical debt can also be “discovered” in the sense that patterns that are obvious in retrospect were not implemented while work was being completed.

In either case, technical debt makes a software application unpredictable. Because developers who need to make changes to a codebase must first understand it, and because this debt makes the codebase behave less predictably, the cost of changing that area of the codebase is significantly higher than in other, simpler areas.

In this case the solution is fairly simple: if the area is rarely encountered, simply accept the technical debt and move on. However, if the area will be continually read, analysed for security issues or modified, first change the code so that it’s simpler to read and understand, and then add the additional functionality afterwards.

This is most commonly referred to as “Refactoring”.
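
In code, refactoring is often as mundane as the following TypeScript sketch, with hypothetical names: first make the existing logic simple, then add the new behaviour as a small, isolated change:

    // Before: the shipping cost logic is one tangled conditional;
    // extending it means re-reading the whole expression.
    function shippingCostBefore(weightKg: number, express: boolean): number {
      return express
        ? (weightKg > 10 ? 25 + weightKg : 25)
        : (weightKg > 10 ? 10 + weightKg : 10);
    }

    // After: the same behaviour, refactored into named pieces. The next
    // feature (say, a new surcharge) becomes a one-line, low-risk addition.
    function baseRate(express: boolean): number {
      return express ? 25 : 10;
    }

    function weightSurcharge(weightKg: number): number {
      return weightKg > 10 ? weightKg : 0;
    }

    function shippingCost(weightKg: number, express: boolean): number {
      return baseRate(express) + weightSurcharge(weightKg);
    }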

Keeping things simple

Complexity increases the cost of development directly, as well as increasing the risk of budget slips, bugs and other negative consequences for the ongoing health of a software project.

Accordingly, it’s worth reducing complexity as much as possible. There are various ways to do this:

Retain staff

Paraphrasing our earlier definition of complexity, the goal is to be able to make higher-level predictions given a set of lower-level components; or, to be able to think of a thing as a “black box” that, given a set of inputs, will return a set of outputs.

Those who have more experience within a given codebase are able to make those predictions quickly and easily, based on their experience of it and their understanding of the lower level components.

In essence, they become the “black box” that is easy to reason about.

This is a high risk strategy. Though we all love our colleagues and wish they would remain with us forever, invariably people get promoted, fired or move companies. Accordingly, while staff retention can put off the cost of making systems simpler, those costs will eventually need to be paid.

Share reading material among the team

Our ability to reason about software is a direct result of our experience of software up to this point. We can acquire new experience by reading about the experiences of others, encoded as books, blogs, tutorials, etc.

With shared mental models of how the code works we can write and implement changes that each other can understand, predict and build on top of.

Follow “standards”

Given that our software industry is now many decades old, a whole series of problems that we wish to solve have already been solved before us. If we’re lucky, such wisdom has been written down and widely adopted, and documents exist that specify exactly how a given system should behave.

This works for the somewhat obvious things like:

  • HTTP
  • TCP
  • gRPC
  • REST

However, there are a number of less widely adopted but still extremely valuable pseudo-standards.

If no standard exists for a problem that you find yourself solving regularly, define one and write it up. At worst, you’ll start encouraging others to follow the same pattern and our collective complexity burden will go down, and at best someone will correct you and point you to the standard.

Invent less

In a similar vein to the standards approach, our particular insight into a problem is likely not a unique one. If possible, we should keep what we invent to an absolute minimum, solving only the issue that we specialise in.

This means adopting the solutions of others, and customizing them only slightly for our own implementation. Things like:

  • Adopting a framework for one’s application
  • Adopting a library to solve a given problem
  • Adopting a process that’s proven successful for previous teams

While those things may appear nonsensical at first, often there is a tremendous amount of wisdom in them that isn’t readily recognizable until we’ve spent time in the problem space.

As an aside, it’s worth choosing open source solutions where possible. Though there may be rare exceptions, commercial entities exist to make a profit, which fundamentally means pleasing customers. Those customers may not know quite what they want, and unless there is strong product development leadership, the company is likely to fall for the (initially) more profitable “the customer is always right” model. Open source, by contrast, doesn’t usually have a customer: it exists to solve a problem, and its success is much more a function of how well it solves that problem.

In Conclusion

The cost of software complexity is a nebulous topic; one that’s hard to pin down and articulate. However, hopefully this has provided some perspective on how successful systems have been built: the push to make them less complex, more reasonable and, ultimately, simpler.

It’s worth starting a conversation with your team about how future projects will be implemented. If the answer to “Can this be implemented in a simpler way?” is “yes”, poke hard at why the simpler solution is not the best one.
