The Inevitab-ilities in Software Architecture

Amit Bhandwale
6 min read · Feb 6, 2023


In the movie Avengers: Endgame, Thanos, the Mad Titan, utters the line "I am inevitable." Those three words were profound in meaning as well as in effect. In saying he was inevitable, Thanos meant that there would always be someone just like him doing the exact same thing, and there was nothing the Avengers or anybody else could do to stop it.

Inevitable comes from the Latin word inevitabilis, which means unavoidable. If you say something is inevitable, you give the sense that no matter what scheme you come up with to get around it, it's going to happen sooner or later.

It is said that in life there are three inevitable things: change, taxes and death. Similarly, in the world of software, one comes across many instances of 'the inevitable'.

Although I have been working in the field of software for ~18 years, it is only over the past few years that I have come to understand the Inevitab-ilities (see what I did there 😋) in it.

Unlike the well-known NFRs / -ilities such as usability, maintainability, scalability, availability and security, however, inevitability is not something one can quantify.

Let’s back up a bit …

For the first decade of my career, I worked on CAx (computer-aided design, engineering, manufacturing and the like) software, which was desktop-based. That meant big monoliths (not pure monoliths per se; component-based engineering was a thing, and software was broken down into reusable libraries or components) that had to be redistributed or redeployed every time changes were made. And when I say redistributed, I mean sending out CDs for customers to re-install … feeling ancient as I write this 😑

The agile manifesto was published in 2001, but it took until around 2010 for it to start gaining traction. In those days, waterfall models ruled, and at the companies I worked for, a single software release cycle took more than a year … sounds sacrilegious, doesn't it 😬

So basically, my skill set was waterfall, desktop-based software development involving data structures and computer graphics, and if any database work came along at all, it was on a relational database. Architecture was laid out before development began and rarely deviated from, security was an afterthought, and so on. As you can see, my view of the software engineering world was pretty myopic, like a frog in a well.

My first exposure to the cloud came circa 2012, when my then company started putting all its eggs into that basket. At the same time, it moved its products to a subscription-based model and started delivering updates and new releases via downloads (instead of CDs). But it was not until 2015 that I was introduced to the wonderful world of software architecture and underwent formal training in it. Hooked, I started reading voraciously, and as a result my knowledge grew many-fold.

I also got to work on a client-server architecture for the first time, and our teams started following the agile way of doing things. However, the architectures I proposed and helped build were pretty stable (yearly release cycles), and I could go to bed at night and wake up the next day confident that there would be no rude shocks. So if the design is (supposedly) stable in the long run, what does a software architect do the rest of the time?

Deal with one of the biggest inevitabilities, of course: Technical Debt. Technical debt is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of a better approach that would take longer. Alex Omeyer explains it really well in his article (https://medium.com/understanding-tech-debt/the-simple-reasons-tech-debt-is-inevitable-ffca7b70a804), which resonated with me since he explains it via the laws of thermodynamics (my background happens to be Mechanical Engineering).
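To make that tradeoff concrete, here is a minimal Java sketch (all names and numbers are hypothetical): the hardcoded version ships today but guarantees rework the moment a second region shows up, while the configurable one costs a little more effort up front.

```java
// Hypothetical example: the "easy now" choice vs. the longer, better one.

// Easy now: the tax rate is baked in. Ships today, but every new region
// means a code change, a rebuild and a redeployment. That is the debt.
class QuickInvoiceCalculator {
    double totalWithTax(double amount) {
        return amount * 1.18; // 18% tax, true for exactly one region today
    }
}

// Better approach: the rate is injected from configuration. More upfront
// plumbing, but a new region becomes a config entry instead of rework.
class ConfigurableInvoiceCalculator {
    private final double taxRate;

    ConfigurableInvoiceCalculator(double taxRate) {
        this.taxRate = taxRate; // supplied per region from configuration
    }

    double totalWithTax(double amount) {
        return amount * (1 + taxRate);
    }
}
```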

Let’s jump forward again …

After conquering some of the technical debt in a desktop-based client-server app, I made my foray into the cloud and distributed-systems domain circa 2019. It was a watershed moment as far as my professional career goes: overnight I went from desktop, monolith, client-server and C++ to cloud-native, distributed, microservices, AngularJS, JavaScript, Java, Spring Boot and AWS.

Unlike the desktop world, where one had to build many components from the ground up, the cloud-native ecosystem provides a plethora of libraries and proven components that one simply integrates into an app or service. Many of these components embody design patterns, so from an Architect's standpoint, you are getting Design-Pattern-As-A-Service (DPaaS) … I might be onto something here 😉 The integration part, however, is non-trivial: a wrong or improper integration can directly affect your cloud infrastructure costs.
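As one hedged illustration of DPaaS, here is a small Java sketch using Resilience4j's circuit breaker, a library embodiment of the circuit-breaker pattern; the service name and the pricing call are made up for this example.

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;

public class PricingClient {
    // "pricing-service" is a hypothetical breaker name for this sketch.
    private final CircuitBreaker breaker = CircuitBreaker.ofDefaults("pricing-service");

    // The pattern arrives pre-built: wrap the remote call, and after
    // repeated failures the breaker opens and fails fast instead of
    // letting retries pile up (and run up your cloud bill).
    public String fetchPrice(String sku) {
        return breaker.executeSupplier(() -> callPricingApi(sku));
    }

    private String callPricingApi(String sku) {
        return "42.00"; // placeholder for the real HTTP call
    }
}
```

The point is the integration, not the pattern: tune the breaker's thresholds badly and you have traded one cost problem for another.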

The sheer number of variables makes it very difficult to architect for the long term. Forget a whole release; there is no guarantee that your architecture will remain valid over successive sprints, primarily because of the agile way of working, where you deliver in small increments and new requirements are always just around the corner. The end result: dealing with the inevitability that you WILL NEED to Refactor, i.e. improve the internal structure or operation of code or a component without changing its external behavior.
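A tiny Java sketch of what that means in practice (hypothetical names and rates): the observable behavior is identical before and after; only the internals improve.

```java
// Before: duplicated conditionals accumulated over several sprints.
static double shippingCostOld(double weightKg, boolean express) {
    if (express) {
        if (weightKg > 10) { return weightKg * 3.0 + 15; }
        else { return weightKg * 3.0 + 5; }
    } else {
        if (weightKg > 10) { return weightKg * 1.5 + 15; }
        else { return weightKg * 1.5 + 5; }
    }
}

// After: the duplication is factored out. Every input that reached the old
// version returns the exact same result here, so callers never notice.
static double shippingCost(double weightKg, boolean express) {
    double ratePerKg = express ? 3.0 : 1.5;
    double surcharge = weightKg > 10 ? 15 : 5;
    return weightKg * ratePerKg + surcharge;
}
```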

If you are building a cloud-native, microservice-based system in an agile manner (duh!), you have your work cut out for you, plain and simple. You will accumulate Technical Debt at an accelerated rate, especially if the release-to-market (RTM) timeline is one of your primary KPIs. Moreover, if you are not building everything from scratch and are using third-party APIs, SDKs, components and modules (let's call these services), you end up introducing a hard dependency on each such service. Now, if the dependent service is itself under development, like your own system (this happens frequently in my domain, where organizations build a plethora of internal frameworks), you will encounter situations where you cannot move ahead until the dependencies are resolved. The Architect now has to deal with a different kind of 'people' problem.

Also, since the RTM timeline is one of your KPIs, you do not have the luxury of waiting for the dependencies to get resolved and hence need to come up with a "short-term" solution. These situations really test the mettle of an Architect, because now two sets of architectures need to be maintained: as designed (long-term) and as implemented (short-term).

It is paramount that the short-term solution you adopt to resolve a dependency lag is "in line" with your long-term design/strategy, so as to minimize the refactoring delta.
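One way to keep the two in line, sketched below in Java with hypothetical names, is to program against an interface you own: the short-term stub and the eventual real integration satisfy the same contract, so the refactoring delta shrinks to swapping one implementation for another.

```java
// Your own contract; callers depend on this, never on the unfinished framework.
interface NotificationGateway {
    void send(String userId, String message);
}

// Short-term: the internal notification framework isn't ready yet, so log
// and move on. Ships today and still honors the long-term contract.
class LoggingNotificationGateway implements NotificationGateway {
    @Override
    public void send(String userId, String message) {
        System.out.printf("[stub] would notify %s: %s%n", userId, message);
    }
}

// Long-term: when the framework lands, add this class and delegate to it.
// Nothing upstream changes.
class PlatformNotificationGateway implements NotificationGateway {
    @Override
    public void send(String userId, String message) {
        // delegate to the real framework's client here
    }
}
```

Done this way, "as implemented" and "as designed" differ by a single class instead of a rewrite.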

Now you will ask, "If refactoring is inevitable, why should I worry about the delta?"

You should worry because it will be an uphill task, to say the least, to get management buy-in to overhaul a short-term solution (which works) in favor of your proposed long-term one when there is no upside with respect to functionality. Once the code for a particular piece of functionality is in, demos have been given and management has moved on to the 'next' topics, your chances of refactoring diminish fast, unless the short-term solution exposes an NFR flaw, e.g. a performance impact, compromised usability or a security gap.

The Architect needs to don another hat, that of an influencer, and secure refactoring bandwidth in every sprint/iteration/release of product development.

Let’s wrap things up …

In an agile, fast-paced environment, the Architect's job can be akin to a tightrope act. But whatever the complexity of the problem at hand, first principles can be your best friend and will rarely take you down the wrong path.

At the end of the day, software architecture is a game of tradeoffs, and an Architect's first answer to any question should always be "It depends!"
