Architectural Misconceptions — Future Proofing
When I was in my second year of uni in 1996 I bought a PC. I don’t think it was a brand name, but it cost me around £1,000 and had an Intel Pentium 75 processor: a 75MHz chip. It also had a massive 1GB of hard drive space and a built-in dial-up modem. It was running the latest Windows OS, Windows 95. I forget how much RAM. It was state of the art… I’m not even being sarcastic. And the first thing I did after setting it up was install PGA Golf, which felt like I was controlling an actual human being. Of course, in those days, games like that had to be installed and run in MS-DOS. It was brilliant.
A couple of years later, I ran out of disk space and decided to upgrade. I bought a hard drive from somewhere and installed it into the machine; I think it was a 500MB drive. Then, some years after that, I upgraded the RAM. I think I upgraded the processor as well. When DVDs became popular I replaced the CD drive with a DVD drive. That desktop Windows 95 machine lasted me around 5 years. And I only replaced it then because I was working and had some money.
In 2009 I bought my first Android smartphone, the HTC Hero. A 3.2-inch screen of goodness. It had all the latest tech of the time. It had features I didn’t need yet, but I knew those features would one day be useful. I replaced it a year later with the HTC Desire… sweet phone that was ahead of its time, with a 3.8-inch screen. Two years after that came another, and then a year later another, and then another.
In software, people talk a lot about “Future Proofing”. It’s become a key indicator of quality: if your software is not future proofed, then you have cost your clients, customers and organisation far more money than was necessary.
Designers and architects invest heavily in future proofing their applications. But I often come across a misunderstanding of what the term means. And as you’ll see, I don’t think the term should be used at all.
Future Proofing is the development, now, of features that we believe will be needed in the future. By doing this, the thinking goes, we reduce costs by having existing resources produce all the features at once.
What we really want
Software should be designed and developed in a way that it can be extended with additional features, technology and systems in the future to fulfil requirements as they are needed and not before. Our software should be extensible.
My story above about the PCs and smartphones is, for me, an illustration of the difference between what is meant by “future proofing” and what is “extensibility”.
PCs in the 1990s did very little. They were limited, but they did what was needed at the time. They didn’t include built-in webcams as there was little point back then; the internet was too slow for video calling. They didn’t include a microphone or speakers. They did what their makers knew customers definitely wanted, thereby keeping costs down (or as low as possible)… but they could be extended as needed. They had ports and swappable (plug-and-play) parts. You could get a bigger monitor if the one it came with was no longer useful. As the internet improved you could plug in a webcam. This meant my pretty decent computer from 1996 was still pretty decent in 2000, even though it was different.
These days, smartphones are feature packed. Some of those features will never be useful or needed (think 3D screens or built-in projectors); others might one day have become useful, but by the time they did, the rest of the phone was obsolete (I’m thinking fingerprint sensors 3 or 4 years back). They will be described as future proofed because they have the latest technology. And yet everything is locked down. I cannot change the processor or the RAM. I can sometimes (though more rarely these days) increase the storage space with an SD card. But that’s about it. I know Google has Project Ara, which addresses this, but it is the only one. And so, if I want to keep up, I will need to buy a new phone every year or two. Phones are very, very expensive because of all the features that are added, some of which are never used.
This is not a criticism of the smartphone industry. There may be important business and user benefits to working this way. There may be little choice, as a phone is a limited package. Google’s Project Ara isn’t a runaway commercial success, so maybe people aren’t ready.
The main point is to simply illustrate what is future proofing and what is extensibility and why, from a software perspective, we want to encourage extensibility.
When delivering your software, the aim of every architect and software professional should be to produce software that fulfils the requirements at hand as cheaply as possible, without delivering additional features in an attempt to pre-empt future requirements, while ensuring that the software can be changed or upgraded with minimal cost when new business or user needs arise.
Why avoid additional features even if you know for sure they’ll be needed one day?
All features add cost: the cost of analysis, coding, testing, support, maintenance and complexity. Even the smallest feature has an impact. The ROI of those features will not be realised until the point they are needed, which effectively increases the cost even more. But even more concerning is that they will be based on potential future requirements in the context of today. What we know now, and what the world is like in a year’s time (or even a month’s time) when the requirement may actually arise, are two different things. Things change and businesses change.
This sort of thinking is very much part of the Agile/XP way of thinking. The following is from the book Learning Agile by Andrew Stellman and Jennifer Greene.
Ron Jeffries, who worked with Kent Beck to co-found XP, described a simple way to avoid the framework trap: “Always implement things when you actually need them, never when you just foresee that you need them.” Some XP teams like to use the acronym YAGNI, or “You Ain’t Gonna Need It,” when they talk about this.
This was describing a problem called the framework trap (which I’ll leave you to read the book for), but the principle applies equally to Future Proofing: adding features now that we believe we’ll need in the future to save time later, when they may never be needed at all.
As architects, we must design according to what we know now (make design decisions at the last responsible moment), but design our systems in a way that we minimise impact of change.
There are a number of principles that should be followed to aid in extensibility.
Create components that abstract clients from the underlying implementation. For example, when creating a decisioning component that returns the next best decision, design your interface so that the consumer doesn’t care whether your component is using a decisioning engine, a rules engine, regression equations or a bunch of if statements (though, code wise, the last option should be avoided for complex logic). This provides an opportunity to extend the capabilities of your component by adding more data into your decisions, or by changing from if statements to a rules engine. Your consumer doesn’t care and is not affected.
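A minimal sketch of that idea in Python (the names `DecisionProvider`, `next_best_decision` and the decision strings are all hypothetical, invented for illustration): the consumer depends only on an abstract interface, so the implementation behind it can be swapped from if statements to a rules engine without the consumer changing.

```python
from abc import ABC, abstractmethod


class DecisionProvider(ABC):
    """Abstracts how the next best decision is produced."""

    @abstractmethod
    def next_best_decision(self, customer: dict) -> str:
        ...


class IfStatementProvider(DecisionProvider):
    """Hard-coded logic; fine while the rules are trivial."""

    def next_best_decision(self, customer: dict) -> str:
        if customer.get("balance", 0) < 0:
            return "offer_overdraft_review"
        return "offer_savings_account"


class RulesEngineProvider(DecisionProvider):
    """Could delegate to a real rules engine later; same interface."""

    def next_best_decision(self, customer: dict) -> str:
        # In a real system this would call out to the engine.
        return "offer_savings_account"


def handle_customer(provider: DecisionProvider, customer: dict) -> str:
    # The consumer depends only on the interface, never the implementation.
    return provider.next_best_decision(customer)
```

Swapping `IfStatementProvider` for `RulesEngineProvider` requires no change to `handle_customer`, which is the whole point.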
Keep software as simple as possible while still achieving the requisite functionality. Before including queuing, caching or calls to multiple external systems, ask whether they are really needed. Less complexity means easier change.
Single Responsibility Principle
This is one of my favourite principles, one that was “named” (or “defined”) by Uncle Bob. There are plenty of resources that go into a fair bit of detail on it; I would Google “Uncle Bob Single Responsibility Principle”. In essence, the principle is:
Each software component should have one reason to change
A clear sign of this being broken is when a component has a name like “Address And Bank Details Lookup”. Many components, each doing one thing and one thing only, is far better for extensibility than one component doing many things.
How does this help extensibility? It’s actually another aspect of simplification. By having many components doing small tasks, you can easily change how those tasks work and know the impact is small and localised, as long as you have abstracted adequately. You can also add components fairly easily as and when needed. And when a component has only one reason to change, unit testing it is a much simpler task.
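To make the “Address And Bank Details Lookup” smell concrete, here is a sketch (class and method names are invented for illustration) of the same responsibilities split into two components, each with exactly one reason to change:

```python
class AddressLookup:
    """One reason to change: how addresses are found."""

    def __init__(self, addresses: dict):
        self._addresses = addresses

    def find(self, customer_id: str) -> str:
        return self._addresses[customer_id]


class BankDetailsLookup:
    """One reason to change: how bank details are found."""

    def __init__(self, accounts: dict):
        self._accounts = accounts

    def find(self, customer_id: str) -> str:
        return self._accounts[customer_id]


def customer_summary(customer_id: str,
                     addresses: AddressLookup,
                     bank_details: BankDetailsLookup) -> str:
    # Anywhere both are needed, compose the two small components
    # rather than merging them into one.
    return f"{addresses.find(customer_id)} / {bank_details.find(customer_id)}"
```

If the address source moves from a database to a postcode API, only `AddressLookup` changes; `BankDetailsLookup` and every consumer of it are untouched.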
Test Driven Development
This is not an area of expertise for me; when I started coding, writing unit tests was considered the realm of hippies. In fact, if you’d seen the code I inherited on my first ever coding project, you’d have thought testing was forbidden altogether.
However, it’s something I have embraced as I have seen the benefits in my own code. In the context of extensibility, it gives you the freedom to extend your software in the knowledge that you won’t break how something currently works. It’s an excellent technique, and I’m disappointed not to see it more widely adopted.
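A tiny illustration of the rhythm (the function and its behaviour are invented for this example): in TDD the tests below are written first, fail, and only then is `apply_discount` written to make them pass. From then on, any extension of the function runs against these tests, so you know immediately if existing behaviour breaks.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Production code, written only after the tests below existed."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Written first; these fail until apply_discount is implemented.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

The safety net matters more than the individual assertions: it is what lets you refactor or extend without fear.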
Finally, I’m going to borrow from Uncle Bob (Robert C. Martin) again. Order his book Clean Code now, read it, learn it, understand it. Clean Code will take you a long way towards ensuring your software is extensible. It’ll be readable, understandable, properly structured and refactorable. You want to do for your code what your architectural principles do for your architecture.
Future Proofing as a term is fairly innocuous, seemingly correct in its meaning. After all, it’s arguable that “future proofed” and “extensible” mean essentially the same thing. Linguistically, that’s hard to argue against. But in the context of software development, the intention behind the phrase is far from innocuous and can have (and has had) harmful consequences: delayed delivery, buggy code, under-performing software and maintenance hell. Avoid it with everything you have.
One last thing to note: at some point all software stops being extensible. This is a natural limitation as technologies advance, just as at some point your 10-year-old PC can no longer accept the parts that would make it “modern”. But you should not be having a “technology refresh” every 5 years (not least because most tech refreshes take 5 years to complete); what you deliver today should still be something you can build upon in 5 or 6 years’ time without wholesale change.