Future proofing your Software Solutions

How to avoid developing “LEGACY” software

Throughout my development and consultancy career, I have come across a lot of “Legacy” software. Whenever I get the chance to work on such a project, I try to imagine what went on for it to end up in the state it is currently in.

What is Legacy Software

To me, Legacy Software is software or systems that have one or more of these characteristics:

  • The software is mission critical to your business
  • Dependent on vendors that are no longer in business (or no longer support the product)
  • Dependent on hardware that is no longer supported
  • Dependent on a specific operating system (possibly past its shelf life)

And I would go further and add the following when it comes to the software itself:

  • It is beyond the capabilities of the average developer (the programming language is out of fashion and no longer supported)
  • It cannot accommodate the new changes the company has undertaken
  • The original creators are the only ones supporting and maintaining the system

Who has Legacy Software

Legacy Software can be found anywhere: in power stations, energy companies, nuclear plants, manufacturing, banks and other financial firms, the defense industry, hospitals and more. You potentially have some legacy software or systems within your own company.

Some of these systems are bigger than others, which is another factor to take into account when calculating the risk you are taking on.

As developers, what can we do now to future proof our systems?

In the world of Clouds

Just because we are no longer tied to physical servers doesn’t mean we are immune to the problems these legacy systems suffered from. Working in the cloud could still put your company in a similar situation in the future.

Having your software in the cloud automatically ties you to a vendor: the supplier of the cloud platform.

Deployment strategies such as containers can help mitigate this problem: you are not tied to a single cloud provider for deploying your software and could change providers relatively easily.

You will have to be wary though: most of these cloud providers offer additional services that tie you into their ecosystems. Azure Functions and AWS Lambda are examples: specific services with provider-specific APIs. By using them, you are taking a risk. You are betting on that cloud provider’s future success and on their continued support of the feature in order for your software to stay functional.

There are many questions you should be asking yourself (and the development team) when you are considering these options.

I will try to go through some key pointers that could mitigate these risks and still allow you to take advantage of all the offerings and benefits cloud providers give you, whilst still protecting your software for the future.

What can we do to future proof our software

Here is my take on what I believe should future proof your software. I will be outlining pointers and some questions you should be asking yourself in order to determine the risk you will be taking; combining these, you will be able to mitigate that risk and future proof your software.

Note: all of these come from my personal experience. If there is anything I have missed or that you disagree with, please let me know.

Work with established patterns

I’m going to start off with the small and obvious: working with established patterns in the development and architecture of your software. This gives you the benefit of the extensive support and knowledge base available online. You also get the benefit of established communities around these patterns, which could make support in the future easier.

Even if the patterns are no longer in use 30 or 40 years from now, the fact that your code follows a pattern could make it easier for future developers to learn and understand the code base.
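As a small illustration, here is a sketch of one long-established pattern, Strategy, in TypeScript. The names and pricing rules are made up for the example; the point is that a future developer who recognises the pattern can orient themselves quickly, even if the surrounding framework has fallen out of fashion.

```typescript
// Strategy pattern: interchangeable algorithms behind one interface.
// The shipping rules and prices here are hypothetical.
interface ShippingStrategy {
  cost(weightKg: number): number; // price in pence
}

const standard: ShippingStrategy = { cost: (w) => 300 + w * 50 };
const express: ShippingStrategy = { cost: (w) => 900 + w * 120 };

// Callers depend on the pattern's interface, not on a concrete calculation.
function quote(strategy: ShippingStrategy, weightKg: number): number {
  return strategy.cost(weightKg);
}

console.log(quote(standard, 2)); // 400
console.log(quote(express, 2));  // 1140
```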

Test suite covering your business cases

This goes without saying: your software should be tested. Unit tests make sure you have good code coverage and that the code works as expected, and you will want integration tests for larger interconnected systems. But the most important of them all, I believe, are the tests covering your business cases.

People come and go, on both the development side of the software and the business side. The people who set out to build the software will not be there for its whole lifetime, so the software has to be self-sustaining.

A test suite covering your business cases means that future maintainers can ensure the software keeps functioning as it should. Any changes that conflict with the original design will also be documented through changes to the tests, which will extend the lifetime of your software.
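As a minimal sketch, here is what a business-case test might look like using Node’s built-in test runner (available in recent Node versions). The discount rule and the applyLoyaltyDiscount function are hypothetical; the point is that the test documents a business rule, not an implementation detail.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical business rule: orders over £100 get a 10% loyalty discount.
function applyLoyaltyDiscount(totalPence: number): number {
  return totalPence > 10_000 ? Math.round(totalPence * 0.9) : totalPence;
}

test("orders over £100 receive the 10% loyalty discount", () => {
  assert.equal(applyLoyaltyDiscount(15_000), 13_500);
});

test("orders at or below £100 are charged in full", () => {
  assert.equal(applyLoyaltyDiscount(10_000), 10_000);
});
```

When the business person who defined this rule has long moved on, the test remains as an executable record of what the software is supposed to do.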

Frameworks and Tools Due Diligence

There are thousands, if not tens of thousands, of web frameworks alone, and new ones come out every other day. In a sense, we are in more danger now than we were before. Most of the “Legacy” systems I have come across were written in well-established languages and frameworks, and back then there were only a handful to choose from.

Now that we have a seemingly unlimited number of languages and frameworks to choose from, selecting the right tools and frameworks could either give your software a fighting chance in the long run, or cut its lifespan short.

You will see frameworks and tools popping up and getting their 15 minutes of fame. Always do your due diligence: who is using them? Are they built on something already established? Are they closed source or open source? What do they depend on? Find out what issues people are currently having with them and whether those would affect your software if you were to integrate them.

The answers to these questions will help you determine whether the tool or framework is right for you, calculate the risk, and figure out whether that risk is worth taking.

Avoid building monoliths

Monoliths are large, single applications that provide multiple services from one deployable unit. Many people see them as bad because they become harder to maintain and support: since all the services are built as a single application, any change to an individual service could have side effects on the others, not to mention that a deployment could halt usage of the rest of the services.

Some of the causes of monoliths include not having an architectural plan to work from, not understanding the benefits of development patterns and best practices (or not following them), and not having sufficient change control (I will cover this later).

A lack of structure in the development project could land you with a monolith that you will have to support and maintain for the rest of your life.

Avoiding the monolith and splitting your services along their contextual boundaries means that you are in better shape to maintain, support and extend these services, and your software solution as a whole.

Edit: I should mention that not all monoliths are bad. Sometimes you find yourself building your application as a monolith; what I do suggest is that you re-evaluate that decision and see if there are separate applications within your application that could be split out.
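As a rough sketch of what splitting along contextual boundaries can look like in code (all the names here are illustrative), each context owns its own data and exposes a narrow interface, so it can later be extracted into its own deployable service without untangling shared internals:

```typescript
// "orders" context: owns order data, exposes only what other contexts need.
interface OrderService {
  placeOrder(customerId: string, items: string[]): string; // returns an order id
  orderTotalPence(orderId: string): number;
}

// "billing" context: depends on the orders interface, never on its internals.
interface BillingService {
  invoice(orderId: string): void;
}

function makeBillingService(orders: OrderService): BillingService {
  return {
    invoice(orderId: string): void {
      const total = orders.orderTotalPence(orderId);
      console.log(`invoicing order ${orderId} for ${total} pence`);
    },
  };
}
```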

Abstract your dependencies

One thing I touched on in the introduction was the external dependencies your software may have. One of the risks you take on when developing software that relies on other external systems is that your system will only keep functioning while those systems are still functioning. Abstracting your software from these dependencies will mitigate this risk, as you are not tied to the systems provided by a specific vendor.

Your abstraction should allow you to swap out a vendor, and make it easier to change how your software interacts with these vendors without breaking your application.

When it comes to developing applications in the cloud, I touched on how some cloud providers offer services specific to their ecosystems. Event messaging (Azure Event Hubs), running small programs and scripts as functions (AWS Lambda/Azure Functions), specific data storage offerings (Azure Storage), etc. are all vendor-specific platforms.

Providing an abstraction and ensuring that the core of your software is vendor agnostic means you take on less risk: the vendor-specific code needed to utilize these services is confined to implementations of your abstraction.

When the time comes and you want to move providers, it should take minimal effort from a developer to provide new vendor implementations of your abstractions.
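As a minimal sketch (the interface and names are my own, not any provider’s API), the core of the application might depend on a small queue abstraction, with each vendor kept behind a thin adapter:

```typescript
// The core of the application only ever sees this interface.
interface MessageQueue {
  publish(topic: string, payload: string): Promise<void>;
}

// Thin, swappable vendor adapters. Real implementations would call the
// provider's SDK; these bodies are placeholders for the sketch.
class AzureQueueAdapter implements MessageQueue {
  async publish(topic: string, payload: string): Promise<void> {
    /* call the Azure SDK here */
  }
}

class AwsQueueAdapter implements MessageQueue {
  async publish(topic: string, payload: string): Promise<void> {
    /* call the AWS SDK here */
  }
}

// Vendor-agnostic business logic: moving providers means swapping the
// adapter, not rewriting this function.
async function notifyOrderPlaced(queue: MessageQueue, orderId: string): Promise<void> {
  await queue.publish("orders", JSON.stringify({ orderId }));
}
```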

Source control and defining a workflow

This goes without saying nowadays, especially when working as part of a team. Source control allows your team to collaborate and work on your software seamlessly.

In order to future proof your software, you will need to define a workflow that the whole team and the stakeholders are invested in. The workflow I am talking about should link to every part of your software development process: gathering the requirements, working on the features, testing them against the requirements, peer review, QA, and deployment to the various systems (I will touch on this next).

Having a workflow already defined will reduce many risks and provide your future developers with full end-to-end documentation. Many systems now give you the ability to trace changes and the reasons for those changes, so you have not only a full suite of tests encompassing your business cases, but also a record of the requirements gathered to implement them.

Peer review

As a software developer, no matter where you sit within your development team, you will benefit from peer review. I wasn’t sure whether to include this, but having your team review each other’s code is really beneficial. It provides a platform for learning from each other, and for correcting each other when established patterns and tools are not being used.

How does this future proof your software? All of the points I have covered so far, and will be covering, involve the software development process. Peer review ensures that all the developers are working towards the same goal, supporting each other and, most importantly, feeling accountable for what goes into the system.

Automate your deployments

One of the major hurdles I have seen with “Legacy” systems is that they depend on the hardware and the operating system they were deployed on.

These systems could sit deployed on the same machines for decades without ever being touched again.

When the hardware or operating system vendors are no longer providing support, you will want to move or upgrade. The problem is, the deployment procedure may be complex and the documentation out of date or incorrect. Sometimes, when deployments are not going so well, people make changes and fixes ad hoc, which means those fixes and changes can go undocumented. Then, when it comes to moving the system, it will fail to function because of the differences in setup.

Automating your deployment process reduces the time it takes to deploy your software, and also means that you are not dependent on a single person or a specific set of people who know how to do it. An automated deployment strategy is deployment documentation in itself. A good strategy for deploying your software should also include the operating system and its configuration. This will streamline your deployment and ensure that the environment and deployment steps used for production are the same ones used for your testing environment.

Combining this strategy with containers would be a big benefit for the future. With containers, you also specify the operating system and all of its configuration, and that specification is stored in your source control. The benefit here is that a future upgrade can be done on any machine and tested locally, and with every deployment you know you are using the same platform and configuration across all environments.
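As a small sketch of the idea (the image name and registry are placeholders), even a short script makes the procedure repeatable and self-documenting; with containers, the Dockerfile it builds pins the operating system and its configuration in source control:

```typescript
// deploy.ts — a minimal, repeatable deployment script (a sketch, not a full pipeline).
import { execSync } from "node:child_process";

const image = "registry.example.com/myapp";  // hypothetical registry/image
const tag = process.env.GIT_SHA ?? "latest"; // tag the build with the commit

// Build the container image; the Dockerfile pins the OS and its
// configuration, so every environment runs the same platform.
execSync(`docker build -t ${image}:${tag} .`, { stdio: "inherit" });

// Push the image so that test and production pull the identical artifact.
execSync(`docker push ${image}:${tag}`, { stdio: "inherit" });
```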

Management of change

Management of change is a tricky one and hard to get right. No matter how much time you spend gathering your requirements and speaking to all of the stakeholders, change will be inevitable. Having a change control strategy could save you a lot of headaches, and save your software in the future.

You will need to handle change gracefully and ensure that you are not putting the future of the project at risk for the sake of getting the job done quicker. If making a change properly means redoing a major part of the system, I think that should be seriously considered. Getting this step wrong could have serious consequences for the future development and maintenance of the software.

Monitoring and logging

Again, this should go without saying: any good system design should include a monitoring and logging strategy.

You need to know that your software is working, and if it isn’t working as expected, you will want to be notified. This will come in very handy when changes need to happen, whether to the system or the infrastructure, and it will work very well alongside an established support framework.
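As a minimal sketch, without assuming any particular logging or monitoring vendor: emitting structured, machine-parsable log lines means whatever monitoring system you adopt later can ingest them and raise alerts.

```typescript
// Structured logging: one JSON object per line, easy for any tool to ingest.
type Level = "info" | "warn" | "error";

function log(level: Level, message: string, fields: Record<string, unknown> = {}): void {
  console.log(JSON.stringify({ timestamp: new Date().toISOString(), level, message, ...fields }));
}

// Hypothetical usage: an alerting rule could match on level === "error".
log("info", "payment processed", { orderId: "1234", amountPence: 9_500 });
log("error", "payment gateway unreachable", { orderId: "1234", retryInSeconds: 30 });
```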

Establish support framework

Just like your car, your software will need support and maintenance. You, as the software developer, will need to provide the right tools and documentation to help your future developers (the mechanics) maintain and support it.

Workflows for support should go hand in hand with the workflow you defined in the earlier point. The same channels should be followed to ensure that you are not causing more issues whilst supporting and maintaining your software.

Frequent Review of your Technological Arsenal

Just because your development team has handed over the software doesn’t mean that you no longer need a team of experts to maintain it. Maintenance comes in many forms: fixing bugs, handling customer/client issues, change management. But one thing I believe could significantly increase the shelf life of your software is frequent review of your technology arsenal.

All of the points I have made up to this one should be reviewed over and over again. Recalculate your risk and figure out what your mitigations should be.

This is especially important for the technology you have chosen. Are the frameworks, patterns and tools, maybe even the cloud platform you have chosen, still the best choice? Are there any risks in continuing to use them? Has the risk increased? How will you manage it?

Now more than ever we should be doing this, as our external services are no longer in our control. What happens if a cloud provider drops one of its features? You will need to give yourself enough time to come up with a suitable solution, and possibly a rework.

You will need to figure out whether you have the budget, and what the impact of taking on this risk would be. Should you take a little more risk and invest in these cloud providers to get the benefits out of them now, with a plan (and budget) for the future in case your requirements change or the platform is no longer supported?

In summary

I have tried to go through all of the key points that should be reviewed from the beginning of your software development process until the very last breath your software takes. We need to be proactive now more than ever, as systems are getting bigger, with many more nodes connected to them.

To recap, here are the steps you should be taking to future proof your software:

  • Working with established patterns
  • Having a test suite covering your business cases
  • Doing due diligence on frameworks and tools
  • Avoiding monoliths
  • Abstracting your dependencies
  • Using source control and defining a workflow for your development process
  • Doing peer reviews
  • Automating your deployments
  • Having a change management strategy
  • Having a monitoring and logging strategy
  • Establishing a support framework
  • Frequently reviewing your technology arsenal