The Upsides of Microservices & Software Tests for Modern, Evolving Architectures

Daniel Schulz
Cloud Architectures
7 min read · Dec 31, 2018


In today’s software projects, there are basically two kinds of projects we face: we either take over an existing code base or start from scratch. Applying good practices to a software project is hard work and demands discipline and proficiency. But what am I supposed to do when I face an uphill race? What if the existing code sets me up for failure due to a lack of documentation, unit tests, missing architecture sketches, and possibly bad code quality? In this short post, I’ll suggest some means I tend to apply.

The Status Quo: There Is Legacy Code All Over the Place
Either we maintain, augment, extend, or change an existing code base, or we start from scratch. The former case is common and may account for roughly four in five software projects today. Because so much software already exists, it frequently needs small changes: for legal reasons, because of market shifts, because an error was detected, or because a user interface needs polishing. For developers new to such a code base, much of their time goes into understanding the status quo and all of its dependencies: the inner and outer relationships, the APIs, and the coupling to and from external systems.

Having too much baggage makes any endeavor an uphill race

The latter case, having no prior code to deal with, is rather rare. It typically occurs when there is no existing software yet, or when the existing software is not scalable, too dated, or too risky or expensive to change. These “greenfield projects” are beloved by developers new to an organization because there is little baggage to deal with and the new solution leaves plenty of room for creativity.

On another dimension, there are two prominent kinds of deployments for IT systems nowadays: monoliths and microservices. The former means one single, consolidated code base with everything wrapped up and packaged together. With microservices, everything is split into smaller units that interact heavily with one another. In the Java world, prominent examples are JAR, WAR, and EAR archives: a single archive can constitute a monolith, while many small, separately deployed archives make up microservices.

Isolation à la Docker makes sure no service interferes with another

Monoliths tend to favor homogeneity: all code is written in one or a small number of programming languages. The environments are clearly defined and very often follow strict code styles and consistent approaches to documentation and testing. The complexity grows with the project, both internally and with every external API.

In homogeneous environments, all parts look the same

Microservices, however, are heavy on interacting with one another. Quite naturally, the complexity depends on the exchange of information to and from external systems. On the other hand, microservices are rather heterogeneous: each one may use a different programming language and its own approach to documentation and testing. Encapsulation in Docker containers even abstracts away the differences in their environments. Changing or discontinuing a microservice is therefore largely a matter of understanding ripple effects: which other parts of the system rely on it, or used to rely on it? It is rather helpful to make this clear before you get rid of it.

In heterogeneous environments, all parts differ from one another

Whenever you change existing software, you risk altering the behavior of this or other features in unintended ways. Fixing a problem or bug in method A might not only affect method A: methods B and C, or external APIs, might experience unintended changes because they rely on it, or odd side effects might break other methods after the “fix.” You focus so intently on the one method you are repairing that you may interfere with parts of the system you are not even aware of. Decent test coverage of the important units of your code greatly reduces this risk, and that matters because such bugs are effort-intensive, hence expensive, to fix. Usually, the more developers join and the longer a system lives, the more likely these complex side effects and issues become.
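To make this concrete, here is a minimal sketch of such a safety net in JUnit 5; the class and method names are hypothetical, not from any real project. A regression test pins down the behavior other callers depend on, so an unintended change to “method A” fails the build instead of surfacing in production:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical example: PriceCalculator plays the role of "method A" above.
// These tests freeze its observable contract, so a later "fix" that silently
// changes the behavior breaks the build rather than the dependent methods B and C.
class PriceCalculatorTest {

    @Test
    void grossPriceAddsStandardVat() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.00 net plus 19% VAT must stay 119.00 for all downstream callers
        assertEquals(119.00, calculator.grossPrice(100.00), 0.001);
    }

    @Test
    void rejectsNegativeNetPrices() {
        PriceCalculator calculator = new PriceCalculator();
        // Other parts of the system depend on this validation staying in place
        assertThrows(IllegalArgumentException.class, () -> calculator.grossPrice(-1.00));
    }
}

// Minimal implementation so the sketch is self-contained
class PriceCalculator {
    double grossPrice(double netPrice) {
        if (netPrice < 0) {
            throw new IllegalArgumentException("net price must not be negative");
        }
        return netPrice * 1.19; // 19% VAT
    }
}
```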

Quality is Worth a Little Extra in the Long Run
Software tends to have a long life in our day and age, from very old legacy apps still running the automotive, financial, and insurance industries to very modern applications. They all endure long after their inception, probably longer than any of their developers ever intended, because they are crucial to running the business. On top of that, legal changes, changes in how business is done, and the ever-evolving IT industry make it ever more necessary to change existing code. For this reason, documentation and tests become ever more important.

This is especially true for monoliths. A microservice environment can replace one part with a new one once it becomes unmanageable or too costly to keep up; monolithic deployments, though, will have to last for a longer time.

Changing and augmenting software was and still is rather complex and error-prone. This makes a strong case for doing your due diligence when delivering software. Although it makes the delivery a little costlier up front, in the long run it pays to have a solid base to start from. Or, as Thomas Frank’s father used to say: “there are only two ways to do things: right and again.” Putting in a little extra effort can go a long way toward preserving your initial investment in your deployments, code, and architecture.

Consider Migrating to a Microservice Architecture
One solution might be to encapsulate all deployments in microservices, regardless of whether you are running any right now. Docker, for example, makes it easy to encapsulate everything in a container, and even your monolith might end up in one. This means migrating to a microservice environment that all legacy code and new deployments will live in. Extensions to your Solution or Enterprise Architecture then primarily become new microservices. Some changes might still alter the monolith itself, but the primary objective is to add new applications to your architecture or change existing ones. Docker is also good at running legacy applications: containers support almost all flavors of Linux and neatly abstract the specific operating system’s features away from the host and from other applications, so nothing interferes with anything else. Additionally, legacy Windows apps built for dated OS versions such as Windows Server 2008 can be run on a modern, patched host. This way, discontinued legacy operating systems can still be used without fear of security holes.
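As a minimal sketch of such an encapsulation, assuming a classic WAR-packaged Java monolith (the artifact name and base image are my assumptions, not taken from the post), a Dockerfile could look like this:

```dockerfile
# Hypothetical example: wrap an existing WAR-packaged monolith in a container.
# The base image and artifact name are assumptions for illustration only.
FROM tomcat:9-jdk8

# Deploy the legacy application as the root web app inside the container
COPY legacy-monolith.war /usr/local/tomcat/webapps/ROOT.war

# The container carries its own runtime, so the host only needs Docker itself
EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Built with `docker build -t legacy-monolith .` and started with `docker run -p 8080:8080 legacy-monolith`, the monolith runs side by side with newer microservices without their runtimes interfering.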

Testing software is not always fun, but it is just as important as the implementation itself

Test First, Implement Second
Another approach is to use Test-Driven Development, so-called TDD. Usually, a software project goes like this, very much simplified:
1) define your business requirements
2) define a Solution Architecture fitting those needs and, potentially, an existing Enterprise Architecture
3) start developing code based on the former
4) test the written code against business examples to make sure it does what it is supposed to do

In TDD, the order of items 3 and 4 is turned upside down:
1) define your business requirements
2) define a Solution Architecture fitting those needs and, potentially, an existing Enterprise Architecture
3) write tests based on business examples that specify what the code is supposed to do
4) develop code based on the former until all tests pass

So we write the test cases first and implement the software right after. When we define all necessary tests up front and only implement code as long as not all tests pass, we can hope to end up with leaner code bases. The code can be much cleaner and easier to understand, and it saves us from over-engineering a solution. The key term here is “golden handles”: unnecessary features that may take much effort to build but have no business value down the road. Additionally, TDD makes it easier to document all critical APIs and important methods, as the tests provide a skeleton to start from. So your code base benefits from TDD just as much as the business does. And since the cost of changing and maintaining a code base grows with its lines of code (LOC), there is real value in leaner code bases without “golden handles.”
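A minimal sketch of this order, with hypothetical names: the test below is written first, from a business example (“orders of 100.00 or more get a 10% discount”), and only then is just enough code written to make it pass:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 3: the tests come first and encode the business examples.
// DiscountService does not even exist yet when these are written.
class DiscountServiceTest {

    @Test
    void ordersOfHundredOrMoreGetTenPercentDiscount() {
        DiscountService service = new DiscountService();
        assertEquals(90.00, service.discountedTotal(100.00), 0.001);
    }

    @Test
    void smallOrdersPayFullPrice() {
        DiscountService service = new DiscountService();
        assertEquals(99.99, service.discountedTotal(99.99), 0.001);
    }
}

// Step 4: just enough implementation to turn both tests green; anything
// beyond this would be a "golden handle" with no test demanding it.
class DiscountService {
    double discountedTotal(double orderTotal) {
        return orderTotal >= 100.00 ? orderTotal * 0.9 : orderTotal;
    }
}
```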

Résumé
Software development is a complex topic, and there are more than enough means to stay on top of your game. When taking over existing code, when maintaining, changing, or augmenting it, and when working in microservice environments: be a positive example and follow best practices like documentation, unit tests, good code quality, and architecture sketches. Software projects tend to live long and become complex and error-prone rather quickly, and there is a large risk of messing something up down the road while fixing something else. With reasonable tests in place, that risk is greatly mitigated in the first place.

When taking over an existing code base, always make sure to ask for as much information as possible. Everything you are supposed to deliver, documentation, tests, good code quality, and architecture sketches, should serve as input for you as well. And where not everything you need is in place, the business owner should know how much having it would benefit you. This not only backs you up when demanding time to dig into everything before writing your first line of code; it also ensures that future projects come with documentation, tests, good code quality, and architecture sketches. Whether you apply TDD, microservices, or both, follow best practices, for yourself in the near or distant future. This will help your client as well.
