Better living and smarter development through microservices chemistry

Adarga · Published in Adarga
6 min read · Jun 16, 2021

The adoption of a microservices architecture during the early stages of the design of Adarga’s AI products has largely proven to be the right decision. It has provided us with many benefits over the more traditional monolithic approach to software development, including the ability to clearly separate the concerns of our system and to organise our development resources and expertise accordingly. It has also allowed us to integrate disparate technologies and languages using well-known network-based protocols rather than the often awkward cross-language, library-based integrations. Many of these benefits have undoubtedly enabled our development team to deliver faster. These are the types of benefits commonly hailed by engineering teams that adopt microservices, especially when moving away from a monolithic architecture (i.e. a single large codebase and/or software product).

However, a microservices approach is no panacea: it is not without its limitations and pitfalls and can readily scupper a deployment, despite the best intentions of even the most seasoned developers. Some of these lessons have been learned the hard way, though thankfully not to the detriment of building and maintaining a functional product.

The first point to note is that the term “microservices” is often casually followed by the term “architecture”. This is largely a misnomer. Microservices are an approach to creating the building blocks of your system; they do not magically provide the means to connect those blocks together. This, as all good textbooks say, is left as an exercise for the reader. Selecting the right “integration glue” is a key design decision and can greatly affect the complexity, throughput, resilience and scalability of your system as a whole. Given the number of options now available for both transactional and event-driven integration, from REST (no longer the only predominant API technology) and newer API flavours such as GraphQL and gRPC through to an eye-watering range of messaging technologies underpinning the increasingly popular Reactive programming paradigm, this choice is not for the faint-hearted.

Getting the integration technologies right is ultimately a trade-off between what the system is trying to achieve and how easy you can make life for the people working on it, while at the same time minimising your degree of tie-in to those technologies. We have found that complexities arising from the choice of integration technology can often be alleviated by spending a degree of effort on creating and maintaining utility libraries and service templates which abstract away the underlying integration technologies and minimise the amount of boilerplate code required. These activities embody software design principles (separation of concerns, DRY, dependency inversion, etc.) that have been practised successfully within the software community for a long time, so it is no surprise that this has proven to be time and effort well spent.
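As a minimal sketch of what such a utility library might look like (the `EventPublisher` interface and topic names here are hypothetical illustrations, not Adarga's actual code), service code can depend on a small abstraction rather than on any particular broker client:

```python
from abc import ABC, abstractmethod
from typing import Any, Callable


class EventPublisher(ABC):
    """Abstraction over the underlying messaging technology."""

    @abstractmethod
    def publish(self, topic: str, payload: dict[str, Any]) -> None:
        ...


class InMemoryPublisher(EventPublisher):
    """Local/test implementation; a Kafka- or AMQP-backed one would
    implement the same interface without service code changing."""

    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload: dict[str, Any]) -> None:
        # Deliver synchronously to every handler registered for this topic.
        for handler in self.subscribers.get(topic, []):
            handler(payload)
```

Swapping the in-memory implementation for a broker-backed one then requires no changes to the services themselves, which is exactly the dependency inversion mentioned above.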

Other potential microservice trip-hazards largely arise from it being “easier” for developers to integrate disparate technologies and languages. I intentionally quote “easier” here because it depends on your perspective: microservices force us to integrate over the network rather than through function calls between software libraries, which brings a different set of challenges.
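For example, a remote call can fail transiently in ways an in-process function call never does, so callers typically need timeouts and retries. A minimal, hypothetical retry helper (a sketch, not any particular library's API) might look like:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_retries(fn: Callable[[], T], attempts: int = 3,
                      base_delay: float = 0.1) -> T:
    """Invoke fn, retrying on failure with exponential backoff.

    An in-process library call rarely needs this; a network call usually does.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))
```

In practice you would scope the `except` clause to the transport's transient errors and add jitter, but the point stands: every inter-service call site inherits failure modes that simply do not exist for a function call.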

Firstly, testability. What formerly required a (mostly) unit-testing-based approach in monolith-world now requires a good degree of integration testing in microservices-world to give us confidence that our microservices are production-ready. This adds complexity: services and their dependencies must be orchestrated into a state ready for integration testing, test state must be managed (e.g. will the data created for test A still be in place for test B, and will it affect the outcome?), and resilience must be accounted for. We have found that containerisation technologies such as Docker, and related orchestration tools such as Docker Compose and Kubernetes, have helped us a great deal in this regard.
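As an illustration, a Docker Compose file can stand up a service alongside throwaway copies of its dependencies for integration testing; the service name and image versions below are hypothetical:

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: test
    tmpfs:
      - /var/lib/postgresql/data   # keep state in memory so every run starts clean
  document-service:                # hypothetical service under test
    build: .
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://postgres:test@postgres:5432/postgres
```

Mounting the database's data directory on `tmpfs` sidesteps the test-A/test-B state question between runs, though tests within a run still need to manage the data they create.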

Secondly, because microservices open the door to integrating a wide variety of languages, there can be a temptation to do so too readily. From experience, this temptation should be tempered by what you want your stack to look like in the future. Introducing unnecessary technology/language overhead will produce silos of knowledge (“Anyone know how to fix this Lua service?”), raise the barrier to entry for new developers joining the team and ultimately dictate what kind of developers you may need to hire in the future (is a fluent, polyglot team really achievable?).

Thirdly, a key benefit of a microservices approach is that it facilitates the partitioning of your system, which can be leveraged to produce systems that are both fault tolerant and elastic in nature. However, if not considered carefully, it also opens the door to “over-partitioning”, whereby the concerns of individual microservices become so specific that the broader concerns of your system end up distributed across various parts of it, i.e. a distributed monolith. The telltale trait is that unnecessary dependencies are created across multiple microservices in order to achieve a single goal. Conversely, it is also possible for the concerns of individual services to expand over time, leading to services which tend towards monoliths in their own right. It is not always easy to judge how best to prevent these phenomena, but one common pattern is to avoid over-optimising too early in the design process: focus on the functionality first, and worry about lower-level concerns (e.g. scale, performance, throughput) and the scope of services later, through continual review and iteration. This is good practice in general and follows the Agile software development tack of working toward near-term goals and then iterating regularly.

Another aspect of microservices is that the more divergent the language/technology stack, the more scope there is for microservices not to conform to the common set of requirements necessary to run them in production, such as maintaining consistency in logging, monitoring, health checks, etc. A solution is to agree rules of engagement for a limited stack (e.g. we will use language X for this type of microservice and language Y for that), and then service templates and utility libraries can come to the rescue again to ensure that production requirements are adhered to.
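A sketch of what the shared part of such a service template might provide, assuming Python services and entirely hypothetical field names: a uniform structured log format and a common health-check payload that every service exposes, regardless of its internal conventions.

```python
import json
import logging
import time


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so every service logs the same shape."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": record.created,
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "msg": record.getMessage(),
        })


def health_payload(service: str, version: str) -> dict:
    """Common response shape for the /health endpoint each template exposes."""
    return {
        "service": service,
        "version": version,
        "status": "ok",
        "checked_at": time.time(),
    }
```

With the formatter and payload living in a shared library, adding a new service means inheriting the production conventions rather than re-implementing (or forgetting) them.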

Finally, it is worth mentioning some of the technologies that have enabled our journey through microservices. The now ubiquitous Docker has provided us with the ability to deploy different flavours of service containers with different runtime dependencies in a simple, manageable manner and with minimal development overhead. Additionally, Kubernetes (almost accepted as the de facto standard for container orchestration, if the current swathe of cloud offerings is anything to go by) has allowed us to deploy, configure, scale and manage our services in a production environment with minimal up-skilling. Providing SaaS AI products without either of these technologies is now almost unthinkable, to say nothing of emerging technologies such as serverless and FaaS platforms.
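For illustration, this is the kind of Kubernetes manifest such an approach enables (the service name and image are hypothetical): scaling is a one-field change, and a common health-check convention plugs straight into Kubernetes probes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: document-service            # hypothetical service name
spec:
  replicas: 3                       # horizontal scaling is a single field change
  selector:
    matchLabels:
      app: document-service
  template:
    metadata:
      labels:
        app: document-service
    spec:
      containers:
        - name: document-service
          image: registry.example.com/document-service:1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:            # relies on every service exposing /health
            httpGet:
              path: /health
              port: 8080
```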

Adopting a microservices approach at Adarga has been a key enabler in the development of our AI products. It has permitted us to optimise our use of development resources by minimising contention and conflicts when implementing code changes as well as clearly partitioning the responsibilities of the system. It has also given us the ability to quickly integrate new features into existing services and new services into the system as a whole with minimal disruption. In the context of AI, where the pace of technological change is rapid and the range of services providing data and functionality is continually evolving, this need to move fast is essential.

It is therefore highly probable that microservices will continue to be at the core of Adarga’s AI products for the foreseeable future — at least until the next game-changer comes along.
