Building on the shoulders of giants

1: Microservices as a redesign strategy

Gideon de Kok
7 min read · Oct 24, 2016

With all the material already written, the large-scale implementations already in production, and the ever-increasing interest in the topic, we can safely assume that the concept of microservices is here to stay.

Nevertheless, the definition of microservices remains opaque. Of all the methodologies present in the current development ecosphere, microservices is possibly the least clearly delineated. Yet the concept of microservices has the potential to deliver the largest impact on the way we will build large-scale applications in the years to come.

As with a lot of methodologies and technologies, there is nothing new behind the ideas of microservices. Many other concepts, like machine learning, became immensely successful only after the required resources became available to power large-scale implementations. In a comparable way, the rise of easily deployable containers made the concept of microservices a much better fit for our demands and capabilities.

This blogpost marks the start of a series, each post focused on helping others make the transition from monolithic to microservice-backed software architectures (or simply to gain more information on the matter). This post serves as an introduction and sketches the motivation behind the series.

Easing into the monolith

The massive diversity of lighter-weight MVC frameworks built a decade ago was driven by a negative sentiment towards the state of web application programming. Back in those days, the market largely consisted of either overly opportunistic PHP and Perl frameworks or overly heavy, inflexible enterprise-focused frameworks on Java and .NET.

With optimism, developers embraced frameworks like the Zend Framework, Ruby on Rails and Django. Instead of implementing endless interfaces, finding their way through pages of documentation and packaging applications using often abhorrently inflexible build tools, developers had something which scaffolded a simple model definition into a complete REST-based application with pluggable database access: CRUD paths with automatically generated views to mutate data through ready-to-use web interfaces. And yes, all that with an even slightly enjoyable way of handling dependencies and serving applications to the internet.

Web frameworks on C# and Java quickly caught up with this convention-over-configuration approach, focusing on better ways to onboard developers into the framework and improving developer productivity by reducing boilerplate and lowering domain complexity. MVC, ORM and code generation formed the new triangle to kickstart application development and ease teams from ideas into working code.

While this renaissance of web frameworks helped make development more approachable, it didn't bring anything new to the table in terms of code management for large-scale applications. The increased rate of productivity in the early stages of application development actually worsened the problems in a lot of cases through overgrowth; it is easy to add functionality by layering abstraction over abstraction, library over library, brick after brick. But the result is far less easy to maintain when things need to change or scale within a pillar of dependencies.

Nevertheless, this monolithic way of building applications has its place. Not every application is complex or grows beyond the bounds of a good and dedicated development team. Good design, the right constraints and a lot of team effort have helped many companies scale to millions of users. Majestic monoliths can be built with a lot of dedication and direction. Unfortunately, most developers become familiar with a monolith's strengths and weaknesses by accident, when it's probably too late.

Monoliths as millstones round our necks

With the rise of new-IT-backed companies in almost every segment, from retail to financial institutions, traditional companies are often forced into change-or-perish strategies.

Where the business strengths of newer competitors are often reinforced by strong serial-startup developers, able to integrate the experience of previous failures into completely new stacks, older companies' businesses often rely on legacy software. That software is frequently composed into monolithic stacks, with a team morale pushing talent out of the company faster than new recruits can be persuaded to join.

As internet-facing services are becoming the most crucial factor in a company's business value, it's becoming harder and harder for more traditional companies to keep up: performance issues through the lack of elasticity in software stacks, a lack of business flexibility through technical impediments, and an overall lack of agility through long development cycles.

A complete redesign or rebuild of the underlying software stack would probably be the best approach to regain performance and overall developer satisfaction in most cases. This strategy will, however, carve a big chunk out of a company's resources. Besides that, successfully redesigning a complete system in parallel with normal business remains a big bet.

The rise and strengths of microservices

A common interpretation of the strengths of microservices is formed around the idea of scalability: by sharding application logic into multiple smaller components, overall vertical and horizontal scalability can be improved where it hurts. While elasticity is a trait which emerges when systems are composed of independent services, the true strengths of microservices go far deeper than runtime performance.

Microservices as a concept formed around ideas to improve developer productivity, simplify deployment workflows and extend reactive traits in architectures: systems which are responsive, more resilient to failure, flexible in terms of scalability and as loosely coupled as possible. The idea is to separate applications into several smaller services, each with its own specific responsibility, running in an isolated manner without direct dependencies outside of its context.
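As a minimal sketch of that idea (not from the original post; the service name, port and route are illustrative assumptions), a single-responsibility "inventory" service could own its own state and expose it only through a tiny JSON-over-HTTP API:

```python
# Illustrative sketch: an inventory service that owns its state exclusively
# and exposes it only over a small JSON-over-HTTP API. Names are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# State owned by this service alone; no other service touches it directly.
STOCK = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        if sku in STOCK:
            body = json.dumps({"sku": sku, "in_stock": STOCK[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the example quiet; real services would log properly.
        pass

def serve(port=8000):
    """Run the service; in practice each service runs in its own container."""
    HTTPServer(("127.0.0.1", port), InventoryHandler).serve_forever()
```

Everything outside this service interacts with stock levels only through the HTTP contract, never through the dictionary itself.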

A large increase in the number of services normally implies a corresponding increase in the amount of DevOps work. However, technologies like Docker and Kubernetes have brought the deployment of services to a point where it's almost effortless. The enabler for the concept of microservices is the ability to factor out the human effort needed to bring new versions of services up and running in a continuous fashion.
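As a hedged illustration of how small that effort can be (the service name, image and replica count below are placeholders, not from the original post), deploying and scaling a containerised service with Kubernetes can come down to a short Deployment manifest, applied with `kubectl apply -f deployment.yaml`:

```yaml
# Illustrative only: a minimal Kubernetes Deployment for a hypothetical
# inventory service. Image name and replica count are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service
spec:
  replicas: 3                 # scaling out is a one-line change
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
        - name: inventory-service
          image: registry.example.com/inventory-service:1.0.0
          ports:
            - containerPort: 8000
```

Rolling out a new version then amounts to updating the image tag and re-applying the manifest, which is exactly the kind of human effort being factored out.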

Backed by this ability of continuous deployment, the organisation of functionality into several dedicated and isolated services further improves the development process. Teams of developers can be formed around specific capabilities of an application. By exposing state and functionality only through technology-agnostic protocols, development can be done on a basis in which cross-team dependencies are minimised. This not only reduces the number of impediments during development by eliminating external factors; it also makes teams autonomous and improves self-organising behaviour.

By limiting communication towards and between services to well-defined APIs or data flows, the internal technological choices become far less important. This makes services easily replaceable and refactorable. It also creates an opportunity to redefine software architectures without complete rewrites. When services are modelled as capabilities of an application, the total system can be treated as the composition of these services into something usable for an end user or other consumer.
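That replaceability can be illustrated in plain Python (with hypothetical names, standing in for a real wire protocol): when a consumer depends only on a contract, the implementation behind it can be swapped, legacy for new, without the consumer noticing.

```python
# Illustrative sketch: a consumer bound only to a contract, so the backing
# implementation (and the technology behind it) is freely replaceable.
from typing import List, Protocol

class PriceService(Protocol):
    """The contract: price of a SKU in integer cents."""
    def price_of(self, sku: str) -> int: ...

class LegacyPriceService:
    """Imagine this wrapping calls into an old monolith."""
    def price_of(self, sku: str) -> int:
        return {"sku-1": 999}.get(sku, 0)

class NewPriceService:
    """A rewritten service behind the exact same contract."""
    def price_of(self, sku: str) -> int:
        return {"sku-1": 999}.get(sku, 0)

def checkout_total(prices: PriceService, skus: List[str]) -> int:
    # The consumer never knows which implementation answers.
    return sum(prices.price_of(s) for s in skus)
```

Because `checkout_total` only sees the `PriceService` contract, phasing out `LegacyPriceService` for `NewPriceService` is invisible to it, which is the whole point of limiting communication to well-defined interfaces.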

After the capabilities of an application and the communication towards and between its services have been defined, an initial application structure can be built to fill in the functionality. However, the same model can just as easily be used to retrofit existing technology into a new architecture.

Where to start with microservices

I think the true strengths of microservices find their origin in exactly this: they enable us to define business ideas and software capabilities in isolated services. This not only makes it easier to work on large-scale applications in separate teams without the problems which may occur when working in large shared codebases; microservices also make it possible to scale existing (legacy) code into new future-proof models, creating an organic and hybrid way of phasing out old technology for new technology. This approach is potentially far more resource-friendly than the usual big-bang strategy and enables a faster time-to-market for newly developed functionality.

This not only enables organisations to introduce new technology and new ideas in a gradual way, it also gives back the business flexibility a lot of companies need in this current competitive market.

Getting there, whether building a new application or redesigning an older application using microservices, isn't always that straightforward however. There is a lot of legitimate criticism of the approach. Modelling applications into isolated capabilities isn't always trivial; there are still cross-team dependencies in terms of required functionality and API definitions; correct testing and deployment are more complex matters than in monolithic architectures; and the overall architectural complexity increases tremendously at a higher level. Besides that, kickstarting a greenfield project from scratch using microservices can hinder initial progress, making a monolith-first approach a potentially better strategy.

Nevertheless, it’s done with good reason and success. Not only by the start-ups with a large number of rock-stars and (almost) endless resources, but also by companies which implement the concept to regain their edge.

In the process of helping clients with the transition towards microservices, I've seen a lot of similarity in the questions asked and the design choices made. These questions and design choices can help others ease into the process and help them out of potentially blocking situations.

The next post in this series will give an overview of an example microservice architecture and the potential choices to be made in terms of inter-service communication. That information will then be used in a step-by-step approach to migrating a monolithic architecture towards one based on microservices.

This blogpost will be updated when the next post in the series becomes available.


Gideon de Kok

Between functional programming and reactive software architectures.