Microservices: Strategies for Migration in a Brownfield Environment

You don’t always get to start afresh

In the start-up world there is the opportunity to start afresh: a new greenfield platform, new technologies and techniques, no technical debt and the chance to “do it right” from the beginning. Chances are you are not in that world. Instead you are probably dealing with an existing brownfield monolithic platform that carries all of the sins of the past: quick fixes from projects that were running behind schedule, legacy techniques, limited automation in both deployment and testing, and antiquated systems.

Microservice-based approaches have rapidly been accepted as a means to achieve the flexible and scalable architecture that fast, agile development demands in today’s business environment. Moving an existing monolith to such an architecture, however, presents some major challenges. There may be a temptation to attempt a complete rewrite, but this brings risks: project scale and cost, a big bang deployment, and a long time frame before any business benefit is achieved or visible. An evolutionary approach, the strategy Martin Fowler refers to as the strangler application, instead progressively migrates away from the existing monolith to a microservices architecture until the monolithic application is eliminated or becomes a microservice itself; this reduces risk and allows benefits to be realised earlier.

This article outlines approaches to support this strategy, starting with a model application shown below (A), which represents a typical monolithic application in a 3-tier architecture, with UI, Service and Data layers deployed as a single unit:

Stop

The first step in migrating away from our monolithic architecture is to stop the monolith from growing any further. This means deciding that all new development, bug and security fixes aside, will be built as separate modules, preventing the accumulation of more functionality in the monolith.

Isolate

A key enabler for the migration is to take advantage of the usual separation of the UI and Service layers in the monolith: place an API façade over the monolith’s services and have the UI layer call that API (diagram B).
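As a rough illustration, the sketch below (plain Java) shows such a façade, borrowing the Member Service that appears later in this article as the example; the class names, route and JSON shape are assumptions for illustration rather than anything prescribed by the approach.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical existing service-layer class inside the monolith.
class MemberService {
    String findMemberAsJson(String id) {
        return "{\"id\":\"" + id + "\",\"name\":\"Example Member\"}";
    }
}

public class MonolithApiFacade {
    public static void main(String[] args) throws Exception {
        MemberService memberService = new MemberService();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The UI layer now calls /api/members/{id} over HTTP instead of
        // invoking the monolith's service layer directly.
        server.createContext("/api/members/", exchange -> {
            String id = exchange.getRequestURI().getPath().replace("/api/members/", "");
            byte[] body = memberService.findMemberAsJson(id).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}
```

Because the UI now depends only on the HTTP API, the implementation behind /api/members/ can later move out of the monolith without the UI changing.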

Create and Migrate

As new functionality is created (diagram C), it should be built as new, separate services with their functionality exposed via APIs. These APIs should sit behind an API Gateway to make these and future changes as transparent as possible to the UI layer and other client services.
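To make the routing idea concrete, here is a toy path-based gateway in plain Java that forwards requests either to a new microservice or to the monolith façade; the service names, ports and routes are hypothetical, and a real platform would more likely use an off-the-shelf gateway or reverse proxy. The point is only that clients stay unaware of where functionality actually lives.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class ApiGateway {
    // Hypothetical route table: path prefix -> backend base URL.
    private static final Map<String, String> ROUTES = Map.of(
            "/api/rewards", "http://rewards-service:8081",   // new microservice
            "/api/members", "http://monolith-facade:8080");  // still served by the monolith

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer gateway = HttpServer.create(new InetSocketAddress(9000), 0);

        gateway.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            String backend = ROUTES.entrySet().stream()
                    .filter(e -> path.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst()
                    .orElse(null);
            if (backend == null) {
                exchange.sendResponseHeaders(404, -1);
                return;
            }
            try {
                // Only GET is forwarded in this sketch; a real gateway also
                // forwards methods, headers, bodies and query strings.
                HttpRequest request = HttpRequest.newBuilder(URI.create(backend + path)).build();
                HttpResponse<byte[]> response =
                        client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                byte[] body = response.body();
                exchange.sendResponseHeaders(response.statusCode(),
                        body.length == 0 ? -1 : body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);
            }
        });

        gateway.start();
    }
}
```

Moving an existing route off the monolith then becomes a change to the route table rather than a change to every client.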

APIs are typically synchronous and can hamper scalability, particularly when connecting to older legacy components that can only be scaled vertically (buying a faster machine) rather than horizontally (adding more machines). Implementing asynchronous communication using message queues such as Amazon SQS/SNS or RabbitMQ to issue and respond to events (Event Dispatcher and Event Listener) between components should also be considered: decoupling services and buffering messages improves the resilience of the overall platform.
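As one possible shape for this, the sketch below uses the RabbitMQ Java client (one of the options mentioned above) with a hypothetical member.updated queue: the dispatcher publishes an event and returns immediately, while the listener consumes at its own pace and the queue buffers any burst.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class MemberEvents {
    private static final String QUEUE = "member.updated"; // hypothetical event queue

    // Event Dispatcher: publish an event rather than calling the consumer synchronously.
    public static void publish(String memberJson) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.queueDeclare(QUEUE, true, false, false, null);
            channel.basicPublish("", QUEUE, null, memberJson.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Event Listener: consumes at its own pace; the queue buffers bursts of events.
    public static void listen() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE, true, false, false, null);

        DeliverCallback onMessage = (consumerTag, delivery) ->
                System.out.println("Received: "
                        + new String(delivery.getBody(), StandardCharsets.UTF_8));
        channel.basicConsume(QUEUE, true, onMessage, consumerTag -> { });
    }
}
```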

These initial services can establish a Reference Architecture to guide future service development and establish the patterns to be used, for example Event Sourcing, Command Query Responsibility Segregation (CQRS), or Circuit Breaker.
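For example, a Circuit Breaker can be as small as the hand-rolled sketch below (plain Java; in practice a library such as Resilience4j, or whatever the Reference Architecture standardises on, would normally be used instead): after a run of consecutive failures the breaker opens and calls fail fast to a fallback rather than continuing to hit a struggling dependency.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal circuit breaker sketch, not tuned for high concurrency:
// after too many consecutive failures the breaker "opens" and calls
// fail fast until a cool-down period has passed.
public class CircuitBreaker {
    private final int failureThreshold;
    private final Duration openInterval;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, Duration openInterval) {
        this.failureThreshold = failureThreshold;
        this.openInterval = openInterval;
    }

    public synchronized <T> T call(Supplier<T> protectedCall, Supplier<T> fallback) {
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(openInterval))) {
            return fallback.get(); // open: fail fast, don't hit the struggling service
        }
        try {
            T result = protectedCall.get();
            consecutiveFailures = 0;   // success closes the breaker again
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // trip the breaker
            }
            return fallback.get();
        }
    }
}
```

A caller wraps each remote call in breaker.call(...), supplying a fallback such as a cached value or a sensible default.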

The initial services also provide the test bed for establishing a Continuous Delivery release chain.

It is important that any poor practices that contributed to quality, testing and deployment issues in the monolith are actively addressed at this stage if the microservices architecture is to meet its aims. Continuing poor practices can move a platform from having a single problem to a distributed set of problems.

Considerations should include:

  • Service Discovery
  • Unit Testing
  • Continuous Integration
  • Functional Testing
  • Performance Testing
  • Security Testing
  • Static Code Quality and Security Analysis
  • Configuration Management
  • Monitoring and Operations
  • Automated Deployment of both code and database changes

Microservice creation should be easy and rapid. Providing a standard service boilerplate that includes support for cross-cutting concerns such as externalised configuration, logging and monitoring helps achieve this. This shouldn’t preclude variations in technology, such as different data storage technologies, where there is a genuine need.
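Such a boilerplate might start from something as small as the sketch below (plain Java, with environment variables standing in for whatever configuration mechanism is chosen; the variable names are hypothetical), so that every new service handles configuration, logging and monitoring the same way from day one.

```java
import java.util.logging.Logger;

// Skeleton every new service could start from: externalised configuration and a
// shared logging setup, so cross-cutting concerns look the same in each service.
public class ServiceBootstrap {
    private static final Logger LOG = Logger.getLogger(ServiceBootstrap.class.getName());

    public static void main(String[] args) {
        // Externalised configuration: values come from the environment, not the code,
        // so the same artefact runs unchanged in dev, test and production.
        String serviceName = envOrDefault("SERVICE_NAME", "unnamed-service");
        int port = Integer.parseInt(envOrDefault("SERVICE_PORT", "8080"));
        String monitoringUrl = envOrDefault("MONITORING_URL", "http://localhost:9090");

        LOG.info(() -> String.format("Starting %s on port %d (metrics -> %s)",
                serviceName, port, monitoringUrl));
        // ... register health checks, start the HTTP listener, etc.
    }

    private static String envOrDefault(String key, String defaultValue) {
        String value = System.getenv(key);
        return value != null ? value : defaultValue;
    }
}
```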

To attack the core of the monolith, the next step (D) is to progressively extract services from the monolith into separate microservices.

In the initial phase, business logic moves into separate microservices, but there may be a residual need for connectivity back to the monolith’s data source to read and write data. This connection code could take the form of an API, a message queue, or code that accesses the monolith’s data source directly. Whichever option is taken, the connection code acts as an anti-corruption layer, translating between the domain of the new services and the domain of the monolith.
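A minimal sketch of such an anti-corruption layer is shown below. It assumes a legacy member record with cryptic column names and a hypothetical LegacyMemberGateway that hides whether the monolith is reached via API, message queue or direct data access; the new Member Service only ever sees its own clean model.

```java
// Record as it exists in the monolith's schema: cryptic names, overloaded fields.
record LegacyMemberRow(String mbr_no, String nm, String stat_cd) { }

// Model owned by the new Member microservice.
record Member(String id, String name, boolean active) { }

// How the monolith is reached (API, message queue or direct data access)
// is an implementation detail hidden behind this gateway.
interface LegacyMemberGateway {
    LegacyMemberRow fetchByMemberNumber(String memberNumber);
}

// Anti-corruption layer: the only place that understands both domains.
class MonolithMemberAdapter {
    private final LegacyMemberGateway gateway;

    MonolithMemberAdapter(LegacyMemberGateway gateway) {
        this.gateway = gateway;
    }

    Member findMember(String id) {
        LegacyMemberRow row = gateway.fetchByMemberNumber(id);
        // Translate the monolith's representation into the new service's domain.
        return new Member(row.mbr_no(), row.nm(), "A".equals(row.stat_cd()));
    }
}
```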

In subsequent iterations this dependency can be removed so that the service owns its own data source to achieve the desired “data source per service” target state. This process (D through E) is then repeated for subsequent services.

The choice of which services to extract can be difficult. Contributing factors shaping this decision can include services that are:

  • most frequently changed.
  • more discrete with fewer dependencies on other services and database entities.
  • the most error prone, with the express aim of improving quality and establishing automated testing around these services as part of the extraction.
  • resource intensive and would benefit from the ability to be scaled independently of the monolith.

Mikado Method

A supporting approach in this process is the Mikado Method, outlined by Ola Ellnestam and Daniel Brolund, which can be used to map out the dependencies. The Mikado Method is an exploratory method for planning change comprising four steps:

  • Set a Goal
  • Experiment
  • Visualize
  • Undo

The Mikado method offers several advantages:

  • Supports making incremental rather than big bang changes.
  • Increases communication and collaboration by providing a visualization of the changes that can be shared, reviewed and contributed to by the group, drawing on everyone’s knowledge.
  • Lightweight and goal-focussed: easy to learn and can be carried out with pen and paper or a whiteboard.

In the context of planning the split-out of services, we can set a goal of moving the Member Service into its own microservice. The Experiment phase reviews the Member Service and identifies the dependencies that need to be addressed, in a step-wise fashion, to achieve the goal. These are visualized as a Mikado graph, which can then be used to validate and plan the change as well as to support collaboration and communication of the path to the goal.

The Undo step caters for things going wrong. Hopefully the Experiment and Visualize phases identify all of the dependencies, but where something is missed the Undo step, in the spirit of “fail fast”, gives us the means to step back and redefine the dependencies for the change.

Legacy in a Container

In some environments the monolith may be connected to older legacy platforms, perhaps a legacy accounting system that cannot be removed, or we may be left with residual components of the monolith for an extended period of time. As the microservices architecture evolves, these components can become a drag on team productivity. Strategies such as creating virtual machine images or container-based instances of the legacy components, included in the developers’ sandbox build, can reduce this drag.
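One way to do this, sketched below, is to start the legacy system as a disposable container from each developer’s build or test run. The example assumes the legacy accounting system has been packaged as a container image (the image name is hypothetical) and uses the Testcontainers library, which is one option rather than anything required by the approach.

```java
import org.testcontainers.containers.GenericContainer;

public class LegacySandbox {
    public static void main(String[] args) {
        // Hypothetical image built from the legacy accounting system.
        try (GenericContainer<?> legacyAccounting =
                     new GenericContainer<>("registry.example.com/legacy-accounting:latest")
                             .withExposedPorts(8100)) {
            legacyAccounting.start();
            String baseUrl = "http://" + legacyAccounting.getHost()
                    + ":" + legacyAccounting.getMappedPort(8100);
            System.out.println("Legacy accounting available at " + baseUrl);
            // ... run services or tests that depend on the legacy system,
            // then the container is discarded when this block exits.
        }
    }
}
```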

Service mocking and virtualization can also be used to represent the interfaces with legacy components and any third-party APIs the platform interacts with. These mocks can provide consistent result sets for both the success and failure conditions that services need to support.
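At its simplest, such a mock can be a small hand-rolled stub like the one below (plain Java, hypothetical payments endpoints) that always returns the same canned success and failure responses; dedicated tools such as WireMock or a service virtualization product offer the same idea with richer matching and record-and-replay features.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Mock of a third-party payments API: canned success and failure responses so
// services can be tested against both conditions without the real dependency.
public class PaymentsApiMock {
    public static void main(String[] args) throws Exception {
        HttpServer mock = HttpServer.create(new InetSocketAddress(7070), 0);

        mock.createContext("/payments/authorise", exchange ->
                respond(exchange, 200, "{\"status\":\"AUTHORISED\",\"reference\":\"MOCK-001\"}"));

        mock.createContext("/payments/declined", exchange ->
                respond(exchange, 402, "{\"status\":\"DECLINED\",\"reason\":\"insufficient funds\"}"));

        mock.start();
    }

    private static void respond(HttpExchange exchange, int status, String json)
            throws IOException {
        byte[] body = json.getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().add("Content-Type", "application/json");
        exchange.sendResponseHeaders(status, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}
```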

Organisation and Collaboration

Moving from a monolithic architecture to a microservices architecture can be challenging organisationally. Monolithic architectures typically come with a background of siloed teams, where the development team passes changes to a test team, then to a deployment team and finally to an operations team. Microservices can, and should, be a catalyst for organisational change.

Microservices support moving to a team structure where each team is responsible for one or more microservices. These teams are multidisciplinary, including developers, designers, testers and dev-ops skilled team members, among others. The aim is to provide collective ownership of the services from conception to operation, instilling a focus on quality and automation.

Enhancing collaboration between microservices teams is also important, so that lessons learned are shared between teams as part of the microservices initiative. Establishing a level of standardisation and governance is another important collaborative step: technology standardisation ensures that what is developed is deployable on shared infrastructure, while API contract standardisation and documentation ensure that microservices can be consumed and reused across the product and between teams.

Wrapping Up

A carefully executed, progressive migration away from an existing monolithic application, one which pragmatically addresses any remaining legacy components in the release chain and includes the organisational changes needed to support the new architecture, can deliver faster and more agile outcomes in a brownfield environment.