Monolith to Microservices Migration Architecture

Jakub Moravec
MANTA Engineering Blog
13 min read · Mar 13, 2023

Our goal is to refactor the large codebase of an on-premises monolithic application and migrate it to microservices. We would like to start the refactoring in all parts of the codebase, as it is going to be a long and tedious process, but we don’t want to migrate everything at once. We cannot suddenly add several new processes/services, as the platform is not ready for that. We also don’t want to start dozens of services for local development, testing, or automated tests.

What we need is an architecture flexible enough to allow the deployment of both microservices (dozens of standalone services) and a monolithic web application from the same codebase.

In this article, I would like to share how we extended a well-known architectural pattern called Ports & Adapters Architecture (or Hexagonal Architecture) to achieve this. The main goal of this architectural pattern (as described by Alistair Cockburn) is to decouple the business logic of the application from front-end (e.g. UIs) and back-end (e.g. database) technologies. On top of this, I will describe and demonstrate how to achieve deployment flexibility by adding client libraries to the mix.

Ports & Adapters Architecture

There are tens to hundreds of articles on the Ports & Adapters Architecture out there, so I will be very brief in this section and describe only what is essential for the understanding of the enhancements we have made.

The main idea is that the business code of the service (sometimes referred to as the hexagon) prescribes Ports that:

  • can be called by anything that needs to use the service — the caller can be, for example, a UI (ideally not directly but via some controller), another service (directly or via a controller), or test automation infrastructure.
  • define an SPI (Service Provider Interface) for anything the service needs to execute its business logic. For example, a database, a message queue, or another service.

In both cases, the artifacts that are using (e.g., controllers) or implementing (e.g., a repository) the ports depend on the service (hexagon), while the service doesn’t depend on them. This is a significant difference from Layered Architecture, where upper layers (e.g., business layer) depend on lower layers (e.g., persistence layer). That’s how Ports & Adapters Architecture ensures that business logic doesn’t depend on technologies used for UIs, repositories, and integrations.

Below is an example of a service able to support two different technologies for inbound communication (controllers) and two different technologies for data persistence.

Initial implementation of Ports & Adapters Architecture in MANTA

With these concepts in mind, we started using the architectural pattern in the MANTA codebase. Our technology stack is primarily Java, Spring Boot, and Maven, so these are the technologies I will use to describe our initial and current solution, and which I will use in the code example.

Each of the entities shown in the diagram above is represented by a Maven artifact in our codebase. The hexagon is further split into multiple service artifacts (one for each use case/module) and a domain model. All the Maven artifacts are linked from an app Maven artifact, which produces the deployable Spring Boot fat JAR.

This all is described in detail in an excellent article written by my colleague David Bucek, so I won’t go into much detail here. This architecture has proved very useful when decoupling our business logic from the implementation details of our underlying graph database.

Terminology note: the Ports & Adapters architecture describes two types of ports. One type is driving (or primary) ports. These ports are used by primary actors (whatever is using the service, for example, a UI) and drive the state of the application. The other type is driven (or secondary) ports, which are implemented by secondary actors that the service drives (uses). For example, the service logic decides when data is loaded/stored in a database. We also map this terminology to the names of Java classes / Spring Beans — classes serving as driving ports are called input ports, and classes serving as driven ports are called output ports. This can be a bit confusing — if you want to understand this better, I recommend reading Alistair Cockburn’s article that I referenced earlier.

Enhanced architecture allowing flexible deployments

As we started migrating our monolithic architecture to microservices, we soon hit the need to deploy the individual services both separately (each service standalone) and together (in a simplified deployment). While most production instances of our application would benefit from all the scalability and flexibility of the standard full-fledged Kubernetes deployment, the use cases for a simplified deployment are no less significant. The most important ones are user-friendly local development and testing. Another reason is that it lets us decide when and which services should be deployed standalone as we gradually migrate away from the monolithic deployment.

The first part of the solution is to allow the services to communicate over the network when they are deployed standalone. To do that in a scalable and protocol-agnostic manner, we added the concept of Client Libraries. Each MANTA service that has a controller (e.g., REST/RPC/messaging) also provides a client library that other services can use to call that controller. The diagram below shows how Service B uses a client library of Service A to send requests to its API.

While this solution allows communication over the network, it is not optimal for a simplified deployment. Yes, we could have the services communicating over HTTP even when deployed on the same host, but isn’t it a bit of overkill? We’d need to set up all the ports and bootstrap TLS, and the network communication overhead would remain even when it is not necessary.

Luckily, there is a better way. The most general solution is to create a new interface artifact for the client library and create another controller/client library implementation pair. This time, the client library wouldn’t communicate over the network but would utilize Java method calls instead.
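To make that general solution a bit more concrete, here is a minimal sketch of what a dedicated client-library interface and its in-process implementation could look like. All names here (ServiceAClientApi, ServiceALocalClient, ServiceAInputPort, ResultDTO) are illustrative only and do not appear in the example later in the article; imports are omitted for brevity, as in the other listings.

// Hypothetical, dedicated client-library interface of Service A,
// decoupled from Service A's Input Port
public interface ServiceAClientApi {
    List<ResultDTO> find(String query);
}

// In-process client library: instead of an HTTP round trip, it delegates
// to Service A's Input Port with a plain Java method call
public class ServiceALocalClient implements ServiceAClientApi {

    private final ServiceAInputPort serviceAInputPort;

    public ServiceALocalClient(ServiceAInputPort serviceAInputPort) {
        this.serviceAInputPort = serviceAInputPort;
    }

    @Override
    public List<ResultDTO> find(String query) {
        return serviceAInputPort.find(query);
    }
}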

In this article, I will simplify this solution and benefit from the fact that I do not need to change the interface of the service and the interface of the client libraries independently. I can thus join these interfaces and make my HTTP client library implement the Input Port interface (just as the Service artifact does).

Let’s see how this would work with standalone services:

There are several facts worth noting:

  • Service B business logic now does not have a compile-time dependency on the client library; it references the classes of the Input Port of Service A. Thanks to dependency injection, any artifact whose classes implement the Input Port can be used in Service B.
  • Input Port of Service A has to define its own model (DTO classes); it can no longer depend on the Service A Model artifact. Otherwise, it would expose the internal domain model of Service A to Service B, thus creating a very dangerous coupling. Service A business logic artifact can convert the Input Port model to the domain model. That way, the public and internal models of Service A remain decoupled.

So this solution works for the standalone deployment of individual services. What has to be changed to simplify the deployment and deploy everything on the same host? Not much. We will create a new assembly artifact that will depend on artifacts of both services but will not include either the controller or the client library. All the changes can be done in the assembly artifact, and no other code has to be changed.

Example application

I will demonstrate the architecture described above in a simple example featuring a social network platform consisting of two services — Friends service and Posts service. The social network provides the following very exciting features — to list/find your friends, and to see (and potentially filter) their posts. If the name of the author is used as the filter, the Posts service needs to reach out to the Friends service and retrieve corresponding user IDs (nicknames) for their names.

I will be showing just some parts of the code in the article, but the whole codebase can be found here. The example is written in Java using the Spring Boot framework.

Let’s start with the Input Ports of both services, as that is the right place to go to understand the contract of the service.

Here is the Input Port of the Friends service, and the model class it is using. As mentioned earlier, the Input Port needs to define its own model to ensure sufficient separation of concerns (otherwise, if the domain model of the service had been used, the other service would depend on it).

public interface FriendsInputPort {
    List<FriendDTO> listFriends(String fullName);
}

@AllArgsConstructor
@NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) // for Jackson
@Getter
public class FriendDTO {
    private final String nick;
    private final String fullName;
}

Now let’s see the Input Port of the Posts service. The two methods are designed so that the implementation clearly shows the difference between a method that doesn’t require the functionality of another service and one that does.

public interface PostsInputPort {

    /**
     * Searches posts by user unique id - nickname. The Posts service contains all necessary information for this operation.
     *
     * @param userNick user nickname
     * @return filtered posts
     */
    List<String> listPosts(String userNick);

    /**
     * Searches posts by user name. The Posts service needs to call the Friends service for this operation.
     *
     * @param fullName username
     * @return filtered posts
     */
    List<String> listPostsByName(String fullName);
}

Let’s now have a look at what the implementation of the Posts service Input Port looks like.

@Service
@AllArgsConstructor
public class PostsService implements PostsInputPort {

    private final PostsRepositoryPort postsRepositoryPort;
    private final FriendsInputPort friendsInputPort;

    @Override
    public List<String> listPosts(String userNick) {
        return postsRepositoryPort.listPosts(userNick);
    }

    @Override
    public List<String> listPostsByName(String fullName) {
        List<FriendDTO> friends = friendsInputPort.listFriends(fullName);
        return friends.stream()
                // get the user unique ID that Posts service repository can work with
                .map(friend -> postsRepositoryPort.listPosts(friend.getNick()))
                .flatMap(Collection::stream)
                .collect(Collectors.toList());
    }
}

The implementation shows that the PostsService class has a reference to the Friends service Input Port (dependency injection provides the correct implementation of that Input Port, but more on that later). This decision can be a bit controversial. Strictly speaking, Ports & Adapters Architecture says that the Friends service Input Port should not be referenced from the Posts service business logic artifact (from the hexagon), but rather from an Output Port of the Posts service that the business logic artifact would use. In other words, there can be one more layer of abstraction that would ensure that the usage of the Friends service can be replaced with any other service easily, without adjusting the Posts service business logic. I personally don’t find this worth the extra work in cases where both services are produced by the same organization, but you should consider whether that is something you might benefit from.
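For illustration, here is a minimal sketch of what that extra abstraction could look like. The names FriendsLookupPort and FriendsLookupAdapter are hypothetical and do not exist in the example repository; imports are again omitted for brevity.

// Hypothetical Output Port owned by the Posts service; its adapter hides
// which concrete service provides the nickname lookup
public interface FriendsLookupPort {
    List<String> findNicksByFullName(String fullName);
}

// Adapter living outside the Posts service hexagon; it delegates to the
// Friends service Input Port and maps its DTOs to plain nicknames
@Service
@AllArgsConstructor
public class FriendsLookupAdapter implements FriendsLookupPort {

    private final FriendsInputPort friendsInputPort;

    @Override
    public List<String> findNicksByFullName(String fullName) {
        return friendsInputPort.listFriends(fullName).stream()
                .map(FriendDTO::getNick)
                .collect(Collectors.toList());
    }
}

With this in place, PostsService would depend only on FriendsLookupPort and would not know whether the nicknames come from the Friends service or from any other source.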

Both services also have Repository Output Port, Repository, and (HTTP) Controller artifacts. You can check them out on GitHub — there is nothing too dramatic going on in them.
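For orientation, a simplified sketch of what the Friends service HTTP Controller could look like follows; the actual class on GitHub may differ in details such as request mapping and error handling, and imports are omitted as in the other listings. The path matches the URL used in the configuration later in the article.

// Simplified sketch of the Friends service HTTP Controller: it only translates
// HTTP concerns and delegates straight to the Input Port
@RestController
@AllArgsConstructor
public class FriendsController {

    private final FriendsInputPort friendsInputPort;

    @GetMapping("/internal/v1/friends")
    public List<FriendDTO> listFriends(@RequestParam(required = false) String fullName) {
        return friendsInputPort.listFriends(fullName);
    }
}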

The Friends service has two implementations of its Input Port — one being its business logic artifact and the other the client library. This will be important for the two deployments that I’m going to define later.

The FriendsService class also illustrates that the business logic artifact has to handle the conversion between the Input Port model and the Domain model of the service.

@Service
@AllArgsConstructor
public class FriendsService implements FriendsInputPort {

    private final FriendsRepositoryPort friendsServiceRepository;

    @Override
    public List<FriendDTO> listFriends(String fullName) {
        List<Friend> friends;
        if (fullName == null) {
            friends = friendsServiceRepository.listFriends(null, null);
        } else {
            String[] nameParts = fullName.split(" ");
            friends = friendsServiceRepository.listFriends(nameParts[0], nameParts[1]);
        }
        return friends.stream()
                .map(friend -> new FriendDTO(friend.getNick(), friend.getFirstName() + " " + friend.getLastName()))
                .collect(Collectors.toList());
    }
}

Unlike the business logic implementation of the Input Port, which uses the Repository Output Port to retrieve the entries, the client library implementation uses an HTTP client to call the Friends service Controller.

@Service
public class FriendsClient implements FriendsInputPort {

    private final RestTemplate restTemplate;
    private final ObjectMapper mapper;
    private final String friendsServiceUrl;

    public FriendsClient(RestTemplateBuilder restTemplateBuilder,
                         @Value("${example.friends.service.get.friends.url}") String friendsServiceUrl) {
        this.restTemplate = restTemplateBuilder.build();
        this.friendsServiceUrl = friendsServiceUrl;
        mapper = new ObjectMapper();
    }

    @Override
    public List<FriendDTO> listFriends(String fullName) {
        TypeReference<List<FriendDTO>> typeReference = new TypeReference<>() {};
        String requestUrl = friendsServiceUrl + "?fullName=" + fullName;
        String response = this.restTemplate.getForObject(requestUrl, String.class);

        try {
            return mapper.readValue(response, typeReference);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }
}

Now that we have all the code blocks in place, let’s prepare the deployments. First, let’s start with the independent deployment of two separate services.

Deployment of two standalone services

Each service will have its own @SpringBootApplication, and the artifact containing it will depend on the Controller, business logic, and Repository artifacts of the corresponding service. In addition to that, the Posts service deployment will depend on the client library of the Friends service, as that is the implementation that dependency injection will provide to its business logic artifact to fetch the list of friends from the other service.
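For reference, the entry point of such a deployment can be as small as the following sketch; the actual PostsServiceApp in the repository may differ in package layout and component-scanning setup, and imports are omitted as in the other listings.

// Standalone Posts service entry point. With friends-service-client on the
// classpath, dependency injection wires FriendsClient (the HTTP client library)
// into PostsService as the FriendsInputPort implementation.
@SpringBootApplication
public class PostsServiceApp {

    public static void main(String[] args) {
        SpringApplication.run(PostsServiceApp.class, args);
    }
}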

Here is what the (Maven) dependencies of the Posts service deployment artifact look like:

<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-controller</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-logic</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-repository</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-client</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

And of course, the Posts service needs to know the URL of the Friends service. It will be provided to it via its application.yml configuration file.

server:
  port: 8089
example:
  friends:
    service:
      get:
        friends:
          url: http://localhost:8088/internal/v1/friends

The deployment artifact of the Friends service is analogous; the only difference is that it doesn’t need a dependency on any client library, and it doesn’t need the URL of any other service. Presumably, its dependency section contains only the Friends service’s own artifacts, along the lines of the listing below.
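<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-controller</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-logic</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-repository</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>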

With the two deployments ready, we can spin both services up (using the mvn spring-boot:run command) and call the Posts service API endpoint.

http://localhost:8089/internal/v1/posts?fullName=Jane Doe

The Posts service will reach out to the Friends service using the HTTP client, fetch the information about the person named Jane Doe, retrieve her nickname, and use it to query the Posts service repository.

Simplified deployment

Let’s achieve the same, only this time, we won’t have two services running on two different ports. Rather, we will have only one deployment that will contain the necessary artifacts of both services.

We will need a new deployment artifact with the following dependencies:

<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-logic</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>friends-service-repository</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-logic</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-repository</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>com.github.jakub-moravec</groupId>
    <artifactId>posts-service-controller</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

The business logic and Repository artifacts of both services are needed to provide the required functionality. The HTTP Controller of the Posts service is needed because that is the API endpoint we will use to test the deployment, but we don’t need the Controller or the client library artifacts of the Friends service, as there will be no HTTP communication between the two services. Instead, the FriendsService class will be the implementation of the Friends service Input Port that dependency injection provides to the Posts service. The Posts service will thus perform a simple Java method call to retrieve the list of friends.
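The entry point of the simplified deployment can then look roughly like the sketch below; the actual SimplifiedDeploymentApp may differ in package layout and scanning configuration, and imports are omitted as in the other listings.

// Single-process entry point for the simplified deployment. Since
// friends-service-client is not on the classpath, the only FriendsInputPort
// bean available is FriendsService, so dependency injection wires it into
// PostsService and the cross-service call becomes a plain Java method call.
@SpringBootApplication
public class SimplifiedDeploymentApp {

    public static void main(String[] args) {
        SpringApplication.run(SimplifiedDeploymentApp.class, args);
    }
}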

The deployment can be tested using the same HTTP call as the previous one.

Code Organization

Let’s now have a look at how the code is organized. As mentioned earlier, the classes are organized in Maven artifacts corresponding to the boxes in the diagrams shown above.

Friends Service:

  • friends-service-app (deployment artifact of Friends service)
    - FriendsServiceApp.java
  • friends-service-client
    - FriendsClient.java
  • friends-service-controller
    - FriendsController.java
  • friends-service-inputport
    - FriendsInputPort.java
    - FriendDTO.java
  • friends-service-logic
    - FriendsService.java
  • friends-service-model
    - Friend.java
  • friends-service-outputport
    - FriendsRepositoryPort.java
  • friends-service-repository
    - FriendsInMemoryRepository.java

Posts Service:

  • posts-service-app (deployment artifact of Posts service)
    - PostsServiceApp.java
  • posts-service-controller
    - PostsController.java
  • posts-service-inputport
    - PostsInputPort.java
  • posts-service-logic
    - PostsService.java
  • posts-service-outputport
    - PostsRepositoryPort.java
  • posts-service-repository
    - PostsInMemoryRepository.java

Simplified deployment:

  • simplified-deployment (deployment artifact)
    - SimplifiedDeploymentApp.java

To understand the dependencies between the artifacts, please see the diagrams above, or go through the Maven pom files in the git repository.

You can see that adhering to the architecture described above means that the code organization will get quite complex. For two services and two deployments, I had to create 15 Maven artifacts. It is important to keep this in mind when considering adopting this architecture. The overhead related to the many artifacts might not be worth the gains for small and medium-sized applications. Also, make sure that your CI/CD infrastructure is ready for it.

On the other hand, it is also good to mention that while the code organization is more complex, the code itself was not affected at all as a result of the selected architecture. If you decided to go for Layered Architecture instead, your code base would probably consist of a very similar set of classes that would contain very similar code constructs. This shows how much more flexible we can make the application while only changing the structure of the artifacts and their dependencies.

Conclusion

I’ve described and demonstrated in a working example how to make small adjustments to the code structure and achieve flexible deployments of any application consisting of multiple services. This approach can be very helpful when migrating a monolithic application to microservices architecture, as it allows for gradual — no big bang — migration. You can incrementally refactor your entire codebase while keeping the deployment simple and then just decide which services should be deployed standalone and when.
