Modernize Monolith Java Application to Containerized Microservices — Case Study Guide
Move from a Java monolith to Spring Boot in 4 steps!
I recently consulted for a client whose technology stack is primarily Java. Most of their applications were 5-to-15-year-old Java applications written against pre-Java 1.8 APIs using Spring XML configuration. To better support and enhance these applications, we undertook an app-breakout and modernization effort. Here are my findings and the steps we took, which may help others in a similar attempt. Though this article focuses on Java applications, I feel the approach applies conceptually to other technologies as well.
Each step could be expanded into a separate article (and I plan to do that later), but I wanted to capture my experience here to get some feedback.
Step 1: Get some test coverage
Before we changed any code, the first thing we did was create some integration tests.
- Write integration tests first. Identify the entry points of the application (such as a Servlet, Controller, or primary method) and focus on those to create the integration tests.
- Expect this to be hard. Because the application code was written a relatively long time ago, the system was not very “mockable.” That is the biggest hurdle to cross.
- Generate a code coverage report (with a tool like JaCoCo) and review which conditions are not covered. Iterate on this and the previous step a few times until you reach your desired code and test coverage.
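As a sketch of the idea, a characterization test pins down the current behavior of an entry point so it can serve as a baseline later. `LegacyOrderService` and its expected total are hypothetical stand-ins, not the client's actual code:

```java
import java.util.List;

// Hypothetical stand-in for a legacy entry point that is hard to mock.
class LegacyOrderService {
    // Pretend this method hides years of accumulated business rules.
    double totalWithTax(List<Double> lineItems, double taxRate) {
        double sum = 0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum * (1 + taxRate);
    }
}

public class OrderServiceCharacterizationTest {
    public static void main(String[] args) {
        LegacyOrderService service = new LegacyOrderService();
        // Record today's actual output as the baseline ("golden master"):
        double total = service.totalWithTax(List.of(10.0, 20.0), 0.1);
        if (Math.abs(total - 33.0) > 1e-9) {
            throw new AssertionError("Behavior changed: expected ~33.0, got " + total);
        }
        System.out.println("baseline holds: " + total);
    }
}
```

The point is not elegance: the test records what the system does today, so any later refactoring that changes the answer fails loudly.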
The reason to start here is that, once the refactoring and app breakout are finished, we can use these integration test results as a baseline to confirm the breakout has not adversely impacted behavior.
Step 2: Identify logical new applications’ boundaries
This is probably the most difficult activity; how hard it is depends on how much you know about what the app does from a business-process point of view. In my case, I did not have that knowledge, so while creating the integration tests (Step 1), I gained a certain amount of expertise. This is also the time to find someone in the organization you can talk to in order to understand more about how the application is used (Gemba!!).
- Identify the new applications’ context boundaries. The preference is to split by “domain,” slicing the application vertically, but given how most old Java applications were written, with classes packaged by layer (controllers/DAOs/models/services, etc.), that may not be entirely possible.
- We took a pragmatic approach: we separated the data access layers (such as communication with the Mainframe through IBM MQ) into a separate service (a horizontal slice), and then sliced vertically wherever we could.
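One way to picture these boundaries in code is a seam: the horizontally sliced data-access layer hides behind an interface, so a vertically sliced domain service never depends on IBM MQ classes directly. All names here (`MainframeGateway`, `BillingService`) are hypothetical illustrations, not the client's actual classes:

```java
// Hypothetical seam for the horizontally sliced data-access service.
interface MainframeGateway {
    String lookupAccount(String accountId);
}

// The real implementation would wrap IBM MQ calls; this fake stands in for tests.
class FakeMainframeGateway implements MainframeGateway {
    @Override
    public String lookupAccount(String accountId) {
        return "ACCOUNT:" + accountId;
    }
}

// A vertically sliced domain service depends only on the interface,
// so it can later live in a separate application from the gateway.
class BillingService {
    private final MainframeGateway gateway;

    BillingService(MainframeGateway gateway) {
        this.gateway = gateway;
    }

    String describe(String accountId) {
        return "billing for " + gateway.lookupAccount(accountId);
    }
}

public class BoundaryDemo {
    public static void main(String[] args) {
        BillingService billing = new BillingService(new FakeMainframeGateway());
        System.out.println(billing.describe("42"));
    }
}
```

Introducing such interfaces early also makes the system more “mockable,” which feeds back into the Step 1 test effort.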
Step 3: Wrap the new applications under Spring Boot
This is the most fun part of the refactoring effort for developers. Here are our rules of thumb (but, as with any rule, they hold only in the right context):
- Wrap finite processes (like a batch job, usually run via cron) to run under the Spring Boot scheduler.
- Wrap infinite processes (like web/servlet services) to run as Spring Boot RestControllers.
- Wrap horizontally split applications (like communication with the Mainframe through IBM MQ) behind RabbitMQ, Kafka, or even a RestController.
- As an additional effort, we upgraded the various libraries to versions close to the latest (or the LTS version) without changing too much code.
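To show the shape of wrapping a finite process, here is a Spring-free sketch using the JDK's `ScheduledExecutorService` as a stand-in; under Spring Boot the same `run()` method would instead be invoked from a `@Scheduled` method. The job class and its timing are hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical legacy batch job that used to be launched by cron.
class NightlyReconciliationJob {
    private int runs = 0;

    void run() {
        runs++; // stand-in for the real batch work
    }

    int runCount() {
        return runs;
    }
}

public class SchedulerWrapperDemo {
    public static void main(String[] args) throws InterruptedException {
        NightlyReconciliationJob job = new NightlyReconciliationJob();
        CountDownLatch done = new CountDownLatch(3);

        // JDK stand-in for the Spring Boot scheduler: the legacy run() method
        // is now triggered in-process instead of by an external cron entry.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            job.run();
            done.countDown();
        }, 0, 50, TimeUnit.MILLISECONDS);

        done.await(5, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        System.out.println("job ran " + job.runCount() + " times");
    }
}
```

The benefit of this wrapping is that the job's schedule, logging, and health checks now live inside the application rather than in an external crontab.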
Step 4: Convert applications using applicationContext.xml to annotation-based bean declaration
We initially skipped this step. Big mistake! As we moved these newly created microservices forward, the need for monitoring and config management became extremely important. And because we had just wrapped the old code under Spring Boot while still keeping the bean definitions in XML files, we could not effectively use Spring Boot’s config-server and Actuator magic.
- Migrate the bean definitions from XML to *Config.java files.
- Access the beans by auto-wiring the ApplicationContext, which is available in Spring Boot.
- Move the configuration properties to the config server of your choice.
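The migration has roughly this shape. This sketch is deliberately Spring-free so it runs on its own; in real code the config class would carry `@Configuration`, the factory method `@Bean`, and callers would obtain the bean through the ApplicationContext. `GreetingService` and `AppConfig` are hypothetical examples:

```java
// Before (applicationContext.xml), roughly:
//   <bean id="greetingService" class="GreetingService">
//     <constructor-arg value="Hello"/>
//   </bean>
//
// After: the bean definition lives in a *Config.java class instead of XML.

class GreetingService {
    private final String prefix;

    GreetingService(String prefix) {
        this.prefix = prefix;
    }

    String greet(String name) {
        return prefix + ", " + name;
    }
}

// Plays the role of a @Configuration class: construction logic is now
// plain Java, visible to the compiler and refactoring tools.
class AppConfig {
    GreetingService greetingService() {
        return new GreetingService("Hello");
    }
}

public class ConfigMigrationDemo {
    public static void main(String[] args) {
        GreetingService service = new AppConfig().greetingService();
        System.out.println(service.greet("world"));
    }
}
```

Once the beans are declared in Java, externalizing the `"Hello"` prefix to a config server becomes a one-line change instead of an XML edit.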
- We have now created a set of applications that we run under docker-compose, so there is a documented way of running the old monolith application even though we have split it up for better manageability.
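For illustration, such a compose file might look like the following; the service and image names are hypothetical placeholders, not the client's actual services:

```yaml
# Illustrative docker-compose.yml; names are hypothetical.
version: "3.8"
services:
  billing-service:
    image: example/billing-service:latest
    ports:
      - "8081:8080"
  mainframe-gateway:
    image: example/mainframe-gateway:latest
    environment:
      - MQ_HOST=mq
  mq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
```

A single `docker compose up` then reproduces the behavior of the old monolith from its new parts, which doubles as living documentation of the system's topology.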
- One of the most significant benefits of starting with sufficient code/test coverage is that, every step of the way, we were confident we had not broken anything while refactoring the code.
- One of the benefits of splitting out the Mainframe/IBM MQ communication into its own service is that we can manage throughput and backpressure within that service. Historically, during peak shopping season (Black Friday/Cyber Monday), we tended to overwhelm the Mainframe; we think this can now be managed much more effectively.
Thanks for reading. I will write a few more articles going in-depth on each of these four steps with code snippets/examples. Feedback is welcome!