How to build a CI/CD pipeline for your enterprise middleware platform
Continuous Integration and Continuous Deployment (or Delivery), a.k.a. CI/CD, is one of the most talked-about ideas in enterprise software development. With the rise of microservices architecture (MSA), it has become a mainstream process within enterprises. If you are familiar with microservices architecture, you have probably heard about green-field and brown-field integrations, where you start your microservices journey either from scratch or from an already existing enterprise architecture (the latter being the case roughly 80% of the time). According to this[1] survey, more and more organizations are moving ahead with microservices architecture even though they accept that it is really hard to maintain and monitor (of course, tools are emerging to cover these aspects). The survey shows that the advantages of MSA outweigh the disadvantages I mentioned above. CI/CD is tightly coupled with MSA and the DevOps culture, and due to the dominance of MSA within enterprises, CI/CD has become an essential part of every software development lifecycle in the enterprise.
[1] https://go.lightstep.com/global-microservices-trends-report-2018.html
With this shift in the enterprise towards MSA, DevOps and the CI/CD culture, the other parts of the brown-field cannot stay out of these waves. These “other parts” typically consist of:
- Enterprise Middleware (ESB/APIM, Message Broker, Business Process, IAM products)
- Application Servers (Tomcat, WebSphere)
- ERP/CRM software (mainly COTS systems)
- Homegrown software
Sometimes it might not be practical to implement CI/CD processes for every software component mentioned above. In this article, I’m going to talk about how we can bring the advantages of a CI/CD process to enterprise middleware components.
Let’s start with one of the most common enterprise middleware products, the Enterprise Service Bus (ESB). An ESB provides the central point that interconnects the heterogeneous systems within your enterprise and adds value to your enterprise data through enrichment, transformation and many other functions. One of the main selling points of ESBs is that they are easy to configure through high-level Domain Specific Languages (DSLs) like Synapse, Camel, etc. If we are to integrate an ESB with a CI/CD process, we need to consider two main components within the product:
- ESB configurations which implement the integration logic
- Server configurations which install the runtime in a physical or virtualized environment
Of the two components above, the ESB configurations go through continuous development and change more frequently, so automating the development and deployment of these artefacts (configurations) is far more critical. Going through the develop, test, deploy lifecycle manually for every minor change takes a lot of engineering time and, without automation, results in many critical issues. Another important aspect of automating the development process is the assumption that the underlying server configurations are not affected by these changes and remain the same. Making this assumption is a best practice, because having multiple variables makes it really hard to validate the implementation and complete the testing. The figure below explains a process flow which can be used to implement a CI/CD process with an ESB.
Figure 1: CI/CD with middleware platform (ESB)
This process automates the development, testing and deployment of integration artefacts.
- Developers use an IDE or an editor to develop the integration artefacts. Once they are done with the development, they commit the code to GitHub
- Once the commit is reviewed and merged to the master branch, the next step is triggered automatically
- A continuous integration tool (e.g. Jenkins, Travis CI) builds the master branch, creates a Docker image containing the ESB runtime and the build artefacts, and deploys it to the staging environment. At the same time, the build artefacts are published to Nexus so that they can be reused when doing product upgrades
- Once the containers are started, the CI tool triggers a shell script that runs the Postman collections using Newman, which is installed on the test client
- The tests run against the deployed artefacts
- Once the tests pass in the staging environment, Docker images are created for the production environment and deployed there
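As a rough sketch, the steps above could be driven by a CI script along these lines. The registry URL, the `deploy.sh` helper and the collection path are assumptions for illustration, not a prescribed layout:

```shell
#!/usr/bin/env bash
# Sketch of a CI driver for the artefact pipeline described above.
# Registry, deploy script and collection paths are illustrative assumptions.
set -euo pipefail

# Derive a traceable image tag from the commit hash (first 7 characters).
image_tag() {
  local commit="$1"
  echo "registry.example.com/esb:${commit:0:7}"
}

run_pipeline() {
  local commit tag
  commit=$(git rev-parse HEAD)
  tag=$(image_tag "$commit")

  mvn clean package          # build the integration artefacts
  mvn deploy                 # publish the artefacts to Nexus for later reuse
  docker build -t "$tag" .   # bake artefacts + ESB runtime into one image
  docker push "$tag"

  ./deploy.sh staging "$tag" # hypothetical helper: roll out to staging
  # Run the Postman collection against staging using Newman
  newman run tests/esb.postman_collection.json
}

# Run only when explicitly requested, so the functions can be sourced elsewhere.
if [[ "${RUN_ESB_PIPELINE:-false}" == "true" ]]; then
  run_pipeline
fi
```

Tagging the image with the short commit hash keeps every staging deployment traceable back to the exact merge that produced it.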
The above-mentioned process can be followed for the development of middleware artefacts. But the runtime itself receives patches, updates and upgrades more often than not, given customer demands and the number of features these products carry. We should consider automating the update of this server runtime component as well.
The way vendors deliver updates, patches and upgrades differs slightly from vendor to vendor, but there are three main methods:
- Updates as patches, which need to be installed, followed by a restart of the running server
- Updates as new binaries, which replace the running server
- Updates as in-flight updates, which update the running server itself (followed by a restart)
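To make the branching concrete, here is a minimal dispatcher over the three update styles. The echoed actions are placeholders standing in for real vendor tooling, which varies per product:

```shell
#!/usr/bin/env bash
# Minimal dispatcher over the three vendor update styles listed above.
# The echoed actions are placeholders; real vendor tooling differs per product.
set -euo pipefail

apply_update() {
  local method="$1"
  case "$method" in
    patch)    echo "install patch into running server, then restart" ;;
    binary)   echo "stop server, replace binaries with new distribution, start" ;;
    inflight) echo "run in-place updater against the live server, then restart" ;;
    *)        echo "unknown update method: $method" >&2; return 1 ;;
  esac
}
```

Keeping the method-specific logic behind one function like this means the rest of the update pipeline stays identical whichever delivery style your vendor uses.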
Depending on the method by which you receive updates, you need to align your CI/CD process for server updates. The following process flow (Figure 2) defines a CI/CD process for server updates, which happen less frequently than development updates.
Figure 2: CI/CD process for server updates
The process depicted in figure 2 above can be used with any of the update scenarios mentioned in the previous section. Here’s the process flow.
- One of the important aspects of automating the deployment is to extract the configuration files and turn them into templates that can be populated through an automated process (e.g. shell, Puppet, Ansible). These templates can be committed to a source repository like GitHub.
- When a new configuration change, update or upgrade is required, it triggers a Jenkins job that takes the configurations from GitHub, and the product binaries (if required), product updates and ESB artefacts from a Nexus repository maintained within your organization. A Docker image is created from these files in this step.
- This Docker image is deployed into the staging environment, and containers are started according to the required topology or deployment pattern.
- Once the containers are started, the test scripts (Postman) are deployed to the test client and the testing process starts automatically (Newman).
- Once the tests are executed and the results are clean, the process moves to the next step.
- Docker images are created for the production environment, instances are deployed, and the containers are started based on the production topology.
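The template-rendering step in particular is easy to sketch in shell. Assuming placeholder markers like `@HOSTNAME@` inside the template (the marker style, variable names and file paths here are all assumptions), a small function can render environment-specific configs before the Docker build:

```shell
#!/usr/bin/env bash
# Render a configuration template by substituting placeholder markers
# from environment variables. The @NAME@ marker style is an assumption.
set -euo pipefail

render_template() {
  local template="$1" output="$2"
  sed -e "s|@HOSTNAME@|${ESB_HOSTNAME}|g" \
      -e "s|@HTTP_PORT@|${ESB_HTTP_PORT}|g" \
      "$template" > "$output"
}

# Example: render the staging transport config before `docker build`.
# export ESB_HOSTNAME=staging.example.com ESB_HTTP_PORT=8280
# render_template config/axis2.xml.template rendered/axis2.xml
```

Tools like Puppet, Ansible or `envsubst` do the same job with more structure; the point is that the committed template never contains environment-specific values.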
With the above-mentioned process flows, you can implement a CI/CD process for your middleware layer. Even though you could merge these two processes into a single one with a condition that branches into two different paths, keeping two separate processes makes them easier to maintain.
If you are going to implement this type of CI/CD process for your middleware ESB layer, make sure that you are using an ESB runtime with the following characteristics:
- Small memory footprint
- Quick start up time
- Immutable runtime
- Stateless
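The “quick start up time” criterion is easy to check empirically. As a sketch (the image name, port and health endpoint below are assumptions for illustration), you can time how long a fresh container takes before it answers requests:

```shell
#!/usr/bin/env bash
# Probe sketch: measure cold-start time of a containerized ESB runtime.
# Image name, port and health endpoint are illustrative assumptions.
set -euo pipefail

# Seconds elapsed between two epoch timestamps.
elapsed() { echo $(( $2 - $1 )); }

measure_startup() {
  local image="$1" start now
  start=$(date +%s)
  docker run -d --rm --name esb-probe -p 8280:8280 "$image" >/dev/null
  # Poll the (assumed) health endpoint until the server answers.
  until curl -fs "http://localhost:8280/healthz" >/dev/null; do
    sleep 1
  done
  now=$(date +%s)
  echo "startup took $(elapsed "$start" "$now")s"
  docker rm -f esb-probe >/dev/null
}
```

Runtimes that start in seconds rather than minutes make the rebuild-and-redeploy cycles in both pipelines above far cheaper.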
The following Medium post describes a pragmatic approach to moving your middleware layer to a microservices architecture along with a CI/CD process.
The Medium post below discusses a practical implementation of a CI/CD process with WSO2 EI (ESB):
https://medium.com/wso2-learning/how-to-build-a-ci-cd-pipeline-for-wso2-esb-wso2-ei-1f7ba3cc833d
The Medium post below discusses a practical implementation of a CI/CD process for WSO2 API Manager.
The GitHub repository below contains the source code of a reference implementation.