Wiring Microservices, Integration Microservices & APIs

Composing a Microservices based Enterprise Solution with MSF4J, BallerinaLang and WSO2 API Manager

Implementing a microservices-based solution can be quite complex depending on the number of independent services it requires, the number of integration services needed at the integration layer, the level of control required over inter-service communication (security, throttling, monitoring, analytics, etc.), and finally the API management capabilities needed at the API layer. Recently, at WSO2 we implemented a proof of concept (POC) demonstrating how such a system can be built with the Microservices Framework for Java (MSF4J), BallerinaLang, and WSO2 API Manager on a container cluster manager. In this blog post, I will explain how each layer was implemented and deployed on OpenShift using Docker, the appropriate build tools, and Kubernetes resources.

Solution Architecture

The above diagram illustrates the solution architecture of this microservices POC. The solution implements a loan applications service for an imaginary bank, allowing its customers to apply for loans online. At the services layer, there are three main services. The customers service provides a RESTful interface for managing customer information with CRUD operations. The loans service keeps track of the loans taken by each customer. Once a loan is taken, a credit record is added to the customer profile via the credits service. The credits service may also track any other credit taken by customers via credit cards and other credit channels.

At the storage layer, there are three dedicated databases for the customers, loans, and credits services, adhering to microservices architecture principles. Three other databases are used by WSO2 API Manager, API Manager Analytics, and Business Process Server for managing API, analytics, and business process data. The Business Process Server is included in this solution to demonstrate the workflow management features of the API Manager; otherwise, it is an optional component. Moreover, a central user store database allows users to log in to all WSO2 components with the same credentials. If required, this can also be connected to an LDAP server.

Microservices

The microservices in this solution have been implemented with MSF4J according to the Java API for RESTful Web Services (JAX-RS) specification. Database access is implemented with plain SQL over JDBC, following the Data Access Object (DAO) design pattern. Standard JAX-RS and Swagger annotations are used for automatically generating Swagger API definitions, served by the same service interface under a specific context path (/<api-context>/swagger). The implementation of the GET all customers method of the customers service can be seen below:

@Path("/")
public class CustomersService {

    private static final Log logger = LogFactory.getLog(CustomersService.class);

    @GET
    @Path("/")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(
            value = "Return all customers",
            notes = "Returns HTTP 404 if customer doesn't exist")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = ""),
            @ApiResponse(code = 404, message = "Particular exception message")})
    public Response getCustomers() {

        logger.info("HTTP GET / resource invoked");
        CustomerDAO customerDAO = new CustomerDAO();
        List<Customer> customers = customerDAO.getCustomers();

        if (customers != null) {
            return Response.status(Response.Status.OK).entity(customers).build();
        } else {
            return Response.status(Response.Status.NOT_FOUND).entity("").build();
        }
    }
    ...
}
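The CustomerDAO referenced above encapsulates all database access behind plain methods. As a rough sketch of the pattern (the actual POC uses plain SQL over JDBC against MySQL; the Customer fields and the in-memory store below are stand-ins purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative Customer model; the real POC class may carry more fields.
class Customer {
    private final int id;
    private final String name;

    Customer(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public String getName() { return name; }
}

// DAO pattern sketch: the service layer only ever sees getCustomers() and
// addCustomer(). The real implementation would run "SELECT * FROM customers"
// over a JDBC connection instead of reading an in-memory list.
class CustomerDAO {
    private final List<Customer> store = new ArrayList<>();

    public List<Customer> getCustomers() {
        return new ArrayList<>(store);
    }

    public void addCustomer(Customer customer) {
        store.add(customer);
    }
}
```

The benefit of the pattern is that the JAX-RS resource class never touches SQL directly, so the persistence mechanism can change without affecting the service interface.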

These microservices are built with Maven, and the resulting JAR files are self-contained, hosting the services themselves. Internally, MSF4J uses Netty, a high-performance, asynchronous, event-driven network application framework, for exposing the services via HTTP. The Fabric8 Docker Maven plugin generates the Docker images, with instructions specifying the container user ID and the java command that starts the service listeners:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <configuration>
        <images>
            <image>
                <name>imesh/wso2-microservices-poc-customers-service:${project.version}</name>
                <alias>customers-service</alias>
                <build>
                    <from>openjdk:8</from>
                    <assembly>
                        <descriptorRef>artifact</descriptorRef>
                    </assembly>
                    <runCmds>useradd --system --uid 1000000000 --gid 0 --no-log-init wso2user</runCmds>
                    <user>1000000000</user>
                    <cmd>java -jar maven/customers-service-${project.version}.jar</cmd>
                </build>
            </image>
        </images>
    </configuration>
    ...
</plugin>

Out of the box, MSF4J services expose their listeners via HTTP. As a security measure, it is better to use SSL end to end, including for internal communication within the container cluster manager. This avoids exposing data in plaintext on the network if an intruder gains access to a container through a vulnerability in the system. HTTPS can be enabled in the microservices by specifying the SSL configuration in the Netty configuration file (netty-transports.yml) as follows:

listenerConfigurations:
  -
    id: "msf4j-http"
    host: "0.0.0.0"
    port: 8080
    bossThreadPoolSize: 2
    workerThreadPoolSize: 250
    execHandlerThreadPoolSize: 60
    parameters:
      -
        name: "execThreadPoolSize"
        value: 60
  -
    id: "msf4j-https"
    host: "0.0.0.0"
    port: 8043
    bossThreadPoolSize: 2
    workerThreadPoolSize: 250
    execHandlerThreadPoolSize: 60
    scheme: https
    keyStoreFile: wso2carbon.jks
    keyStorePass: wso2carbon
    certPass: wso2carbon

Once the Netty configuration file is written, it can be passed to the service startup command via a Java system property as follows:

java -jar -Dtransports.netty.conf=netty-transports.yml maven/customers-service-${project.version}.jar

Each microservice Docker image has been deployed on OpenShift using a Kubernetes deployment definition. The database configurations and credentials have been passed to the container using environment variables:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customers-deployment
  labels:
    app: customers
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: customers
    spec:
      containers:
      - image: imesh/wso2-microservices-poc-customers-service:0.3
        name: customers
        imagePullPolicy: IfNotPresent
        env:
        - name: JDBC_DRIVER
          value: com.mysql.jdbc.Driver
        - name: JDBC_URL
          value: jdbc:mysql://mysql-server:3306/customer_db
        - name: DB_USER
          value: root
        - name: DB_PASSWORD
          value: root
        ports:
        - containerPort: 8080
          name: customers
      serviceAccountName: wso2svcacct
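Inside the container, the service can pick these values up with System.getenv(). A minimal sketch (the helper class name and the fallback defaults are my assumptions, not the POC's actual code; the variable names match the deployment definition above):

```java
// Reads database configuration from the container environment, falling back
// to illustrative defaults when a variable is not set.
class DatabaseConfig {

    static String getenvOrDefault(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value != null && !value.isEmpty()) ? value : defaultValue;
    }

    // Variable names as passed by the Kubernetes deployment definition.
    final String driver = getenvOrDefault("JDBC_DRIVER", "com.mysql.jdbc.Driver");
    final String url = getenvOrDefault("JDBC_URL", "jdbc:mysql://localhost:3306/customer_db");
    final String user = getenvOrDefault("DB_USER", "root");
    final String password = getenvOrDefault("DB_PASSWORD", "root");
}
```

Keeping credentials out of the image and injecting them at deployment time is what lets the same image run unchanged across environments.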

Once a container is deployed, its ports are exposed to the API layer via Kubernetes services. Each microservice defines a Kubernetes service of the default type, ClusterIP, for internal routing. This covers integration-microservice-to-microservice communication, API-to-integration-microservice communication, and API-to-microservice communication. Once a service is created, a DNS entry is created with the given service name, and consequently other components of the system can communicate with the service using the service name and service port:

apiVersion: v1
kind: Service
metadata:
  name: customers
  labels:
    app: customers
spec:
  type: ClusterIP
  ports:
  - port: 8080
  selector:
    app: customers

Integration Microservices

The loan applications integration microservice has been implemented in BallerinaLang v0.93 to talk to the customers, credits, and loans services. When a customer applies for a loan, the integration microservice first calls the customers service to validate the customer. If the customer is valid, it then calls the credits service to verify the remaining credit amount. If the requested loan amount is less than or equal to the remaining credit amount, the loan application is approved and a loan record is created via the loans service. At the end of this process, a new credit record is created via the credits service. Info logs have been added to demonstrate how this integration logic works.
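The decision flow described above can be sketched as follows (plain Java rather than Ballerina, with the HTTP calls to the customers and credits services stubbed out as plain inputs):

```java
// Sketch of the loan application flow. In the POC, each check is an HTTP call
// to the corresponding microservice; here the results of those calls are
// passed in directly for illustration.
class LoanApplicationFlow {

    // Approve only if the customer is valid and the requested amount does not
    // exceed the remaining credit. In the POC, customerValid comes from the
    // customers service and remainingCredit from the credits service.
    static boolean processLoanApplication(boolean customerValid,
                                          double remainingCredit,
                                          double requestedAmount) {
        if (!customerValid) {
            return false; // customers service could not validate the customer
        }
        if (requestedAmount > remainingCredit) {
            return false; // insufficient remaining credit
        }
        // Approved: the POC would now create a loan record via the loans
        // service and a new credit record via the credits service.
        return true;
    }
}
```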

The Docker image of the loan applications integration microservice was created with a Dockerfile that defines the container user ID and sets the required filesystem permissions according to the OpenShift container image creation guidelines, instead of using the Ballerina docker build command:

FROM openjdk:8
MAINTAINER WSO2 Docker Maintainers "dev@wso2.org"
ENV DEBIAN_FRONTEND noninteractive

ARG USER=wso2user
ARG USER_ID=1000000000
ARG BAL_HOME=/ballerina/
RUN mkdir ${BAL_HOME}

COPY files/ballerina-0.93.zip ${BAL_HOME}
COPY loanApplicationsService.balx ${BAL_HOME}

WORKDIR ${BAL_HOME}

RUN unzip ballerina-0.93.zip \
    && useradd --system --uid ${USER_ID} --gid 0 --no-log-init ${USER} \
    && chown -R ${USER} ${BAL_HOME} \
    && chmod -R 0774 ${BAL_HOME} \
    && chgrp -R 0 ${BAL_HOME} \
    && chmod -R g=u ${BAL_HOME} \
    && rm ballerina-0.93.zip

USER ${USER_ID}

CMD ballerina-0.93/bin/ballerina run loanApplicationsService.balx

Similar to microservices, a Kubernetes deployment and a service definition have also been defined for the integration microservice. In addition, the URLs of underlying microservices have been provided to the container using environment variables:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: loan-applications-deployment
  labels:
    app: loan-applications
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: loan-applications
    spec:
      containers:
      - image: imesh/wso2-microservices-poc-loan-applications-service:0.3
        name: loan-applications
        imagePullPolicy: IfNotPresent
        env:
        - name: LOANS_SERVICE_URL
          value: http://loans:8080
        - name: CUSTOMERS_SERVICE_URL
          value: http://customers:8080
        - name: CREDITS_SERVICE_URL
          value: http://credits:8080
        ports:
        - containerPort: 9090
          name: default
      serviceAccountName: wso2svcacct

APIs

As illustrated in the solution architecture diagram, the loan applications integration microservice has been exposed to client applications via WSO2 API Manager as the loan applications API. In addition, the customers microservice has also been exposed as an API, named the customers API. The deployment process of the APIs has been automated using the API import/export web application and the API Manager CLI. The loan applications API definition, the customers API definition, and a collection of other sample API definitions have been exported and added to the Git repository as zip files. Once the entire deployment is complete, these APIs can be published via the API Manager CLI.

Postman API Client

A Postman project has been created for testing this solution, from the OAuth token generation step through to the loan application creation step. To start, refer to the Getting Started guide and deploy the solution in an OpenShift environment. During this process, check the logs of each container to ensure that they are healthy and active. Once all containers have started successfully, import the APIs found in the apis directory using the API Manager CLI. Thereafter, log in to the API Store, create an API application, and subscribe to both the loan applications API and the customers API. Once that is completed, install the Postman client and import the provided Postman project. It includes the following operations:

  • Generate OAuth Token using Password Grant Type
  • Refresh OAuth Token
  • Revoke Token
  • Create Customer
  • Get All Customers
  • Create Loan Application
  • Get All Loan Applications
  • Check Loan Application Status
  • Approve Loan Application
  • Reject Loan Application

First, generate an OAuth token using the password grant type, then update the given Postman environment with the generated token. Once this is done, all other API resources will be able to use this token without it having to be added to each resource. Next, create a customer by invoking the create customer resource. Once the customer is created, the response message will return a customer ID. Thereafter, use that customer ID to apply for a loan for a specific amount. The message format for each of these requests can be found in the Postman project itself. While invoking these APIs, monitor the console output of each container to observe its behaviour.
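Outside Postman, the same calls can be made from any HTTP client by sending the OAuth token in an Authorization header. A sketch in Java (the gateway host and the API context path are placeholders, not the POC's actual endpoints):

```java
import java.net.HttpURLConnection;
import java.net.URL;

class ApiClientSketch {

    // Prepares a GET request to the API gateway carrying the OAuth bearer
    // token. The URL is a placeholder; connect() is deliberately not called
    // here, so no network traffic occurs.
    static HttpURLConnection prepareRequest(String accessToken) throws Exception {
        URL url = new URL("https://gateway.example.com:8243/customers/1.0/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("Authorization", "Bearer " + accessToken);
        connection.setRequestProperty("Accept", "application/json");
        return connection;
    }
}
```

Because the token is carried per request, the same token obtained once via the password grant can be reused across all subscribed API resources until it expires, at which point the refresh token operation in the Postman project renews it.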