Dynamic load balancing for Docker-based Java EE microservices on Oracle Container Cloud

In this blog we will look at how to run a Docker-based Java EE microservice in HA/load-balanced mode using HAProxy, all on Oracle Container Cloud. Here is a quick overview:

  • Java EE microservice using Wildfly Swarm: a simple (JAX-RS based) REST application
  • HAProxy: we will use it for load balancing multiple instances of our application
  • Docker: our individual components, i.e. the microservice and the load balancer service, will be packaged as Docker images
  • Oracle Container Cloud: we will stack up our services and run them in a scalable + load balanced manner on Oracle Container Cloud

Application

The application is a very simple REST API using JAX-RS. It just fetches the price for a stock:

@GET
public String getQuote(@QueryParam("ticker") final String ticker) {
    Response response = ClientBuilder.newClient()
            .target("https://www.google.com/finance/info?q=NASDAQ:" + ticker)
            .request()
            .get();

    if (response.getStatus() != 200) {
        return String.format("Could not find price for ticker %s", ticker);
    }

    String tick = response.readEntity(String.class);
    tick = tick.replace("// [", "");
    tick = tick.replace("]", "");

    return StockDataParser.parse(tick) + " from " + System.getenv("OCCS_CONTAINER_NAME");
}
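
For context, here is a minimal sketch of how a method like this is typically wired up in a JAX-RS application. The class names and annotation values (StockResource, @Path("stocks"), @ApplicationPath("api")) are assumptions chosen to line up with the /api/stocks?ticker=... URL used later in this post, not code taken verbatim from the original project.

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// StockResource.java (hypothetical name): hosts the getQuote method shown above,
// making it available at /api/stocks
@Path("stocks")
public class StockResource {
    // ... getQuote(...) as shown in the snippet above ...
}

// RestApplication.java (hypothetical name): JAX-RS activator that mounts all
// REST resources under the /api prefix (in practice this lives in its own file)
@ApplicationPath("api")
class RestApplication extends Application {
}

With these annotations in place Wildfly Swarm detects and pulls in the JAX-RS fraction on its own; no web.xml is required, which is why the war plugin below sets failOnMissingWebXml to false.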

Wildfly Swarm is used as the (just enough) Java EE runtime. We build a simple WAR-based Java EE project and let the Swarm Maven plugin weave its magic: it automatically detects and configures the required fractions and creates a fat JAR from your WAR. Running mvn package produces target/occ-haproxy-swarm.jar, which is what we will package into the Docker image.

<build>
    <finalName>occ-haproxy</finalName>
    <plugins>
        <plugin>
            <groupId>org.wildfly.swarm</groupId>
            <artifactId>wildfly-swarm-plugin</artifactId>
            <version>1.0.0.Final</version>
            <executions>
                <execution>
                    <goals>
                        <goal>package</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
                <source>1.7</source>
                <target>1.7</target>
                <compilerArguments>
                    <endorseddirs>${endorsed.dir}</endorseddirs>
                </compilerArguments>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.3</version>
            <configuration>
                <failOnMissingWebXml>false</failOnMissingWebXml>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <version>2.6</version>
            <executions>
                <execution>
                    <phase>validate</phase>
                    <goals>
                        <goal>copy</goal>
                    </goals>
                    <configuration>
                        <outputDirectory>${endorsed.dir}</outputDirectory>
                        <silent>true</silent>
                        <artifactItems>
                            <artifactItem>
                                <groupId>javax</groupId>
                                <artifactId>javaee-endorsed-api</artifactId>
                                <version>7.0</version>
                                <type>jar</type>
                            </artifactItem>
                        </artifactItems>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Alternatives: you can also look into other Java EE fat JAR style frameworks such as Payara Micro, KumuluzEE, embedded Apache TomEE, etc.

Let’s dive into the nitty-gritty.

Dynamic load balancing

Horizontal scaling with Oracle Container Cloud is extremely simple: all you need to do is spawn additional instances of your application. This works well when a load balancer sits in front, so that consumers of the application (users or other applications) do not have to deal with the details of individual instances; they only need to know the load balancer coordinates (host/port). The problem is that an ordinary load balancer will not be aware of newly spawned application instances/containers. Oracle Container Cloud helps here by letting us create a unified Stack in which both the back end (the REST API in our example) and the (HAProxy) load balancer are configured as a single unit that can be managed and orchestrated easily, and it also provides the recipe for a dynamic HAProxy.

HAProxy on steroids

We will make use of the artifacts in the Oracle Container Cloud GitHub repository to build a specialized HAProxy Docker image on top of the customized Docker images for confd and runit. confd is a configuration management tool, and in this case it is used to discover our application instances on the fly. Think of it as a mini service discovery module in itself, one that queries the native service discovery within Oracle Container Cloud to detect new application instances.


Configuring our application to run on Oracle Container Cloud

Build Docker images

We will first build the required Docker images. For the demonstration, I will be using my public registry (abhirockzz) on Docker Hub. You can choose to use your own public or private registry

Please ensure that the Docker engine is up and running.

Build the application Docker image

Here is the Dockerfile:

FROM anapsix/alpine-java:latest 
RUN mkdir app
WORKDIR "/app"
COPY target/occ-haproxy-swarm.jar .
EXPOSE 8080
CMD ["java", "-jar", "occ-haproxy-swarm.jar"]

Run the following command

docker build -t <registry>/occ-wfly-haproxy:<tag> .
e.g. docker build -t abhirockzz/occ-wfly-haproxy:latest .

Build Docker images for runit, confd, haproxy

We will build the images in sequence since they depend on one another. To begin with,

  • clone the docker-images Github repository, and
  • edit the vars.mk (Makefile) in the ContainerCloud/images/build directory to enter your Docker Hub username

Now execute the below commands

cd ContainerCloud/images
cd runit
make image
cd ../confd
make image
cd ../nginx-lb
make image

Check your local Docker repository

Your local Docker repository should now have all the required images

Push Docker images

Now we will push the Docker images to a registry (in this case my public Docker Hub registry) so that they can be pulled by Oracle Container Cloud during deployment of our application stack. Execute the below commands.

Adjust the names (registry and repository) as per your setup

docker login 
docker push abhirockzz/occ-wfly-haproxy
docker push abhirockzz/haproxy
docker logout

Create the Stack

We will make use of a YAML configuration file to create the Stack; its format is very similar to docker-compose. In this specific example, notice how the service name (rest-api) is referenced in the lb (HAProxy) service.

This tells the HAProxy service which key in the Oracle Container Cloud service registry the confd service (as explained earlier) watches in order to auto-discover new application instances. Port 8080 is simply the application's exposed port; it is hard coded because it is also part of the key within the service registry.

Start the process by choosing New Stack from the Stacks menu

Click on the Advanced Editor and enter the YAML content

You should now see the individual services. Enter the Stack Name and click Save

Initiate Deployment

Go back to the Stacks menu, look for the newly created stack and click Deploy

In order to test the load balancing capabilities, we will deploy 3 instances of our rest-api (back end) service and stick with one instance of the lb (HAProxy) service.

After a few seconds, you should see all the containers in RUNNING state: in this case, three for our service and one for the HAProxy load balancer.

Check the Service Discovery menu to verify that each instance has an entry there. As explained earlier, this registry is introspected by the confd service to auto-detect new instances of our application (new instances are added to the registry automatically).


Test

We can access our application via HAProxy. All we need to know is the public IP of the host where our HAProxy container is running. We already mapped port 8886 for accessing the downstream applications.

Test things out with the following curl command

for i in `seq 1 9`; do curl -w "\n" -X GET "http://<haproxy-container-public-IP>:8886/api/stocks?ticker=ORCL"; done

All we do is invoke it 9 times, just to see the load balancing in action among the three instances. The container name appended to each response (via OCCS_CONTAINER_NAME) tells us which instance served the request; you should see the requests distributed evenly among the three instances.
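
If you would rather drive this check from Java than from curl, the following sketch does the same thing with the JAX-RS client API (the same API the service itself uses to call Google Finance). The host name haproxy.example.com is a placeholder; substitute the public IP of the host running your HAProxy container.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class LoadBalancingCheck {

    public static void main(String[] args) {
        // Placeholder: replace with the public IP of the host running the
        // HAProxy container; 8886 is the front-end port mapped earlier
        String base = "http://haproxy.example.com:8886/api/stocks";

        Client client = ClientBuilder.newClient();
        try {
            for (int i = 1; i <= 9; i++) {
                // Each response ends with "from <OCCS_CONTAINER_NAME>", so we can
                // see which container instance served the request
                String quote = client.target(base)
                        .queryParam("ticker", "ORCL")
                        .request()
                        .get(String.class);
                System.out.println(i + ": " + quote);
            }
        } finally {
            client.close();
        }
    }
}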

Scale up… and check again

You can simply scale up the stack and repeat the same test. Navigate to your deployment and click Change Scaling.

After some time, you will see additional instances of your application (five in our case). Execute the command again to verify that load balancing is working as expected.

That’s all for this blog post.

Cheers!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.