Java Serverless Services — Quarkus Microservice on OpenShift Container Platform

Grzegorz Smolko
AI+ Enterprise Engineering
6 min read · Sep 3, 2020

Learn how to deploy Quarkus microservice as serverless on the OpenShift Container Platform

The serverless computing architecture has the following attributes: the ability to run just the business logic code with a very small disk and memory footprint, no need for any “server” configuration, and the ability to scale down to zero after use, so the service only runs when it is called (it starts automatically, scales on demand, and can scale back down to zero). To support this you need a platform that can respond to such calls, as well as an efficient serverless application runtime that reacts extremely fast, so no delays are experienced.

In the Red Hat OpenShift Container Platform, serverless capabilities are provided by the Red Hat OpenShift Serverless service. It is based on the open source Knative project and allows applications to be packaged as Open Container Initiative (OCI) containers. OpenShift Serverless offers two components: Knative Serving and Knative Eventing. In this article we will focus on the Knative Serving component.

As mentioned above, you will also need a fast-responding runtime in your container. Quarkus running in native mode provides the Java platform with the serverless attributes described above.

In the previous article we discussed how to build a JAX-RS compatible microservice that runs on the Quarkus runtime. We will use that service and deploy it as a serverless service.

Enabling OpenShift Serverless

Before you can run your first serverless service on the OpenShift Container Platform, you need to perform a few installation steps, which are briefly discussed in this section. You can read more about OpenShift Serverless and its requirements here.

The first step is to install the OpenShift Serverless Operator. We will install it via the OpenShift Container Platform console, using OperatorHub.

Go to Operators > OperatorHub and select the OpenShift Serverless Operator.

Install the operator, accepting the default subscription configuration, which installs it in all namespaces on the cluster.

Wait a few moments until the operator is successfully installed. You can verify this on the Installed Operators page:

Once the operator is successfully installed, we can configure the Knative Serving environment.

Start by creating the knative-serving namespace, either via the console or by issuing the following command:

$ oc create namespace knative-serving

Next, go to the Installed Operators page in the console and select the OpenShift Serverless Operator. Make sure that the knative-serving project is selected. Then switch to the Knative Serving tab and click the Create Knative Serving button as shown below:

For a simple configuration, accept all the defaults and click Create. After that you will be redirected to the Knative Serving page again.
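If you prefer to script this step, accepting the console defaults is roughly equivalent to applying a minimal KnativeServing resource like the sketch below (the apiVersion may vary between operator releases, so check the version your operator installs):

# Minimal KnativeServing custom resource; the console defaults
# produce an equivalent object.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving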

Click the newly created resource and scroll down to the Conditions section to check whether it has deployed successfully:
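Alternatively, you can check the status conditions from the command line; when the installation has completed, all conditions should report True:

$ oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'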

If your installation was successful, after reloading the console page you should see the new Serverless menu in the navigation tree:

Deploying your first serverless application

Once the platform is fully configured, you can deploy your application. Your application needs to be packaged as a container and available via a registry (public, private, or the OpenShift internal registry). We are using the StockQuote application that we built in the previous article, whose image is already pushed to the OpenShift internal registry and available as an ImageStream. If you need details on how to build and deploy an application image to the OpenShift registry, you can check the previous article.

Create the YAML file that will define your service, svrless-stock-quote.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: svrless-stock-quote
spec:
  template:
    spec:
      containers:
        - image: >-
            image-registry.openshift-image-registry.svc:5000/stock-quote-quarkus/minimal-stock-quote-quarkus:latest
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: redis.url
            - name: MP_JWT_VERIFY_PUBLICKEY
              valueFrom:
                configMapKeyRef:
                  name: jwt-config
                  key: jwt-ca.crt
            - name: MP_JWT_VERIFY_ISSUER
              valueFrom:
                configMapKeyRef:
                  name: jwt-config
                  key: mp.jwt.verify.issuer
            - name: SMALLRYE_JWT_VERIFY_AUD
              valueFrom:
                configMapKeyRef:
                  name: jwt-config
                  key: smallrye.jwt.verify.aud

You only need to define the name and the image location. In addition, we customized the service by providing a specific container port and environment variables.

You can deploy it via the OCP web console, or with the oc or kn command-line tools.
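For example, with oc you can apply the file directly:

$ oc apply -f svrless-stock-quote.yaml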

We will deploy using the console. Navigate to Serverless > Services, and select your project.

Click the Create Service button and paste the contents of the YAML file there:

Click the Create button. Scroll down to the Conditions section to check whether the service was successfully deployed.

You can also see that a Revision was created. A Revision is an immutable object that reflects a point-in-time snapshot of the code and configuration.

A Route was also created, which defines your serverless application endpoint:
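If you have the kn CLI installed, you can inspect these objects from the command line as well; the route URL is also available via oc against the ksvc resource:

$ kn revision list
$ kn route list
$ oc get ksvc svrless-stock-quote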

The platform automatically created a deployment object for the application, and since the application has not yet received any traffic, it is scaled to zero.

Let's generate some load: use the given route to call the application several times. You will not see any output, since the service is protected by a JWT token, but you will notice that the deployment was automatically scaled to 1 and the service was called.
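As a quick sketch, you can drive the load and watch the scaling from a second terminal (the exact resource path, if any, depends on your application):

$ URL=$(oc get ksvc svrless-stock-quote -o jsonpath='{.status.url}')
$ for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" "$URL"; done
$ oc get pods -w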

After a short period of inactivity, it will automatically scale back down to zero.

Congratulations! You successfully deployed and tested a serverless application on the OpenShift Container Platform.

How about versioning of the application?

Very often you will deploy new versions of your application, and OpenShift Serverless gives you an easy way to do it. Whenever you change the definition of the Service object, a new Revision is created. We emulated this by adding an additional environment entry, but typically it would be a new image version.
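For instance, with the kn CLI, adding or changing an environment variable is enough to produce a new revision (APP_VERSION below is a hypothetical variable, used only to force a configuration change):

$ kn service update svrless-stock-quote --env APP_VERSION=v2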

Now, for example to canary test the new version, we will split the traffic coming to the application in a 90/10 ratio.

Switch to the Developer view of the console and select Topology. In the topology view, click your service and switch to the Resources tab.

Click Set Traffic Distribution to define the split percentages and tags for your versions, and select the specific revisions:

After you save the changes, the topology view will be updated to show the defined traffic rules.
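Under the covers, this updates the traffic section of the Knative Service. A sketch of the equivalent spec, assuming the two revisions are named svrless-stock-quote-00001 and svrless-stock-quote-00002 (your revision names will differ):

spec:
  traffic:
    - tag: v1
      revisionName: svrless-stock-quote-00001
      percent: 90
    - tag: v2
      revisionName: svrless-stock-quote-00002
      percent: 10

The kn CLI can apply the same split with kn service update svrless-stock-quote --traffic v1=90,v2=10, once the tags exist.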

You can now continue to use the old route endpoint, in which case traffic will be split according to the definition, or use the endpoints for a specific revision, e.g.:

http://v1-svrless-stock-quote-stock-quote-quarkus.gas-cluster1-a01ee4194ed985a1e32b1d96fd4ae346-0000.eu-de.containers.appdomain.cloud
http://v2-svrless-stock-quote-stock-quote-quarkus.gas-cluster1-a01ee4194ed985a1e32b1d96fd4ae346-0000.eu-de.containers.appdomain.cloud

After you finish testing, you may remove the old revision from the traffic definition.

Summary

In this blog, we showed how to configure the OpenShift Container Platform to run serverless workloads, how to deploy a containerized application as a serverless service, and how to split traffic among several application revisions.
