Helidon MP on Google Kubernetes Engine

Alexis MP
Google Cloud - Community
5 min read · Sep 20, 2018

Look ma, no YAML*, no kubectl!

With years of experience operating Kubernetes in production, Google now offers GKE (Google Kubernetes Engine), a fully-managed environment that lets developers run just about any tool or framework packaged as a container.

For Java developers, this is an opportunity to use a lightweight framework such as the newly-announced Helidon, and in particular its MicroProfile flavor (but really, the following steps apply to many similar frameworks, regardless of the language used).

Let’s start with some Maven scaffolding (using Cloud Shell in my case, since it comes with all the right prerequisites):

$ mvn archetype:generate -DinteractiveMode=false \
-DarchetypeGroupId=io.helidon.archetypes \
-DarchetypeArtifactId=helidon-quickstart-mp \
-DarchetypeVersion=0.9.1 \
-DgroupId=io.helidon.examples \
-DartifactId=quickstart-mp \
-Dpackage=io.helidon.examples.quickstart.mp

Just to make sure the generated sample application works properly, let’s build and dockerize the app, then test it locally:

$ cd quickstart-mp
$ mvn package
$ docker build -t quickstart-mp target
$ docker run --rm -p 8080:8080 quickstart-mp:latest

(alternatively, you could explore using Jib to containerize your Java applications)
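
As a rough sketch, assuming the jib-maven-plugin is declared in your pom.xml (and reusing the registry path from later in this post), building and pushing without a Dockerfile could look like this:

# Build and push the image straight from Maven, no Dockerfile or local Docker daemon needed
$ mvn compile jib:build -Dimage=eu.gcr.io/my-gcp-project/quickstart-mp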

At this point you can test using curl in another window, or use the Web Preview feature of Cloud Shell:

(make sure to append /greet to the URL)
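
For example, from a second terminal (the greeting below is the quickstart’s default response, assuming it hasn’t been customized):

# Query the sample endpoint exposed by the Helidon quickstart
$ curl http://localhost:8080/greet
{"message":"Hello World!"}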

We’re now ready to tag the image and push it to the Google Container Registry:

$ gcloud auth configure-docker
$ docker tag quickstart-mp eu.gcr.io/my-gcp-project/quickstart-mp
$ docker push eu.gcr.io/my-gcp-project/quickstart-mp

(note that I am using the European registry here. Also make sure to use your own project name in the URI above)
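
To double-check that the push succeeded, you can list the images in the registry (substituting your own project, as before):

# List the images stored in this project's European registry
$ gcloud container images list --repository=eu.gcr.io/my-gcp-project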

Time to deploy the image to a GKE cluster! From your GCP console, navigate to Kubernetes Engine > Clusters and select “Deploy container”. That’s right, there is no need to provision a cluster beforehand. You could of course deploy to an existing cluster instead.
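
If you already have clusters in your project, a quick way to see them from the command line is:

# List the GKE clusters already provisioned in the current project
$ gcloud container clusters list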

Simply choose “Select Google Container Registry image” and pick the image we’ve just pushed:

… and give the app a name (“hello-helidon” sounds like a reasonable choice).

Select a location for your GKE cluster (and, if you fancy, check out the YAML which you didn’t have to write yourself):

(again, I chose a location in Europe to be consistent with the registry choice made earlier). Finally, click “Deploy”.

Within a few minutes (most of that time spent creating the cluster), you should see your image deployed to a freshly-created GKE cluster:

The deployment used a three-node GKE cluster. If you’d rather use an existing cluster or a custom one (say, with TPUs attached), simply create it beforehand and select it as the target for the image deployment. Deploying to an existing cluster is much quicker since there is no cluster to set up.
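
For reference, creating such a cluster up front could look roughly like the following (the cluster name, zone, and size are illustrative; add node-pool or accelerator options as needed):

# Create a GKE cluster ahead of time, to be used as the deployment target
$ gcloud container clusters create my-custom-cluster \
--zone europe-west4-a \
--num-nodes 3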

We’re almost there! As encouraged by the message in the console, we’ll now expose our deployment by creating a service:

… mapping port 80 to the container’s target port 8080:

Again, you can check out the YAML that will be applied here, either for inspiration or for reuse from the command line or in your favorite “infrastructure-as-code” tool. When ready, click “Expose”.
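
For later reuse from a script, a rough kubectl equivalent of what the console does here might look like this (the deployment and service names match the ones used in this walkthrough; kubectl needs to be pointed at the cluster first, as shown at the end of this post):

# Expose the deployment as a LoadBalancer service, mapping port 80 to the container's 8080
$ kubectl expose deployment hello-helidon \
--name hello-helidon-service \
--type LoadBalancer \
--port 80 --target-port 8080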

Google Cloud will then provision a load balancer with an external IP address and provide you with an “External endpoint” to access your application.

Note that this was all done without having to write any YAML and without using the kubectl CLI, which I would think is a good first experience for many.

To recap, all of this was simply a two-step process:

  1. create a deployment by pointing to an image
  2. expose the deployment as a service

Both of these steps can naturally be scripted. Also, running on a production-grade GKE cluster means that you have all the power of the underlying tools at your disposal, as shown below:

$ gcloud container clusters get-credentials hello-helidon-cluster \
--zone europe-west4-d \
--project my-gcp-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for hello-helidon-cluster.
$ kubectl get service hello-helidon-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-09-17T14:08:50Z
  labels:
    app: hello-helidon
  name: hello-helidon-service
  namespace: default
  resourceVersion: "4012"
  selfLink: /api/v1/namespaces/default/services/hello-helidon-service
  uid: 33bead4c-ba83-11e8-bc3a-42010a8000b1
spec:
  clusterIP: 10.11.240.64
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31509
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-helidon
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.224.34.121
$ curl http://35.224.34.121/greet/GCP
{"message":"Hello GCP!"}

At this point, you can use rolling updates to deploy your perpetually-evolving image, (auto)scale your GKE cluster, and of course leverage the rest of Google Cloud Platform, from storage solutions to machine-learning APIs, and more.
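
For instance, rolling out a new image version and turning on node autoscaling could look roughly like this (the image tag, container name, and node bounds are purely illustrative):

# Trigger a rolling update of the deployment to a new image tag
$ kubectl set image deployment/hello-helidon hello-helidon=eu.gcr.io/my-gcp-project/quickstart-mp:v2
# Enable autoscaling on the cluster's default node pool
$ gcloud container clusters update hello-helidon-cluster \
--zone europe-west4-d \
--enable-autoscaling --min-nodes 1 --max-nodes 5 --node-pool default-pool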
