Running Axonserver Enterprise Edition on the Cloud (Kubernetes)

Juan Buhagiar
Published in ETPA
Jul 14, 2022

Recently, I started writing some stories about how Energy Trading Platform Amsterdam (ETPA) migrated to AWS. In my previous story, I described ETPA's journey to Kubernetes. One of the main elements of our migration was our event store, since we use the CQRS design pattern in our backend.

Axonserver running in the cloud. Photo by engin akyurt on Unsplash

Axonserver

We are using Axonserver as our event store. It goes hand in hand with Axon Framework, which provides a Java API for writing DDD, CQRS, and Event Sourcing applications. Years ago we were using MariaDB as an event store, but we ran into many difficulties and decided to migrate to Axonserver.

AxonIQ, the creators of Axonserver, describe it as follows:

Axon Server is designed to meet all of the infrastructure needs of an Axon application. Axon Server is the easiest and most robust way to meet these infrastructure concerns because it just works out of the box. Specifically, Axon Server supports purpose-built event storage, routing, manual scaling of tracking processors, event store queries, basic monitoring, basic security, and basic messaging interoperability.

We wanted to run Axonserver in our new Kubernetes environment, which meant a number of steps. First, we containerized Axonserver in a Docker image. Then we created the necessary Kubernetes structure using Helm charts, and finally we deployed the new Helm chart to our Kubernetes environment. Once Axonserver was running on Kubernetes, we started a data migration to retain the data from our previous VPS environment.
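To give an idea of the chart's shape, a minimal values.yaml could expose the knobs that the rest of this post touches on. The key names below are purely illustrative, not our actual chart:

# Hypothetical values.yaml for an axonserver-ee chart; key names are
# illustrative, not the chart ETPA actually uses.
replicaCount: 3

image:
  repository: axoniq/axonserver-enterprise
  tag: ""                     # set per release
  pullPolicy: Always

nodeGroup: ng-axon-server     # EKS node group the pods are pinned to

storage:
  className: aws-csi
  dataSize: 500Gi
  logSize: 10Gi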

The Axonserver cluster runs in a one-primary, two-secondary node configuration, and the applications connect to the primary node. We have tested scenarios where one node fails and observed that Kubernetes automatically creates a replacement node that joins the rest of the Axonserver cluster. We don't need any autoscaling capabilities to grow the cluster beyond this default configuration.
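How a new node finds and joins the cluster is driven by Axonserver's own configuration rather than by Kubernetes. As a sketch only, assuming the Enterprise Edition autocluster properties and the StatefulSet and Service names used later in this post, each node can be pointed at the first pod of the StatefulSet:

# Sketch: assumes Axon Server EE's autocluster properties and the headless
# service name used elsewhere in this post, not ETPA's exact configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: axonserver-cluster-properties
data:
  axonserver.properties: |
    # every node registers itself with pod 0 and joins these contexts
    axoniq.axonserver.autocluster.first=axonserver-ee-0.axonserver-ee-service
    axoniq.axonserver.autocluster.contexts=_admin,default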

The Docker Image

Axonserver comes in two flavors: Standard Edition (SE) and Enterprise Edition (EE). We are using the Enterprise Edition at ETPA because we cluster the event store for reliability; the Standard Edition does not allow clustering of the event store. There are public Docker images for Axonserver here, but we decided to create our own Docker image, as follows:

# Stage 1: prepare the user, directory layout, and Axonserver binaries.
FROM busybox as source

ARG AXONSERVER_VERSION

# Create a dedicated axon-server user and the directories Axonserver expects.
RUN addgroup -S -g 1001 axon-server \
    && adduser -S -u 1001 -G axon-server -h /axonserver -D axon-server \
    && mkdir -p /axonserver/config /axonserver/data /axonserver/ssl \
                /axonserver/security /axonserver/log /axonserver/exts \
    && chown -R axon-server:axon-server /axonserver

# Download the Enterprise Edition distribution and keep only the jar.
RUN wget https://download.axoniq.io/axonserver/axonserver-enterprise-${AXONSERVER_VERSION}-bin.zip -P /axonserver \
    && unzip /axonserver/axonserver-enterprise-${AXONSERVER_VERSION}-bin.zip -d /axonserver \
    && mv /axonserver/axonserver-enterprise-${AXONSERVER_VERSION}/axonserver.jar /axonserver/axonserver.jar \
    && rm -fR /axonserver/axonserver-enterprise-${AXONSERVER_VERSION}

# Stage 2: minimal Java 11 runtime image.
FROM gcr.io/distroless/java:11

# Bring over the user/group definitions and the prepared /axonserver tree.
COPY --from=source /etc/passwd /etc/group /etc/
COPY --from=source --chown=axon-server /axonserver /axonserver

# Copy a few busybox tools for debugging inside the otherwise shell-less image.
COPY --from=source /bin/sh /bin/sh
COPY --from=source /bin/tar /bin/tar
COPY --from=source /bin/ls /bin/ls
COPY --from=source /bin/vi /bin/vi

USER axon-server
WORKDIR /axonserver

VOLUME [ "/axonserver/config", "/axonserver/data", "/axonserver/ssl", \
         "/axonserver/security", "/axonserver/log", "/axonserver/exts" ]

# 8024: HTTP/GUI, 8124: gRPC for client applications, 8224: cluster-internal gRPC.
EXPOSE 8024/tcp 8124/tcp 8224/tcp

ENTRYPOINT java -Xmx12g -jar axonserver.jar

The StatefulSet Object

The main element of the Helm chart is the StatefulSet, a Kubernetes API object used to manage stateful applications. Our event store is a stateful application, and it is important that we manage its data, i.e. the volumes associated with the application. The StatefulSet uses the Docker image defined above to run the container as multiple replicas with the right compute and storage resources. The initial StatefulSet looks like this:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver-ee
spec:
  replicas: 3
  serviceName: axonserver-ee-service
  template:
    spec:
      containers:
        - name: axonserver-ee
          image: "axoniq/axonserver-enterprise"
          imagePullPolicy: Always
          ports:
            - name: gui
              containerPort: 8024
              protocol: TCP
            - name: grpc
              containerPort: 8124
              protocol: TCP
            - name: internal
              containerPort: 8224
              protocol: TCP
          env:
            ...
          volumeMounts:
            ...
          readinessProbe:
            ...
      affinity:
        ...
      volumes:
        ...
  volumeClaimTemplates:
    ...
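The serviceName above refers to the headless Service that gives each replica a stable DNS name. The chart therefore also defines something along these lines; this is a sketch that assumes the port layout shown above and a pod label matching the anti-affinity selector used later in this post:

# Sketch of the headless governing Service that serviceName points at;
# the selector label is an assumption based on the affinity rules below.
apiVersion: v1
kind: Service
metadata:
  name: axonserver-ee-service
spec:
  clusterIP: None            # headless: gives each pod a stable DNS entry
  selector:
    app.kubernetes.io/name: axonserver-ee
  ports:
    - name: gui
      port: 8024
    - name: grpc
      port: 8124
    - name: internal
      port: 8224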

Using the right Compute resources

We started off by running the event store with three replicas. Each replica should run on a specific node group and, for reliability reasons, each replica was assigned to a node in a different AWS availability zone. This can be achieved using affinity and anti-affinity rules as follows:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
                - axonserver-ee
        topologyKey: "topology.kubernetes.io/zone"
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: alpha.eksctl.io/nodegroup-name
              operator: In
              values:
                - ng-axon-server

The anti-affinity rule above states that replicas must not co-exist in the same topology.kubernetes.io/zone, a label that AWS adds to each node, while the node-affinity rule pins the pods to the ng-axon-server node group.
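Besides scheduling, using the right compute resources also means giving each pod guaranteed CPU and memory. As an illustration only, with figures that are assumptions sized around the -Xmx12g heap from the Dockerfile rather than our production values, the container spec can declare:

# Illustrative resource requests/limits for the axonserver-ee container;
# the numbers are assumptions, not ETPA's actual production settings.
resources:
  requests:
    cpu: "2"
    memory: 14Gi      # 12g heap plus headroom for off-heap and OS cache
  limits:
    memory: 14Gi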

Storing the data in Volumes

The main feature of StatefulSets is that they provide a sticky identity for each of their pods, which allows us to associate a volume with each replica. This solves the issue of losing data when a pod fails and has to be rescheduled, and is achieved by setting the volumeClaimTemplates property. More information can be found here.

We are using the AWS EBS CSI driver to manage the lifecycle of our Kubernetes volumes on AWS. It works well for creating volumes, taking snapshots for backups, and restoring those backups. It is also helpful when we want to create a Kubernetes test environment with prefilled data.

volumeClaimTemplates:
  - metadata:
      name: log
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: aws-csi
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: aws-csi
      resources:
        requests:
          storage: 500Gi
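The aws-csi storage class referenced above is not a built-in name; it is a StorageClass backed by the EBS CSI driver. A minimal definition could look like the sketch below, where the parameters are assumptions rather than our exact class:

# Sketch of a StorageClass backed by the AWS EBS CSI driver; the name matches
# the storageClassName used above, the parameters are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-csi
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # bind in the pod's availability zone
parameters:
  type: gp3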

Configuring the Axonserver

Configuration is another important element of running Axonserver inside Kubernetes. Since Axonserver is a Spring Boot-based application, configuration can be done in different ways. One way to set properties is through environment variables. This method is preferred for sensitive data, as we can use Kubernetes Secrets to supply the value of the environment variable.

First, create the secret object:

apiVersion: v1
kind: Secret
metadata:
  name: admin-token
type: Opaque
stringData:
  admin.token: "abcd"

In the StatefulSet, reference the secret for the environment variable:

env:
  - name: AXONIQ_AXONSERVER_ACCESSCONTROL_ADMIN_TOKEN
    valueFrom:
      secretKeyRef:
        name: admin-token
        key: admin.token

Another way we can set properties for Axonserver in Kubernetes is by mounting a configuration file through a config map.

Create a ConfigMap with the properties file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: axonserver-properties
data:
  axonserver.properties: |
    axoniq.axonserver.event.storage=./data
    axoniq.axonserver.ssl.enabled=true

Reference the config map using volumes and mounts in the Statefulset:

volumes:
  - name: properties
    configMap:
      name: axonserver-properties
volumeMounts:
  - name: properties
    mountPath: /axonserver/config/axonserver.properties
    subPath: axonserver.properties
    readOnly: true

I will continue to publish articles about the different parts of our journey to the cloud. If you would like to hear about a specific topic, please feel free to reach out! ETPA is also hiring! If you would like to be part of our journey, please apply for our positions through LinkedIn or by visiting our website.
