Running Corda DLT with Kubernetes

Markus Hjort
Coinmonks
May 16, 2018


At Tomorrow Labs we are building a digital real estate trading platform with a bunch of Finnish banks. Instead of building it as a traditional centralised application, the goal is to build a more decentralised system using blockchain technologies. The platform we are using is a Distributed Ledger Technology (DLT) product called Corda. This is a tutorial on how to set up a Corda test network using Docker containers running in a Kubernetes cluster.

We are using the latest Corda Open Source version, 3.1, with the H2 database. Our Docker image pipeline has two phases: the first step is to build a base Corda image without any CorDapps, and the second is to build a final image containing the CorDapp jars. The base Dockerfile looks like this:
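
A minimal sketch, assuming the Corda 3.1 node jar has been downloaded next to the Dockerfile and that everything lives under /opt/corda:

FROM openjdk:8-jre

# tmux is needed by run-corda.sh for running the node in an attachable session
RUN apt-get update && apt-get install -y tmux && rm -rf /var/lib/apt/lists/*

WORKDIR /opt/corda

# The Corda 3.1 node jar, downloaded beforehand
COPY corda.jar /opt/corda/corda.jar

# Startup script that runs the node inside tmux (see below)
COPY run-corda.sh /opt/corda/run-corda.sh
RUN chmod +x /opt/corda/run-corda.sh

EXPOSE 10002 10003

CMD ["/opt/corda/run-corda.sh"]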

That image is then tagged as tomorrow/corda:latest. For running Corda we use a shell script that starts the Corda node in a tmux session. That way it is easy to attach to the Corda shell later if needed. The run-corda.sh looks like this:
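
A sketch of such a script, assuming the paths from the Dockerfile above and a tmux session named corda:

#!/bin/bash

# Start the Corda node in a detached tmux session named "corda" so that
# it is possible to attach to the Corda shell later with:
#   tmux attach-session -t corda
tmux new-session -d -s corda "java -jar /opt/corda/corda.jar --base-directory /opt/corda"

# Keep the container alive for as long as the node's tmux session exists
while tmux has-session -t corda 2>/dev/null; do
  sleep 5
done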

The next step is to build the CorDapp jars. We use Gradle for that and copy the jars to the folder build/cordapps. Then we build the final image with the following Dockerfile:
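
Along these lines, assuming the base image tag from above:

FROM tomorrow/corda:latest

# Add the CorDapp jars built with Gradle to the node's cordapps folder
COPY build/cordapps/*.jar /opt/corda/cordapps/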

Great! Now we have a Docker image that can be run in Kubernetes. For running a Corda test network we have to create configurations for all the nodes and then generate certificates and a few other necessary files. See the instructions here for more details. In our case we need three configuration files: notary.conf, partya.conf and partyb.conf. This is a minimal Corda setup with two parties and one node acting as a non-validating notary. The configs are stored in a folder called config and they look like this:
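
A sketch of what the configs can look like; the legal names, hostnames and RPC credentials here are illustrative placeholders. partya.conf (partyb.conf is analogous):

myLegalName="O=PartyA,L=Helsinki,C=FI"
p2pAddress="partya:10002"
rpcSettings {
    address="0.0.0.0:10003"
    adminAddress="0.0.0.0:10004"
}
rpcUsers=[
    { user="user1", password="password", permissions=["ALL"] }
]
devMode=true

notary.conf additionally declares the non-validating notary service:

myLegalName="O=Notary,L=Helsinki,C=FI"
p2pAddress="notary:10002"
notary {
    validating=false
}
devMode=true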

Then we run the Corda Network Bootstrapper tool to generate the necessary files for the network. You can download the tool from here and run it with the following command:

java -jar network-bootstrapper-corda-3.0.jar config

We chose to use Kubernetes secrets for storing the configurations, certificates and other necessary files for the nodes. This is the command we use for creating the Kubernetes secret for the PartyA configs:
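
Something like this; the secret name is our own choice, and log4j2.xml is the logging configuration mentioned below:

kubectl create secret generic partya-config \
    --from-file=config/partya.conf \
    --from-file=log4j2.xml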

Note: we also use our own log4j configuration file so that it is possible to configure logging. The certificates for PartyA are stored with the following command:
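
Assuming the bootstrapper wrote PartyA's certificates under config/partya/certificates, the three keystores go into a secret along these lines:

kubectl create secret generic partya-certificates \
    --from-file=config/partya/certificates/nodekeystore.jks \
    --from-file=config/partya/certificates/sslkeystore.jks \
    --from-file=config/partya/certificates/truststore.jks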

The commands for creating the secrets for the other nodes are similar. The last tricky part is distributing the node info files to all nodes. The Corda 3.x network model requires that those files are available on every node, so we create one secret that will be used by all the nodes. It is created with the following command:
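
Roughly like this, with the placeholder hashes replaced by the real file names from the bootstrapper output:

kubectl create secret generic node-infos \
    --from-file=config/notary/nodeInfo-<notary-hash> \
    --from-file=config/partya/nodeInfo-<partya-hash> \
    --from-file=config/partyb/nodeInfo-<partyb-hash>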

Those cryptic-looking file names are node hashes the Corda bootstrapper generates based on your node configurations, so remember to change the names to match what the bootstrapper generated. Now we can finally configure the deployments for Kubernetes. This is an example for the PartyA node:
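
A sketch of such a deployment, assuming the secret names from above; the log4j configuration mount is left out for brevity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: partya
spec:
  replicas: 1
  selector:
    matchLabels:
      app: partya
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: partya
    spec:
      hostname: partya
      imagePullSecrets:
        - name: registry-credentials   # access to our private registry
      containers:
        - name: corda
          # the final image built earlier; must live in a registry the cluster can access
          image: dias/app:1.0.116
          ports:
            - containerPort: 10002   # p2p
            - containerPort: 10003   # RPC
          resources:
            requests:
              memory: "2Gi"
            limits:
              memory: "4Gi"          # probably a bit high, not tuned yet
          volumeMounts:
            # partya.conf from the secret becomes the node's node.conf
            - name: config
              mountPath: /opt/corda/node.conf
              subPath: partya.conf
            - name: certificates
              mountPath: /opt/corda/certificates
            # node infos from the shared secret, picked up by all nodes
            - name: node-infos
              mountPath: /opt/corda/additional-node-infos
      volumes:
        - name: config
          secret:
            secretName: partya-config
        - name: certificates
          secret:
            secretName: partya-certificates
        - name: node-infos
          secret:
            secretName: node-infos

In addition, giving each node a Service with a matching name (partya here) that exposes ports 10002 and 10003 is one way to make the p2pAddress hostnames resolvable inside the cluster.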

A few things to mention about this config:

  • For p2p networking it is enough to open port 10002, but we also need the RPC port 10003 for external access (we have a separate REST API for using Corda).
  • The hostname is very important to configure correctly.
  • This is not an HA config; the open source version of Corda does not support high availability. Using Recreate as the deployment strategy therefore makes the most sense, since it prevents two pods from trying to run at the same address at the same time.
  • The image should be changed to the image that was created earlier. In our case it was dias/app:1.0.116. It also has to point to a Docker registry that your Kubernetes cluster can access. We use a private registry and followed these instructions for configuring the access.
  • The memory limits are probably a bit high. We haven’t tuned them yet.

The configurations for the other nodes look similar. After creating them, the last step is simply to apply all the configs with the following command:
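
Assuming all the manifests are stored in a single folder, for example kubernetes/:

kubectl apply -f kubernetes/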

In theory the network map service (the notary node in this case) should be started before the other nodes so that they can connect to it when starting up. In practice the nodes wait for the connection for a while, and for us that has always been enough; it has simply been easier to start them all at once. The last step before profit is to test that the network setup works by connecting to any node. It works like this:
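
For example, by attaching to the tmux session inside the PartyA pod and querying the network map from the Corda shell; replace the pod name with your own:

# attach to the node's tmux session inside the pod
kubectl exec -it <partya-pod-name> -- tmux attach-session -t corda

# then, inside the Corda shell
run networkMapSnapshot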

This should show all the nodes connected to the network, in this case three. Remember to detach from the tmux session using Ctrl+b followed by d. The caveat with tmux is that you should not kill the session if you don't want to kill your app.

The setup was a bit long and not that simple. At some point R3 will provide a network that you can connect to; for now, however, Kubernetes is a good way to run your own test network. After the initial setup it is easy to update applications, and everything works smoothly. We run our test network on Google Kubernetes Engine, but we don't use any Google-specific features, so this setup should work with any Kubernetes implementation.

P.S. If I have time at some point, it might be a good idea to create a GitHub repo for the Docker and Kubernetes setup. It seems there are many people struggling with this setup.


Lead Software Engineer at Tomorrow Labs. Interested in Clojure and load testing. github.com/mhjort. My old blog can be found at jroller.com/mhjort.