Setting up a Corda Kubernetes deployment network for developers using Docker

Henrik Carlström
5 min read · Mar 13, 2019


Do you want to set up a development environment for a set of Corda Nodes? If so, this blog post will help you achieve just that. I will explain how to set up a Kubernetes stack that deploys multiple Corda Nodes which can connect to each other. The deployment also contains a service providing a Network Map, an auto-accept Identity Service and a non-validating Notary. This makes it easy to set up a development environment shared by multiple developers, for example for smoke testing.

Since the separate service provides an auto-accept Identity Service, which lets any node join the network, it goes without saying that this is meant purely for in-house development use and not for production.

About Corda

Corda is an open source blockchain project, designed for business from the start. Only Corda allows you to build interoperable blockchain networks that transact in strict privacy. Corda’s smart contract technology allows businesses to transact directly, with value. See the Corda homepage (https://www.corda.net/) and the documentation.

For those who are familiar with version 3.x of Corda, you can also read Kat’s excellent blog post about the new features in version 4.0.

Related content

The related code can be found in this GitHub repository.

The setup I used was Docker for Windows with the built-in Kubernetes support.

As mentioned earlier, this setup relies on a separate service created by Stefano Franz (aka Roastario), named “spring-boot-network-map”.

In addition, the code will download the latest Corda 4.0 binaries, which at the time of writing are around 60 MB.

The Yo! CorDapp used as an example can be found in this GitHub repository.

Overview of repository contents

To get started, let’s have a look at the repository structure once cloned.

party-a
party-b
party-c

There are three folders, one for each participating Node in the network. The folders are identical and simply serve as placeholders for building the three Docker images, which share the same names as the folders.

fetch_corda_jar.*

This script downloads the Corda open-source version 4 binaries and copies them to each node directory. This should be executed first.
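In outline, such a script might look roughly like this. This is a minimal sketch, not the repository's actual script: the download URL and the target file name `corda.jar` are assumptions for illustration.

```shell
# Hypothetical sketch of fetch_corda_jar: download the Corda 4.0 open-source
# jar once (around 60 MB at the time of writing), then copy it into each
# node folder so the Docker builds can pick it up.
CORDA_VERSION="4.0"
CORDA_JAR="corda-${CORDA_VERSION}.jar"

# Download only if the jar is not already present.
if [ ! -f "${CORDA_JAR}" ]; then
  curl -fsSL -o "${CORDA_JAR}" \
    "https://repo1.maven.org/maven2/net/corda/corda/${CORDA_VERSION}/corda-${CORDA_VERSION}.jar" \
    || echo "download failed; place ${CORDA_JAR} here manually" >&2
fi

# Copy the jar into each node folder.
for node in party-a party-b party-c; do
  mkdir -p "${node}"
  if [ -f "${CORDA_JAR}" ]; then
    cp "${CORDA_JAR}" "${node}/corda.jar"
  fi
done
```

Downloading once and copying avoids fetching the ~60 MB jar three times.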

build-docker-nodes.*

This script first builds the Docker images for the Nodes, then launches the images in the Kubernetes cluster and prints the status of the cluster. Finally, the script outputs a series of commands for gaining Corda shell access.
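Step by step, such a script might be sketched as follows. The stack name "corda" is an assumption, and each cluster step is guarded with `|| true` so the sketch keeps going on a machine without Docker or kubectl.

```shell
# Hypothetical sketch of build-docker-nodes, assuming Docker Desktop with
# its built-in Kubernetes support enabled.

# 1. Build the three node images defined in docker-compose.yml.
docker-compose build 2>/dev/null || true

# 2. Deploy the compose file to the built-in Kubernetes cluster as a stack.
docker stack deploy --orchestrator kubernetes -c docker-compose.yml corda 2>/dev/null || true

# 3. Print the status of the cluster.
kubectl get pods 2>/dev/null || true

# 4. Print the Corda shell access command for each node.
SHELL_PORTS="2221 2222 2223"
for port in ${SHELL_PORTS}; do
  echo "ssh -o StrictHostKeyChecking=no user1@localhost -o UserKnownHostsFile=/dev/null -p ${port}"
done
```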

docker-compose.yml

The Docker Compose file is the main configuration file for instructing Docker and Kubernetes to set up the different Nodes of the network.
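The exact file lives in the repository; as a rough illustration (the service names, image name and published ports are assumptions based on the folder names and the SSH ports used later in this post), it has approximately this shape:

```yaml
version: "3.3"
services:
  # The spring-boot-network-map service: network map, auto-accept
  # identity service and non-validating notary in one place.
  network-map:
    image: spring-boot-network-map   # illustrative image name
    ports:
      - "8080:8080"
  party-a:
    build: ./party-a
    ports:
      - "2221:2221"   # Corda shell SSH access
  party-b:
    build: ./party-b
    ports:
      - "2222:2222"
  party-c:
    build: ./party-c
    ports:
      - "2223:2223"
```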

Getting down to business

After cloning the repository, fetching the Corda binaries and starting up the stack, it is time to get down to business. We will start by gaining shell access to the running Corda node for party-a.

This is achieved by using the following command:

ssh -o StrictHostKeyChecking=no user1@localhost -o UserKnownHostsFile=/dev/null -p 2221

Now we will be in the Corda shell, where we can execute commands for the node. We can also start new Flows, which are essentially the commands of the Corda world. In this case we will be using the Yo sample CorDapp, which simply lets nodes send Yo’s to other nodes.

We will initiate a new flow with the following command:

flow start YoFlow target: PartyB

This command sends a Yo from the current node (PartyA) to the recipient node (PartyB). Note that here we refer to the nodes by their X.500 names, not by the previously mentioned Docker image names, which are all lower-case. While a flow runs, the shell reports its progress and, once complete, whether it succeeded.
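To keep the two naming schemes straight, here is a small helper (hypothetical, not part of the repository) tying the lower-case Docker image names to the SSH ports used throughout this post:

```shell
# Map a node's folder/image name to the SSH port its Corda shell is
# published on (party-a -> 2221, party-b -> 2222, party-c -> 2223).
node_shell_port() {
  case "$1" in
    party-a) echo 2221 ;;
    party-b) echo 2222 ;;
    party-c) echo 2223 ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# Build the full shell-access command for a node.
corda_shell_cmd() {
  echo "ssh -o StrictHostKeyChecking=no user1@localhost -o UserKnownHostsFile=/dev/null -p $(node_shell_port "$1")"
}

corda_shell_cmd party-b   # prints the command for PartyB's shell (port 2222)
```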

Let’s try this again, this time from another node, party-c.

Let’s start by opening a new terminal and executing the following command:

ssh -o StrictHostKeyChecking=no user1@localhost -o UserKnownHostsFile=/dev/null -p 2223

Now that we are in the Corda node shell for PartyC, we can go ahead and send a Yo to PartyB again:

flow start YoFlow target: PartyB

At this point in time, we have had PartyA and PartyC send one Yo each to PartyB. Let us now check if PartyB has received those Yo’s or not.

Open a new terminal window and execute the following command:

ssh -o StrictHostKeyChecking=no user1@localhost -o UserKnownHostsFile=/dev/null -p 2222

Now that we are logged in to the Corda node shell for PartyB, we can go ahead and query the vault of the node to see if there are any Yo’s stored there or not.

Let us do this by executing the following command:

run vaultQuery contractStateType: net.corda.yo.YoState

When this vault query finishes it should return something like this:

states:
- state:
    data: !<net.corda.yo.YoState>
      origin: "O=PartyA, L=London, C=GB"
      target: "O=PartyB, L=New York, C=US"
      yo: "Yo!"
    contract: "net.corda.yo.YoContract"
    notary: "O=Notary Service, L=London, C=GB"
    encumbrance: null
    constraint: !<net.corda.core.contracts.HashAttachmentConstraint>
      attachmentId: "3CBC4AB8BEC18532052AB568D1589DF4FF038160592F04E9F56EEAB79FEA70A1"
  ref:
    txhash: "1693C0F6F5DEF2757ECB90C146589BFEB6BDAE919A6DF922B4283407F65602C5"
    index: 0
- state:
    data: !<net.corda.yo.YoState>
      origin: "O=PartyC, L=London, C=GB"
      target: "O=PartyB, L=New York, C=US"
      yo: "Yo!"
    contract: "net.corda.yo.YoContract"
    notary: "O=Notary Service, L=London, C=GB"
    encumbrance: null
    constraint: !<net.corda.core.contracts.HashAttachmentConstraint>
      attachmentId: "3CBC4AB8BEC18532052AB568D1589DF4FF038160592F04E9F56EEAB79FEA70A1"
  ref:
    txhash: "CA86C6FA2190092F9748BA4256C01FD61A1C2F4C95712A62559DB00869456443"
    index: 0
statesMetadata:
- ref:
    txhash: "1693C0F6F5DEF2757ECB90C146589BFEB6BDAE919A6DF922B4283407F65602C5"
    index: 0
  contractStateClassName: "net.corda.yo.YoState"
  recordedTime: "2019-03-05T15:51:47.526Z"
  consumedTime: null
  status: "UNCONSUMED"
  notary: "O=Notary Service, L=London, C=GB"
  lockId: null
  lockUpdateTime: null
  relevancyStatus: "RELEVANT"
  constraintInfo:
    constraint:
      attachmentId: "3CBC4AB8BEC18532052AB568D1589DF4FF038160592F04E9F56EEAB79FEA70A1"
- ref:
    txhash: "CA86C6FA2190092F9748BA4256C01FD61A1C2F4C95712A62559DB00869456443"
    index: 0
  contractStateClassName: "net.corda.yo.YoState"
  recordedTime: "2019-03-05T15:51:54.396Z"
  consumedTime: null
  status: "UNCONSUMED"
  notary: "O=Notary Service, L=London, C=GB"
  lockId: null
  lockUpdateTime: null
  relevancyStatus: "RELEVANT"
  constraintInfo:
    constraint:
      attachmentId: "3CBC4AB8BEC18532052AB568D1589DF4FF038160592F04E9F56EEAB79FEA70A1"
totalStatesAvailable: -1
stateTypes: "UNCONSUMED"
otherResults: []

If you look carefully you will see that there are two Yo’s stored in the vault, sent by two different parties, in this case PartyA and PartyC, which is exactly what we expected.

Review

Let us look back at what we have done here.

We started from a Git repository whose folder structure illustrates one possible layout for a deployment server. We then set up the folders with the appropriate dependencies, in this case the Corda binaries. After that we built Docker images from those folders and started the newly created images in the Kubernetes cluster as a new application stack. This stack is composed of three nodes and a service that provides connectivity between them.

Once we had the network up and running, we logged in to two nodes (PartyA and PartyC) and sent a Yo from each to the third node (PartyB). Then we verified that the Yo’s had arrived successfully in PartyB’s vault.

This example illustrates how developers can set up a deployment environment for testing their CorDapps before deploying them to a pre-production or production environment.

Please feel free to experiment with the setup used and create your own deployments.

P.S. R3 is hiring! See Dave Hudson’s post for more details.
