Helm Chart for Fabric (for Kubernetes)

A Failure-Success

Daz Wilkin
Google Cloud - Community
5 min read · Aug 31, 2018


A summary of my disappointing failure attempting to create a Helm Chart deployment for Hyperledger Fabric (for Kubernetes). The purpose of this short series of stories is to document the achievements, describe the limitations and — hopefully — provide a coherent summary of the work to serve as the project’s documentation.

While Fabric is complicated to deploy across multiple hosts, I think the bulk of the challenges I’ve faced lies in attempting to jam Fabric into a Helm Chart; Helm is where I’m more challenged. I’m very grateful to Yacov and Gari at IBM for their ever-patient and helpful guidance on Fabric. Thanks, both of you!

The Recommendation

I believe (strongly) that the Hyperledger Fabric project should commit to developing a working Helm Chart solution for Fabric. Fabric is a complex solution that changes significantly between its frequent releases. The core development team knows best how to configure and deploy the solution and is best able to keep the deployment solutions current with the core product.

Since Helm has become a de facto deployment tool for Kubernetes applications, I consider the Fabric team to be best placed to identify Helm limitations and/or tweak Fabric to accommodate them.

The Success

It works ;-)

2x 2-Peer Orgs


The Outstanding Problems

Unfortunately, getting chaincode instantiation to work requires manual intervention :-(

To instantiate chaincode, the peer uses docker-in-docker to create a Docker image of the chaincode deployment and then instantiates it:

NB The image name combines the network name (dev), the peer’s name (org1-peer0), the chaincode name (duvall) and version (1.0)
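As a rough sketch of that step, the instantiation is issued from the peer CLI and the resulting image appears in the Node’s Docker Engine; the orderer address, channel name and constructor arguments below are assumptions, not taken from this deployment:

# Channel name, constructor args and orderer address are placeholders;
# "duvall" and 1.0 are this deployment's chaincode name and version.
peer chaincode instantiate \
  -o x-hyperledger-fabric-orderer:7050 \
  -C mychannel \
  -n duvall \
  -v 1.0 \
  -c '{"Args":["init"]}'

# The Node's Docker Engine now holds an image named from the network (dev),
# the peer (org1-peer0), the chaincode (duvall) and the version (1.0):
docker images --filter "reference=dev-org1-peer0-duvall-1.0*"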

Monitoring Docker events for this image, we can detect when a container is created from the image and when the container is started:

NB The event output includes the long container ID (3e1b22…)
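For example, something like this (run against the Node’s Docker Engine) streams those events; narrowing by image is left as a comment since the full image name includes a generated hash suffix:

# Stream container create/start events; multiple values for the same filter key are OR'd.
# Optionally add --filter "image=<full chaincode image name>" to narrow the stream.
docker events \
  --filter 'type=container' \
  --filter 'event=create' \
  --filter 'event=start'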

Before the container dies, we can grab its logs:
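With the container ID reported by the event stream (a placeholder below), for example:

# Substitute the long container ID reported by `docker events`.
CONTAINER_ID="<container-id>"
docker logs --follow "${CONTAINER_ID}"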

The logs reveal the issue: the chaincode container is run in the context of the Docker Engine, not “in” Kubernetes, so its attempts to call back to the peer fail because the Docker Engine is unable to address the peer.

Hack #1: Edit the Node’s Hosts File

The peer is available on 10.121.1.224 for this deployment:

NB The address corresponds to the Cluster IP address for the Kubernetes Service that represents the peer.
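The value can be read from the peer’s Service; the Service name below is an assumption based on the chart’s [release-name]-[chart-name] naming:

# List the Services, then extract the peer Service's Cluster IP.
kubectl get services --namespace andromeda
kubectl get service x-hyperledger-fabric-org1-peer0 \
  --namespace andromeda \
  --output jsonpath='{.spec.clusterIP}'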

Editing the Node’s (!) /etc/hosts permits Docker Engine to correctly address the Peer:
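A minimal sketch of that manual intervention, assuming the chaincode addresses the peer by its Service name (the hostname below is hypothetical; use whichever name the chaincode fails to resolve):

# On the Node itself (not in a Pod): map the peer's hostname to its Cluster IP.
# The hostname is an assumption; substitute the name the chaincode cannot resolve.
echo "10.121.1.224 x-hyperledger-fabric-org1-peer0" | sudo tee --append /etc/hosts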

Now, if we rerun the instantiation, it works, and we can verify using the container’s logs:
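A quick check, assuming the chaincode container is named after the same dev-org1-peer0-duvall-1.0 pattern as its image:

# The name filter matches substrings, so the trailing hash need not be supplied.
docker ps --filter "name=dev-org1-peer0-duvall-1.0"
docker logs "$(docker ps --filter "name=dev-org1-peer0-duvall-1.0" --quiet)"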

It’s unclear to me how best to appropriately and dynamically ensure that the Docker Engine on every Node (plural) is able to correctly refer to the Peers (plural) that may be running on it at that instant.

Once the chaincode is instantiated, invoking its methods requires the peer to be able to access the orderer:
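For example, an invoke issued through the peer CLI; the channel name and arguments are placeholders, and orderer.example.com is the name defined in configtx.yaml (see Hack #2 below):

# Channel name and invoke arguments are placeholders; "duvall" is this deployment's chaincode.
peer chaincode invoke \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n duvall \
  -c '{"Args":["invoke","a","b","10"]}'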

And the container’s logs confirm it: all good.

Hack #2: orderer.example.com

Except that it required a second hack.

This second issue is a consequence of the naming that’s defined for the network and expressed in crypto-config.yaml and configtx.yaml, and of (my) difficulty in reflecting this appropriately in the cluster.

How does a Peer correctly address the Orderer when (my) Helm Chart results in the Orderer’s Service being available as e.g. x-hyperledger-fabric-orderer ([release-name]-[chart-name]-orderer) or fully-qualified as x-hyperledger-fabric-orderer.andromeda.svc.cluster.local?

The first hacky solution employed CoreDNS to provide example.com as a stub-domain (complementing Kubernetes’ on-cluster DNS resolution). My solution is over-engineered (see the alternative below) but it provided me an opportunity to learn and use CoreDNS in combination with Kubernetes. So it was a good learning experience, and CoreDNS is a sweet product.

With this model, a Peer’s attempt to resolve e.g. orderer.example.com is shipped to CoreDNS and CoreDNS is programmed by the Kubernetes Services list to resolve orderer to the correct IP address.
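In outline, on a kube-dns cluster this amounts to a stubDomains entry pointing example.com at the CoreDNS Service; the Cluster IP below is a placeholder:

# Forward *.example.com queries to a CoreDNS Service at a (placeholder) Cluster IP.
kubectl patch configmap kube-dns --namespace kube-system \
  --patch '{"data": {"stubDomains": "{\"example.com\": [\"10.0.0.100\"]}"}}'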

I documented this approach here and won’t duplicate it.

The alternative (and much easier) hacky solution is to use Kubernetes’ hostAliases (link). These are defined as part of the manifest and result in the kubelet (?) programming the Pod’s (not the Node’s) /etc/hosts file.

Here’s an example manifest:
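Something like the following, where the Orderer’s Cluster IP and the Pod details are placeholders (in the chart this would live in the peer’s Deployment template); only the hostAliases stanza matters here:

# Placeholder Pod spec; 10.121.1.225 stands in for the Orderer Service's Cluster IP.
cat <<EOF | kubectl apply --namespace andromeda --filename -
apiVersion: v1
kind: Pod
metadata:
  name: org1-peer0
spec:
  hostAliases:
  - ip: "10.121.1.225"
    hostnames:
    - "orderer.example.com"
  containers:
  - name: peer
    image: hyperledger/fabric-peer:1.2.0
EOF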

And here’s the Pod’s /etc/hosts:

NB The addition of orderer.example.com and a useful system-provided comment explaining why it’s there.
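This can be confirmed from inside the Pod (the Pod name below is a placeholder):

# Substitute the peer Pod's actual name.
kubectl exec org1-peer0 --namespace andromeda -- cat /etc/hosts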

In this approach, the Peer needs to be configured with the Orderer’s IP, and I’m challenged to solve that using Helm :-(

The Implementation

I’ll briefly summarize my programming notes early next week and reference these here. Hopefully, a summary of my journey will provide others with some guidance on how they may proceed and avoid pitfalls. Hopefully, my mistakes will elicit feedback from others to help me improve and overcome some of the challenges I faced.

Conclusion

I sub-titled this story a “Failure-Success” because, while I’m able to deploy Fabric to a Kubernetes cluster, I was unable to develop a Helm Chart that (a) supported the dynamism I wanted and (b) provided an end-to-end working deployment.

As my manager correctly explained, it’s best to fail fast. While it’s disappointing that I was unable to finish this project, hopefully this and related stories will help others, and perhaps the Fabric project can reach consensus on developing a Helm Chart (or other Kubernetes deployment) for Fabric.

That’s it for now.
